Apr 24 23:59:58.002648 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 24 22:11:38 -00 2026 Apr 24 23:59:58.002688 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb Apr 24 23:59:58.007239 kernel: BIOS-provided physical RAM map: Apr 24 23:59:58.007259 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Apr 24 23:59:58.007271 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Apr 24 23:59:58.007283 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20 Apr 24 23:59:58.007297 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved Apr 24 23:59:58.007311 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Apr 24 23:59:58.007323 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Apr 24 23:59:58.007342 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Apr 24 23:59:58.007355 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Apr 24 23:59:58.007368 kernel: NX (Execute Disable) protection: active Apr 24 23:59:58.007381 kernel: APIC: Static calls initialized Apr 24 23:59:58.007394 kernel: efi: EFI v2.7 by EDK II Apr 24 23:59:58.007410 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x7701a018 Apr 24 23:59:58.007428 kernel: SMBIOS 2.7 present. 
Apr 24 23:59:58.007442 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Apr 24 23:59:58.007456 kernel: Hypervisor detected: KVM Apr 24 23:59:58.007470 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 24 23:59:58.007484 kernel: kvm-clock: using sched offset of 3786900462 cycles Apr 24 23:59:58.007500 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 24 23:59:58.007514 kernel: tsc: Detected 2499.996 MHz processor Apr 24 23:59:58.007529 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 24 23:59:58.007544 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 24 23:59:58.007559 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Apr 24 23:59:58.007577 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Apr 24 23:59:58.007591 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 24 23:59:58.007605 kernel: Using GB pages for direct mapping Apr 24 23:59:58.007619 kernel: Secure boot disabled Apr 24 23:59:58.007634 kernel: ACPI: Early table checksum verification disabled Apr 24 23:59:58.007648 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Apr 24 23:59:58.007662 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Apr 24 23:59:58.007677 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Apr 24 23:59:58.007691 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Apr 24 23:59:58.007709 kernel: ACPI: FACS 0x00000000789D0000 000040 Apr 24 23:59:58.007724 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Apr 24 23:59:58.007738 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Apr 24 23:59:58.007753 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Apr 24 23:59:58.007767 kernel: ACPI: 
SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Apr 24 23:59:58.007782 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Apr 24 23:59:58.007803 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Apr 24 23:59:58.007821 kernel: ACPI: SSDT 0x0000000078952000 0000D1 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Apr 24 23:59:58.007837 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Apr 24 23:59:58.007852 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Apr 24 23:59:58.007868 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Apr 24 23:59:58.007884 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Apr 24 23:59:58.007899 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Apr 24 23:59:58.007914 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Apr 24 23:59:58.007933 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] Apr 24 23:59:58.007957 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Apr 24 23:59:58.007972 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Apr 24 23:59:58.007988 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Apr 24 23:59:58.008022 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x789520d0] Apr 24 23:59:58.008036 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] Apr 24 23:59:58.008049 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Apr 24 23:59:58.008060 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Apr 24 23:59:58.008072 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Apr 24 23:59:58.008090 kernel: NUMA: Initialized distance table, cnt=1 Apr 24 23:59:58.008105 kernel: NODE_DATA(0) allocated [mem 0x7a8f0000-0x7a8f5fff] Apr 24 23:59:58.008120 kernel: Zone ranges: Apr 24 
23:59:58.008136 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 24 23:59:58.008151 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Apr 24 23:59:58.008167 kernel: Normal empty Apr 24 23:59:58.008182 kernel: Movable zone start for each node Apr 24 23:59:58.008196 kernel: Early memory node ranges Apr 24 23:59:58.008212 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Apr 24 23:59:58.008232 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Apr 24 23:59:58.008247 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Apr 24 23:59:58.008262 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Apr 24 23:59:58.008276 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 24 23:59:58.008292 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Apr 24 23:59:58.008307 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Apr 24 23:59:58.008323 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Apr 24 23:59:58.008337 kernel: ACPI: PM-Timer IO Port: 0xb008 Apr 24 23:59:58.008350 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 24 23:59:58.008366 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Apr 24 23:59:58.008385 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 24 23:59:58.008401 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 24 23:59:58.008416 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 24 23:59:58.008432 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 24 23:59:58.008447 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 24 23:59:58.008462 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 24 23:59:58.008477 kernel: TSC deadline timer available Apr 24 23:59:58.008492 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Apr 24 23:59:58.008506 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 
24 23:59:58.008524 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Apr 24 23:59:58.008539 kernel: Booting paravirtualized kernel on KVM Apr 24 23:59:58.008555 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 24 23:59:58.008571 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Apr 24 23:59:58.008586 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Apr 24 23:59:58.008602 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Apr 24 23:59:58.008616 kernel: pcpu-alloc: [0] 0 1 Apr 24 23:59:58.008632 kernel: kvm-guest: PV spinlocks enabled Apr 24 23:59:58.008647 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 24 23:59:58.008668 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb Apr 24 23:59:58.008684 kernel: random: crng init done Apr 24 23:59:58.008699 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 24 23:59:58.008715 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Apr 24 23:59:58.008730 kernel: Fallback order for Node 0: 0 Apr 24 23:59:58.008746 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 501318 Apr 24 23:59:58.008761 kernel: Policy zone: DMA32 Apr 24 23:59:58.008777 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 24 23:59:58.008796 kernel: Memory: 1874644K/2037804K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 162900K reserved, 0K cma-reserved) Apr 24 23:59:58.008812 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 24 23:59:58.008828 kernel: Kernel/User page tables isolation: enabled Apr 24 23:59:58.008844 kernel: ftrace: allocating 37996 entries in 149 pages Apr 24 23:59:58.008859 kernel: ftrace: allocated 149 pages with 4 groups Apr 24 23:59:58.008874 kernel: Dynamic Preempt: voluntary Apr 24 23:59:58.008889 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 24 23:59:58.008905 kernel: rcu: RCU event tracing is enabled. Apr 24 23:59:58.008921 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Apr 24 23:59:58.008940 kernel: Trampoline variant of Tasks RCU enabled. Apr 24 23:59:58.008956 kernel: Rude variant of Tasks RCU enabled. Apr 24 23:59:58.008972 kernel: Tracing variant of Tasks RCU enabled. Apr 24 23:59:58.008987 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 24 23:59:58.009003 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 24 23:59:58.010050 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Apr 24 23:59:58.010062 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Apr 24 23:59:58.010090 kernel: Console: colour dummy device 80x25 Apr 24 23:59:58.010103 kernel: printk: console [tty0] enabled Apr 24 23:59:58.010116 kernel: printk: console [ttyS0] enabled Apr 24 23:59:58.010130 kernel: ACPI: Core revision 20230628 Apr 24 23:59:58.010143 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Apr 24 23:59:58.010159 kernel: APIC: Switch to symmetric I/O mode setup Apr 24 23:59:58.010173 kernel: x2apic enabled Apr 24 23:59:58.010185 kernel: APIC: Switched APIC routing to: physical x2apic Apr 24 23:59:58.010201 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Apr 24 23:59:58.010215 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Apr 24 23:59:58.010233 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Apr 24 23:59:58.010249 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Apr 24 23:59:58.010262 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 24 23:59:58.010276 kernel: Spectre V2 : Mitigation: Retpolines Apr 24 23:59:58.010289 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 24 23:59:58.010303 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Apr 24 23:59:58.010318 kernel: RETBleed: Vulnerable Apr 24 23:59:58.010333 kernel: Speculative Store Bypass: Vulnerable Apr 24 23:59:58.010348 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Apr 24 23:59:58.010362 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 24 23:59:58.012545 kernel: GDS: Unknown: Dependent on hypervisor status Apr 24 23:59:58.012577 kernel: active return thunk: its_return_thunk Apr 24 23:59:58.012591 kernel: ITS: Mitigation: Aligned branch/return thunks Apr 24 23:59:58.012608 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 24 23:59:58.012625 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 24 23:59:58.012642 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 24 23:59:58.012658 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Apr 24 23:59:58.012673 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Apr 24 23:59:58.012688 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Apr 24 23:59:58.012705 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Apr 24 23:59:58.012721 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Apr 24 23:59:58.012745 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Apr 24 23:59:58.012762 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 24 23:59:58.012777 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Apr 24 23:59:58.012794 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Apr 24 23:59:58.012811 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Apr 24 23:59:58.012827 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Apr 24 23:59:58.012844 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Apr 24 23:59:58.012860 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Apr 24 23:59:58.012877 kernel: x86/fpu: Enabled 
xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. Apr 24 23:59:58.012894 kernel: Freeing SMP alternatives memory: 32K Apr 24 23:59:58.012911 kernel: pid_max: default: 32768 minimum: 301 Apr 24 23:59:58.012931 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 24 23:59:58.012947 kernel: landlock: Up and running. Apr 24 23:59:58.012963 kernel: SELinux: Initializing. Apr 24 23:59:58.012980 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Apr 24 23:59:58.012996 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Apr 24 23:59:58.013035 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8175M CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x4) Apr 24 23:59:58.013052 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 24 23:59:58.013069 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 24 23:59:58.013086 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 24 23:59:58.013103 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Apr 24 23:59:58.013125 kernel: signal: max sigframe size: 3632 Apr 24 23:59:58.013142 kernel: rcu: Hierarchical SRCU implementation. Apr 24 23:59:58.013160 kernel: rcu: Max phase no-delay instances is 400. Apr 24 23:59:58.013176 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 24 23:59:58.013193 kernel: smp: Bringing up secondary CPUs ... Apr 24 23:59:58.013210 kernel: smpboot: x86: Booting SMP configuration: Apr 24 23:59:58.013227 kernel: .... node #0, CPUs: #1 Apr 24 23:59:58.013245 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Apr 24 23:59:58.013263 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Apr 24 23:59:58.013283 kernel: smp: Brought up 1 node, 2 CPUs Apr 24 23:59:58.013300 kernel: smpboot: Max logical packages: 1 Apr 24 23:59:58.013318 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Apr 24 23:59:58.013335 kernel: devtmpfs: initialized Apr 24 23:59:58.013352 kernel: x86/mm: Memory block size: 128MB Apr 24 23:59:58.013369 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Apr 24 23:59:58.013386 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 24 23:59:58.013403 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 24 23:59:58.013420 kernel: pinctrl core: initialized pinctrl subsystem Apr 24 23:59:58.013440 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 24 23:59:58.013456 kernel: audit: initializing netlink subsys (disabled) Apr 24 23:59:58.013473 kernel: audit: type=2000 audit(1777075197.497:1): state=initialized audit_enabled=0 res=1 Apr 24 23:59:58.013490 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 24 23:59:58.013507 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 24 23:59:58.013524 kernel: cpuidle: using governor menu Apr 24 23:59:58.013541 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 24 23:59:58.013559 kernel: dca service started, version 1.12.1 Apr 24 23:59:58.013576 kernel: PCI: Using configuration type 1 for base access Apr 24 23:59:58.013596 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Apr 24 23:59:58.013613 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 24 23:59:58.013630 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 24 23:59:58.013648 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 24 23:59:58.013664 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 24 23:59:58.013681 kernel: ACPI: Added _OSI(Module Device) Apr 24 23:59:58.013698 kernel: ACPI: Added _OSI(Processor Device) Apr 24 23:59:58.013715 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 24 23:59:58.013728 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Apr 24 23:59:58.013744 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 24 23:59:58.013761 kernel: ACPI: Interpreter enabled Apr 24 23:59:58.013778 kernel: ACPI: PM: (supports S0 S5) Apr 24 23:59:58.013795 kernel: ACPI: Using IOAPIC for interrupt routing Apr 24 23:59:58.013812 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 24 23:59:58.013829 kernel: PCI: Using E820 reservations for host bridge windows Apr 24 23:59:58.013845 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Apr 24 23:59:58.013862 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 24 23:59:58.015170 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Apr 24 23:59:58.015352 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Apr 24 23:59:58.015494 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Apr 24 23:59:58.015514 kernel: acpiphp: Slot [3] registered Apr 24 23:59:58.015529 kernel: acpiphp: Slot [4] registered Apr 24 23:59:58.015544 kernel: acpiphp: Slot [5] registered Apr 24 23:59:58.015557 kernel: acpiphp: Slot [6] registered Apr 24 23:59:58.016073 kernel: acpiphp: Slot [7] registered Apr 24 23:59:58.016104 kernel: 
acpiphp: Slot [8] registered Apr 24 23:59:58.016118 kernel: acpiphp: Slot [9] registered Apr 24 23:59:58.016137 kernel: acpiphp: Slot [10] registered Apr 24 23:59:58.016158 kernel: acpiphp: Slot [11] registered Apr 24 23:59:58.016176 kernel: acpiphp: Slot [12] registered Apr 24 23:59:58.016194 kernel: acpiphp: Slot [13] registered Apr 24 23:59:58.016209 kernel: acpiphp: Slot [14] registered Apr 24 23:59:58.016224 kernel: acpiphp: Slot [15] registered Apr 24 23:59:58.016239 kernel: acpiphp: Slot [16] registered Apr 24 23:59:58.016254 kernel: acpiphp: Slot [17] registered Apr 24 23:59:58.016274 kernel: acpiphp: Slot [18] registered Apr 24 23:59:58.016289 kernel: acpiphp: Slot [19] registered Apr 24 23:59:58.016304 kernel: acpiphp: Slot [20] registered Apr 24 23:59:58.016320 kernel: acpiphp: Slot [21] registered Apr 24 23:59:58.016335 kernel: acpiphp: Slot [22] registered Apr 24 23:59:58.016350 kernel: acpiphp: Slot [23] registered Apr 24 23:59:58.016366 kernel: acpiphp: Slot [24] registered Apr 24 23:59:58.016381 kernel: acpiphp: Slot [25] registered Apr 24 23:59:58.016397 kernel: acpiphp: Slot [26] registered Apr 24 23:59:58.016415 kernel: acpiphp: Slot [27] registered Apr 24 23:59:58.016430 kernel: acpiphp: Slot [28] registered Apr 24 23:59:58.016446 kernel: acpiphp: Slot [29] registered Apr 24 23:59:58.016461 kernel: acpiphp: Slot [30] registered Apr 24 23:59:58.016476 kernel: acpiphp: Slot [31] registered Apr 24 23:59:58.016491 kernel: PCI host bridge to bus 0000:00 Apr 24 23:59:58.016678 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 24 23:59:58.016805 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 24 23:59:58.016930 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 24 23:59:58.018412 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Apr 24 23:59:58.018544 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Apr 24 
23:59:58.018664 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 24 23:59:58.018819 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Apr 24 23:59:58.018966 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Apr 24 23:59:58.020993 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Apr 24 23:59:58.021200 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Apr 24 23:59:58.021357 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Apr 24 23:59:58.021513 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Apr 24 23:59:58.021662 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Apr 24 23:59:58.021808 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Apr 24 23:59:58.021946 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Apr 24 23:59:58.022126 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Apr 24 23:59:58.022301 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Apr 24 23:59:58.022464 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref] Apr 24 23:59:58.022606 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Apr 24 23:59:58.022745 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb Apr 24 23:59:58.022897 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 24 23:59:58.023051 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Apr 24 23:59:58.023192 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff] Apr 24 23:59:58.023331 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Apr 24 23:59:58.023462 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff] Apr 24 23:59:58.023482 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 24 23:59:58.023499 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 24 23:59:58.023515 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 24 23:59:58.023531 
kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 24 23:59:58.023546 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Apr 24 23:59:58.023565 kernel: iommu: Default domain type: Translated Apr 24 23:59:58.023581 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 24 23:59:58.023596 kernel: efivars: Registered efivars operations Apr 24 23:59:58.023612 kernel: PCI: Using ACPI for IRQ routing Apr 24 23:59:58.023627 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 24 23:59:58.023643 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Apr 24 23:59:58.023658 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Apr 24 23:59:58.023784 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Apr 24 23:59:58.023920 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Apr 24 23:59:58.024100 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 24 23:59:58.024121 kernel: vgaarb: loaded Apr 24 23:59:58.024136 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Apr 24 23:59:58.024150 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Apr 24 23:59:58.024165 kernel: clocksource: Switched to clocksource kvm-clock Apr 24 23:59:58.024179 kernel: VFS: Disk quotas dquot_6.6.0 Apr 24 23:59:58.024193 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 24 23:59:58.024208 kernel: pnp: PnP ACPI init Apr 24 23:59:58.024223 kernel: pnp: PnP ACPI: found 5 devices Apr 24 23:59:58.024243 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 24 23:59:58.024257 kernel: NET: Registered PF_INET protocol family Apr 24 23:59:58.024272 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 24 23:59:58.024287 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Apr 24 23:59:58.024301 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 
bytes, linear) Apr 24 23:59:58.024316 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 24 23:59:58.024331 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Apr 24 23:59:58.024346 kernel: TCP: Hash tables configured (established 16384 bind 16384) Apr 24 23:59:58.024364 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Apr 24 23:59:58.024380 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Apr 24 23:59:58.024395 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 24 23:59:58.024409 kernel: NET: Registered PF_XDP protocol family Apr 24 23:59:58.024537 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 24 23:59:58.024657 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 24 23:59:58.024775 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 24 23:59:58.024892 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Apr 24 23:59:58.025065 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Apr 24 23:59:58.025209 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Apr 24 23:59:58.025229 kernel: PCI: CLS 0 bytes, default 64 Apr 24 23:59:58.025245 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Apr 24 23:59:58.025260 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Apr 24 23:59:58.025276 kernel: clocksource: Switched to clocksource tsc Apr 24 23:59:58.025291 kernel: Initialise system trusted keyrings Apr 24 23:59:58.025306 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Apr 24 23:59:58.025321 kernel: Key type asymmetric registered Apr 24 23:59:58.025340 kernel: Asymmetric key parser 'x509' registered Apr 24 23:59:58.025356 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 24 23:59:58.025372 kernel: io scheduler 
mq-deadline registered Apr 24 23:59:58.025387 kernel: io scheduler kyber registered Apr 24 23:59:58.025404 kernel: io scheduler bfq registered Apr 24 23:59:58.025420 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 24 23:59:58.025436 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 24 23:59:58.025452 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 24 23:59:58.025468 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 24 23:59:58.025487 kernel: i8042: Warning: Keylock active Apr 24 23:59:58.025503 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 24 23:59:58.025520 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 24 23:59:58.025669 kernel: rtc_cmos 00:00: RTC can wake from S4 Apr 24 23:59:58.025798 kernel: rtc_cmos 00:00: registered as rtc0 Apr 24 23:59:58.025919 kernel: rtc_cmos 00:00: setting system clock to 2026-04-24T23:59:57 UTC (1777075197) Apr 24 23:59:58.026124 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Apr 24 23:59:58.026144 kernel: intel_pstate: CPU model not supported Apr 24 23:59:58.026165 kernel: efifb: probing for efifb Apr 24 23:59:58.026180 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k Apr 24 23:59:58.026196 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Apr 24 23:59:58.026211 kernel: efifb: scrolling: redraw Apr 24 23:59:58.026227 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Apr 24 23:59:58.026242 kernel: Console: switching to colour frame buffer device 100x37 Apr 24 23:59:58.026258 kernel: fb0: EFI VGA frame buffer device Apr 24 23:59:58.026273 kernel: pstore: Using crash dump compression: deflate Apr 24 23:59:58.026289 kernel: pstore: Registered efi_pstore as persistent store backend Apr 24 23:59:58.026307 kernel: NET: Registered PF_INET6 protocol family Apr 24 23:59:58.026322 kernel: Segment Routing with IPv6 Apr 24 23:59:58.026337 kernel: In-situ OAM (IOAM) with IPv6 Apr 24 
23:59:58.026353 kernel: NET: Registered PF_PACKET protocol family Apr 24 23:59:58.026368 kernel: Key type dns_resolver registered Apr 24 23:59:58.026383 kernel: IPI shorthand broadcast: enabled Apr 24 23:59:58.026422 kernel: sched_clock: Marking stable (575002903, 187394976)->(879149182, -116751303) Apr 24 23:59:58.026440 kernel: registered taskstats version 1 Apr 24 23:59:58.026457 kernel: Loading compiled-in X.509 certificates Apr 24 23:59:58.026475 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 507f116e6718ec7535b55c873de10edf9b6fe124' Apr 24 23:59:58.026491 kernel: Key type .fscrypt registered Apr 24 23:59:58.026506 kernel: Key type fscrypt-provisioning registered Apr 24 23:59:58.026522 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 24 23:59:58.026538 kernel: ima: Allocated hash algorithm: sha1 Apr 24 23:59:58.026553 kernel: ima: No architecture policies found Apr 24 23:59:58.026569 kernel: clk: Disabling unused clocks Apr 24 23:59:58.026585 kernel: Freeing unused kernel image (initmem) memory: 42896K Apr 24 23:59:58.026601 kernel: Write protecting the kernel read-only data: 36864k Apr 24 23:59:58.026620 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 24 23:59:58.026636 kernel: Run /init as init process Apr 24 23:59:58.026652 kernel: with arguments: Apr 24 23:59:58.026668 kernel: /init Apr 24 23:59:58.026684 kernel: with environment: Apr 24 23:59:58.026699 kernel: HOME=/ Apr 24 23:59:58.026715 kernel: TERM=linux Apr 24 23:59:58.026733 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 24 23:59:58.026753 systemd[1]: Detected virtualization amazon. 
Apr 24 23:59:58.026773 systemd[1]: Detected architecture x86-64.
Apr 24 23:59:58.026789 systemd[1]: Running in initrd.
Apr 24 23:59:58.026805 systemd[1]: No hostname configured, using default hostname.
Apr 24 23:59:58.026820 systemd[1]: Hostname set to .
Apr 24 23:59:58.026835 systemd[1]: Initializing machine ID from VM UUID.
Apr 24 23:59:58.026851 systemd[1]: Queued start job for default target initrd.target.
Apr 24 23:59:58.026870 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:59:58.026889 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:59:58.026907 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 24 23:59:58.026924 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 24 23:59:58.026941 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 24 23:59:58.026961 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 24 23:59:58.026983 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 24 23:59:58.027000 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 24 23:59:58.027050 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:59:58.027068 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 24 23:59:58.027085 systemd[1]: Reached target paths.target - Path Units.
Apr 24 23:59:58.027102 systemd[1]: Reached target slices.target - Slice Units.
Apr 24 23:59:58.027118 systemd[1]: Reached target swap.target - Swaps.
Apr 24 23:59:58.027135 systemd[1]: Reached target timers.target - Timer Units.
Apr 24 23:59:58.027155 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 24 23:59:58.027173 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 24 23:59:58.027190 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 24 23:59:58.027206 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 24 23:59:58.027221 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:59:58.027236 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:59:58.027250 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:59:58.027263 systemd[1]: Reached target sockets.target - Socket Units.
Apr 24 23:59:58.027281 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 24 23:59:58.027296 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 24 23:59:58.027312 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 24 23:59:58.027330 systemd[1]: Starting systemd-fsck-usr.service...
Apr 24 23:59:58.027346 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 24 23:59:58.027360 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 24 23:59:58.027374 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:59:58.027389 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 24 23:59:58.027435 systemd-journald[179]: Collecting audit messages is disabled.
Apr 24 23:59:58.027477 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:59:58.027494 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 24 23:59:58.027512 systemd[1]: Finished systemd-fsck-usr.service.
Apr 24 23:59:58.027533 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 24 23:59:58.027551 systemd-journald[179]: Journal started
Apr 24 23:59:58.027587 systemd-journald[179]: Runtime Journal (/run/log/journal/ec292fdec79edb9546a2d1ca3ce846e7) is 4.7M, max 38.2M, 33.4M free.
Apr 24 23:59:58.033756 systemd-modules-load[180]: Inserted module 'overlay'
Apr 24 23:59:58.047030 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 24 23:59:58.047863 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:59:58.059255 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:59:58.065243 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 24 23:59:58.070500 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 23:59:58.084913 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 24 23:59:58.101651 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 24 23:59:58.101696 kernel: Bridge firewalling registered
Apr 24 23:59:58.093229 systemd-modules-load[180]: Inserted module 'br_netfilter'
Apr 24 23:59:58.103795 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:59:58.106542 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:59:58.109189 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:59:58.116306 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 24 23:59:58.120243 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 24 23:59:58.123219 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 23:59:58.140629 dracut-cmdline[207]: dracut-dracut-053
Apr 24 23:59:58.142845 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 24 23:59:58.144209 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb
Apr 24 23:59:58.156295 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 24 23:59:58.200354 systemd-resolved[229]: Positive Trust Anchors:
Apr 24 23:59:58.200376 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 24 23:59:58.200432 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 24 23:59:58.208788 systemd-resolved[229]: Defaulting to hostname 'linux'.
Apr 24 23:59:58.212268 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 24 23:59:58.213099 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 24 23:59:58.242054 kernel: SCSI subsystem initialized
Apr 24 23:59:58.253050 kernel: Loading iSCSI transport class v2.0-870.
Apr 24 23:59:58.265053 kernel: iscsi: registered transport (tcp)
Apr 24 23:59:58.287049 kernel: iscsi: registered transport (qla4xxx)
Apr 24 23:59:58.287128 kernel: QLogic iSCSI HBA Driver
Apr 24 23:59:58.330066 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 24 23:59:58.338418 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 24 23:59:58.366676 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 24 23:59:58.366757 kernel: device-mapper: uevent: version 1.0.3
Apr 24 23:59:58.366780 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 24 23:59:58.412058 kernel: raid6: avx512x4 gen() 17916 MB/s
Apr 24 23:59:58.430148 kernel: raid6: avx512x2 gen() 17986 MB/s
Apr 24 23:59:58.448144 kernel: raid6: avx512x1 gen() 17838 MB/s
Apr 24 23:59:58.466060 kernel: raid6: avx2x4 gen() 17863 MB/s
Apr 24 23:59:58.484051 kernel: raid6: avx2x2 gen() 18023 MB/s
Apr 24 23:59:58.502915 kernel: raid6: avx2x1 gen() 13725 MB/s
Apr 24 23:59:58.502994 kernel: raid6: using algorithm avx2x2 gen() 18023 MB/s
Apr 24 23:59:58.521998 kernel: raid6: .... xor() 17739 MB/s, rmw enabled
Apr 24 23:59:58.522086 kernel: raid6: using avx512x2 recovery algorithm
Apr 24 23:59:58.546041 kernel: xor: automatically using best checksumming function   avx
Apr 24 23:59:58.709043 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 24 23:59:58.720676 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 24 23:59:58.731347 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:59:58.745227 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Apr 24 23:59:58.750575 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 23:59:58.759481 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 24 23:59:58.779428 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation
Apr 24 23:59:58.812248 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 23:59:58.818212 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 24 23:59:58.870422 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:59:58.879298 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 24 23:59:58.908129 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 24 23:59:58.910871 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 24 23:59:58.912100 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:59:58.915150 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 24 23:59:58.923347 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 24 23:59:58.955941 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 24 23:59:58.980083 kernel: cryptd: max_cpu_qlen set to 1000
Apr 24 23:59:58.987777 kernel: ena 0000:00:05.0: ENA device version: 0.10
Apr 24 23:59:58.988106 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Apr 24 23:59:59.011033 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Apr 24 23:59:59.020267 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 24 23:59:59.025510 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 24 23:59:59.025546 kernel: AES CTR mode by8 optimization enabled
Apr 24 23:59:59.022865 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:59:59.028440 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:59:59.029084 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 23:59:59.029386 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:59:59.032599 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:59:59.043018 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:e6:e6:52:7d:83
Apr 24 23:59:59.046531 (udev-worker)[455]: Network interface NamePolicy= disabled on kernel command line.
Apr 24 23:59:59.051225 kernel: nvme nvme0: pci function 0000:00:04.0
Apr 24 23:59:59.051761 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:59:59.062554 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Apr 24 23:59:59.063413 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 23:59:59.064498 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:59:59.075033 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Apr 24 23:59:59.096761 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 24 23:59:59.096838 kernel: GPT:9289727 != 33554431
Apr 24 23:59:59.096858 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 24 23:59:59.096879 kernel: GPT:9289727 != 33554431
Apr 24 23:59:59.096895 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 24 23:59:59.096913 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 24 23:59:59.099320 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:59:59.117274 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:59:59.122288 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:59:59.139504 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:59:59.178045 kernel: BTRFS: device fsid 077bb4ac-fe88-409a-8f61-fdf28cadf681 devid 1 transid 31 /dev/nvme0n1p3 scanned by (udev-worker) (458)
Apr 24 23:59:59.199069 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (449)
Apr 24 23:59:59.271620 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Apr 24 23:59:59.284915 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Apr 24 23:59:59.299412 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Apr 24 23:59:59.300360 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Apr 24 23:59:59.308002 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 24 23:59:59.314267 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 24 23:59:59.325118 disk-uuid[629]: Primary Header is updated.
Apr 24 23:59:59.325118 disk-uuid[629]: Secondary Entries is updated.
Apr 24 23:59:59.325118 disk-uuid[629]: Secondary Header is updated.
Apr 24 23:59:59.338553 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 24 23:59:59.361426 kernel: GPT:disk_guids don't match.
Apr 24 23:59:59.361506 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 24 23:59:59.361520 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 24 23:59:59.382292 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 25 00:00:00.375472 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 25 00:00:00.377304 disk-uuid[630]: The operation has completed successfully.
Apr 25 00:00:01.283532 systemd[1]: disk-uuid.service: Deactivated successfully.
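The GPT warnings above ("Alternate GPT header not at the end of the disk", 9289727 != 33554431) are the usual symptom of an EBS volume that is larger than the image it was written from; disk-uuid.service rewrites the headers automatically. As a sketch only, the same repair could be done by hand with sgdisk (device name taken from the log; run against the wrong disk this is destructive):

```shell
# Relocate the backup GPT header/entries to the last LBA of the (grown) disk,
# then re-check. This mirrors what disk-uuid.service did above.
sgdisk --move-second-header /dev/nvme0n1
sgdisk --verify /dev/nvme0n1
```

`parted /dev/nvme0n1 print` would offer the equivalent interactive "Fix" prompt, which is what the kernel's "Use GNU Parted to correct GPT errors" hint refers to.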
Apr 25 00:00:01.285232 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 25 00:00:01.525322 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 25 00:00:01.582601 sh[973]: Success
Apr 25 00:00:01.672041 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Apr 25 00:00:01.961633 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 25 00:00:02.001286 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 25 00:00:02.012832 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 25 00:00:02.261055 kernel: BTRFS info (device dm-0): first mount of filesystem 077bb4ac-fe88-409a-8f61-fdf28cadf681
Apr 25 00:00:02.261809 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 25 00:00:02.280444 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 25 00:00:02.321345 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 25 00:00:02.324574 kernel: BTRFS info (device dm-0): using free space tree
Apr 25 00:00:02.455685 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 25 00:00:02.471665 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 25 00:00:02.487977 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 25 00:00:02.519303 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 25 00:00:02.541118 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
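verity-setup.service creates the read-only /dev/mapper/usr device from the USR-A partition, verified against the `verity.usrhash=` root hash on the kernel command line. A rough hand-run equivalent, as a sketch only (it assumes Flatcar's layout, where the dm-verity hash tree lives on the same partition as the data; the hash-tree offset is not recoverable from this log and is a placeholder):

```shell
# veritysetup open <data_device> <name> <hash_device> <root_hash>
# PARTUUID and root hash are copied verbatim from the kernel command line above.
veritysetup open \
  /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132 usr \
  /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132 \
  c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb \
  --hash-offset=<offset-of-appended-hash-tree>   # placeholder, not in the log
```

Any block that fails to verify against the Merkle tree then produces an I/O error instead of returning tampered data, which is why /usr can be mounted read-only with `mount.usrflags=ro`.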
Apr 25 00:00:02.667710 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 25 00:00:02.679867 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 25 00:00:02.679895 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 25 00:00:02.706112 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 25 00:00:02.802184 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 25 00:00:02.802504 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 25 00:00:02.840405 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 25 00:00:02.866605 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 25 00:00:03.199407 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 25 00:00:03.207085 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 25 00:00:03.231449 ignition[1078]: Ignition 2.19.0
Apr 25 00:00:03.231465 ignition[1078]: Stage: fetch-offline
Apr 25 00:00:03.231896 ignition[1078]: no configs at "/usr/lib/ignition/base.d"
Apr 25 00:00:03.231911 ignition[1078]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 25 00:00:03.234782 ignition[1078]: Ignition finished successfully
Apr 25 00:00:03.246816 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 25 00:00:03.351824 systemd-networkd[1170]: lo: Link UP
Apr 25 00:00:03.351847 systemd-networkd[1170]: lo: Gained carrier
Apr 25 00:00:03.383118 systemd-networkd[1170]: Enumeration completed
Apr 25 00:00:03.383931 systemd-networkd[1170]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 25 00:00:03.383937 systemd-networkd[1170]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 25 00:00:03.413828 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 25 00:00:03.424446 systemd-networkd[1170]: eth0: Link UP
Apr 25 00:00:03.424453 systemd-networkd[1170]: eth0: Gained carrier
Apr 25 00:00:03.424470 systemd-networkd[1170]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 25 00:00:03.441931 systemd[1]: Reached target network.target - Network.
Apr 25 00:00:03.483498 systemd-networkd[1170]: eth0: DHCPv4 address 172.31.27.158/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 25 00:00:03.496413 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 25 00:00:03.671159 ignition[1173]: Ignition 2.19.0
Apr 25 00:00:03.671178 ignition[1173]: Stage: fetch
Apr 25 00:00:03.671708 ignition[1173]: no configs at "/usr/lib/ignition/base.d"
Apr 25 00:00:03.671723 ignition[1173]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 25 00:00:03.671857 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 25 00:00:03.743419 ignition[1173]: PUT result: OK
Apr 25 00:00:03.761640 ignition[1173]: parsed url from cmdline: ""
Apr 25 00:00:03.761657 ignition[1173]: no config URL provided
Apr 25 00:00:03.761669 ignition[1173]: reading system config file "/usr/lib/ignition/user.ign"
Apr 25 00:00:03.761686 ignition[1173]: no config at "/usr/lib/ignition/user.ign"
Apr 25 00:00:03.761713 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 25 00:00:03.766636 ignition[1173]: PUT result: OK
Apr 25 00:00:03.766712 ignition[1173]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Apr 25 00:00:03.770319 ignition[1173]: GET result: OK
Apr 25 00:00:03.770468 ignition[1173]: parsing config with SHA512: ac110028234997557a610db78806f43013207ced2b206d278f636ac54db304da8a57d780982be2a09e878bda671fc363f380405af4e375b2969cb4034a011b83
Apr 25 00:00:03.827772 unknown[1173]: fetched base config from "system"
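The PUT/GET pairs in the fetch stage above are the IMDSv2 session-token exchange: Ignition first PUTs for a token, then GETs the user-data with that token. A minimal sketch of the same exchange with curl (endpoints and API date copied verbatim from the log; the TTL value is an arbitrary choice, and this only works from inside an EC2 instance):

```shell
# Step 1: obtain an IMDSv2 session token (the "PUT ... attempt #1" above).
TOKEN=$(curl -sf -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")

# Step 2: fetch the user-data Ignition parsed (the "GET ... attempt #1" above).
curl -sf -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/2019-10-01/user-data"
```

Note the log shows the user-data only as a SHA512 of the parsed config; its contents are not recoverable from this boot log.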
Apr 25 00:00:03.827793 unknown[1173]: fetched base config from "system"
Apr 25 00:00:03.835296 ignition[1173]: fetch: fetch complete
Apr 25 00:00:03.827806 unknown[1173]: fetched user config from "aws"
Apr 25 00:00:03.835429 ignition[1173]: fetch: fetch passed
Apr 25 00:00:03.840739 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 25 00:00:03.835513 ignition[1173]: Ignition finished successfully
Apr 25 00:00:03.872289 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 25 00:00:04.016545 ignition[1180]: Ignition 2.19.0
Apr 25 00:00:04.016569 ignition[1180]: Stage: kargs
Apr 25 00:00:04.017106 ignition[1180]: no configs at "/usr/lib/ignition/base.d"
Apr 25 00:00:04.017121 ignition[1180]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 25 00:00:04.017248 ignition[1180]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 25 00:00:04.028526 ignition[1180]: PUT result: OK
Apr 25 00:00:04.041691 ignition[1180]: kargs: kargs passed
Apr 25 00:00:04.042224 ignition[1180]: Ignition finished successfully
Apr 25 00:00:04.052077 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 25 00:00:04.061266 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 25 00:00:04.113987 ignition[1186]: Ignition 2.19.0
Apr 25 00:00:04.114024 ignition[1186]: Stage: disks
Apr 25 00:00:04.114636 ignition[1186]: no configs at "/usr/lib/ignition/base.d"
Apr 25 00:00:04.114652 ignition[1186]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 25 00:00:04.115069 ignition[1186]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 25 00:00:04.128233 ignition[1186]: PUT result: OK
Apr 25 00:00:04.176081 ignition[1186]: disks: disks passed
Apr 25 00:00:04.176195 ignition[1186]: Ignition finished successfully
Apr 25 00:00:04.202610 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 25 00:00:04.224047 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 25 00:00:04.224528 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 25 00:00:04.225791 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 25 00:00:04.229872 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 25 00:00:04.235499 systemd[1]: Reached target basic.target - Basic System.
Apr 25 00:00:04.261978 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 25 00:00:04.401692 systemd-fsck[1195]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 25 00:00:04.418394 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 25 00:00:04.474611 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 25 00:00:04.637172 systemd-networkd[1170]: eth0: Gained IPv6LL
Apr 25 00:00:05.137055 kernel: EXT4-fs (nvme0n1p9): mounted filesystem ae73d4a7-3ef8-4c50-8348-4aeb952085ba r/w with ordered data mode. Quota mode: none.
Apr 25 00:00:05.141978 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 25 00:00:05.150115 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 25 00:00:05.170179 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 25 00:00:05.175160 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 25 00:00:05.190660 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 25 00:00:05.190738 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 25 00:00:05.190776 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 25 00:00:05.275686 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1214)
Apr 25 00:00:05.284997 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 25 00:00:05.346528 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 25 00:00:05.346568 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 25 00:00:05.349287 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 25 00:00:05.372053 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 25 00:00:05.387548 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 25 00:00:05.401267 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 25 00:00:06.037027 initrd-setup-root[1238]: cut: /sysroot/etc/passwd: No such file or directory
Apr 25 00:00:06.050888 initrd-setup-root[1245]: cut: /sysroot/etc/group: No such file or directory
Apr 25 00:00:06.064435 initrd-setup-root[1252]: cut: /sysroot/etc/shadow: No such file or directory
Apr 25 00:00:06.070823 initrd-setup-root[1259]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 25 00:00:06.635691 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 25 00:00:06.655202 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 25 00:00:06.673922 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 25 00:00:06.732960 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 25 00:00:06.734807 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 25 00:00:06.821824 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 25 00:00:06.832594 ignition[1327]: INFO : Ignition 2.19.0
Apr 25 00:00:06.832594 ignition[1327]: INFO : Stage: mount
Apr 25 00:00:06.834354 ignition[1327]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 25 00:00:06.834354 ignition[1327]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 25 00:00:06.834354 ignition[1327]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 25 00:00:06.834354 ignition[1327]: INFO : PUT result: OK
Apr 25 00:00:06.837203 ignition[1327]: INFO : mount: mount passed
Apr 25 00:00:06.837896 ignition[1327]: INFO : Ignition finished successfully
Apr 25 00:00:06.839883 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 25 00:00:06.846293 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 25 00:00:06.866299 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 25 00:00:06.965057 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1338)
Apr 25 00:00:06.989999 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 25 00:00:06.990093 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 25 00:00:07.006033 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 25 00:00:07.033137 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 25 00:00:07.046036 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 25 00:00:07.147243 ignition[1355]: INFO : Ignition 2.19.0
Apr 25 00:00:07.147243 ignition[1355]: INFO : Stage: files
Apr 25 00:00:07.161707 ignition[1355]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 25 00:00:07.161707 ignition[1355]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 25 00:00:07.161707 ignition[1355]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 25 00:00:07.161707 ignition[1355]: INFO : PUT result: OK
Apr 25 00:00:07.183940 ignition[1355]: DEBUG : files: compiled without relabeling support, skipping
Apr 25 00:00:07.185376 ignition[1355]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 25 00:00:07.185376 ignition[1355]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 25 00:00:07.204151 ignition[1355]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 25 00:00:07.207743 ignition[1355]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 25 00:00:07.207743 ignition[1355]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 25 00:00:07.204809 unknown[1355]: wrote ssh authorized keys file for user: core
Apr 25 00:00:07.228653 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 25 00:00:07.228653 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 25 00:00:07.382633 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 25 00:00:07.709470 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 25 00:00:07.738035 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 25 00:00:07.738035 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 25 00:00:07.738035 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 25 00:00:07.738035 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 25 00:00:07.738035 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 25 00:00:07.738035 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 25 00:00:07.738035 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 25 00:00:07.738035 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 25 00:00:07.898815 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 25 00:00:07.898815 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 25 00:00:07.898815 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 25 00:00:07.898815 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 25 00:00:07.898815 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 25 00:00:07.898815 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 25 00:00:08.478768 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 25 00:00:09.415089 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 25 00:00:09.415089 ignition[1355]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 25 00:00:09.419985 ignition[1355]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 25 00:00:09.419985 ignition[1355]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 25 00:00:09.419985 ignition[1355]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 25 00:00:09.419985 ignition[1355]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Apr 25 00:00:09.419985 ignition[1355]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Apr 25 00:00:09.419985 ignition[1355]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 25 00:00:09.419985 ignition[1355]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 25 00:00:09.419985 ignition[1355]: INFO : files: files passed
Apr 25 00:00:09.433225 ignition[1355]: INFO : Ignition finished successfully
Apr 25 00:00:09.423312 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 25 00:00:09.434373 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
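The files stage above writes SSH keys for "core", a Helm tarball, several manifests, update.conf, and the Kubernetes sysext, then enables prepare-helm.service. The actual user config is not recoverable from this log (only its SHA512 appears), but as a purely hypothetical illustration, a Butane source producing operations of this shape might look like:

```yaml
# Hypothetical Butane config (flatcar variant); every value here is an
# example, not recovered from the instance.
variant: flatcar
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAAC3... example-key
storage:
  files:
    - path: /opt/helm-v3.17.3-linux-amd64.tar.gz
      contents:
        source: https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz
  links:
    - path: /etc/extensions/kubernetes.raw
      target: /opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw
systemd:
  units:
    - name: prepare-helm.service
      enabled: true
```

Butane renders this to the Ignition JSON that the fetch stage downloads and the files stage applies against /sysroot.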
Apr 25 00:00:09.439228 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 25 00:00:09.444352 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 25 00:00:09.444496 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 25 00:00:09.511549 initrd-setup-root-after-ignition[1384]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 25 00:00:09.511549 initrd-setup-root-after-ignition[1384]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 25 00:00:09.524452 initrd-setup-root-after-ignition[1388]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 25 00:00:09.526496 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 25 00:00:09.528296 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 25 00:00:09.541147 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 25 00:00:09.697133 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 25 00:00:09.697556 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 25 00:00:09.699738 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 25 00:00:09.701388 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 25 00:00:09.702485 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 25 00:00:09.710571 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 25 00:00:09.744546 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 25 00:00:09.763328 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 25 00:00:09.819367 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 25 00:00:09.821187 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 25 00:00:09.822443 systemd[1]: Stopped target timers.target - Timer Units.
Apr 25 00:00:09.826703 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 25 00:00:09.827631 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 25 00:00:09.830314 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 25 00:00:09.834508 systemd[1]: Stopped target basic.target - Basic System.
Apr 25 00:00:09.835633 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 25 00:00:09.838242 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 25 00:00:09.839380 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 25 00:00:09.840403 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 25 00:00:09.841573 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 25 00:00:09.842512 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 25 00:00:09.844186 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 25 00:00:09.845105 systemd[1]: Stopped target swap.target - Swaps.
Apr 25 00:00:09.846308 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 25 00:00:09.846646 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 25 00:00:09.847952 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 25 00:00:09.848958 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 25 00:00:09.849690 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 25 00:00:09.849849 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 25 00:00:09.850599 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 25 00:00:09.850948 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 25 00:00:09.852414 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 25 00:00:09.852761 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 25 00:00:09.854295 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 25 00:00:09.854656 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 25 00:00:09.865478 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 25 00:00:09.871588 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 25 00:00:09.875132 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 25 00:00:09.875591 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 25 00:00:09.877496 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 25 00:00:09.879304 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 25 00:00:09.895444 ignition[1408]: INFO : Ignition 2.19.0
Apr 25 00:00:09.895648 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 25 00:00:09.895874 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 25 00:00:09.901832 ignition[1408]: INFO : Stage: umount
Apr 25 00:00:09.901832 ignition[1408]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 25 00:00:09.901832 ignition[1408]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 25 00:00:09.901832 ignition[1408]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 25 00:00:09.908115 ignition[1408]: INFO : PUT result: OK
Apr 25 00:00:09.909512 ignition[1408]: INFO : umount: umount passed
Apr 25 00:00:09.911353 ignition[1408]: INFO : Ignition finished successfully
Apr 25 00:00:09.913925 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 25 00:00:09.914117 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 25 00:00:09.919123 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 25 00:00:09.919512 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 25 00:00:09.921912 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 25 00:00:09.921992 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 25 00:00:09.922565 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 25 00:00:09.922620 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 25 00:00:09.925168 systemd[1]: Stopped target network.target - Network.
Apr 25 00:00:09.925889 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 25 00:00:09.925999 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 25 00:00:09.926498 systemd[1]: Stopped target paths.target - Path Units.
Apr 25 00:00:09.928942 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 25 00:00:09.934295 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 25 00:00:09.936889 systemd[1]: Stopped target slices.target - Slice Units.
Apr 25 00:00:09.938156 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 25 00:00:09.939803 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 25 00:00:09.941941 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 25 00:00:09.942832 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 25 00:00:09.942899 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 25 00:00:09.943817 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 25 00:00:09.944278 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 25 00:00:09.944910 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 25 00:00:09.944996 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 25 00:00:09.945991 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 25 00:00:09.947566 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 25 00:00:09.951511 systemd-networkd[1170]: eth0: DHCPv6 lease lost
Apr 25 00:00:09.953761 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 25 00:00:09.958582 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 25 00:00:09.958731 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 25 00:00:09.962807 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 25 00:00:09.963280 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 25 00:00:09.965584 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 25 00:00:09.965676 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 25 00:00:09.971221 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 25 00:00:09.971964 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 25 00:00:09.972088 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 25 00:00:09.973327 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 25 00:00:09.973409 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 25 00:00:09.974261 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 25 00:00:09.974331 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 25 00:00:09.975527 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 25 00:00:09.975597 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 25 00:00:09.977104 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 25 00:00:09.993429 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 25 00:00:09.993649 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 25 00:00:09.998416 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 25 00:00:09.998516 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 25 00:00:10.000898 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 25 00:00:10.001906 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 25 00:00:10.004735 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 25 00:00:10.005095 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 25 00:00:10.006282 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 25 00:00:10.006355 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 25 00:00:10.008613 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 25 00:00:10.008692 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 25 00:00:10.019557 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 25 00:00:10.022061 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 25 00:00:10.022941 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 25 00:00:10.023653 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 25 00:00:10.023740 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 25 00:00:10.026175 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 25 00:00:10.026253 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 25 00:00:10.026848 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 25 00:00:10.026911 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 25 00:00:10.031577 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 25 00:00:10.031740 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 25 00:00:10.041328 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 25 00:00:10.041453 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 25 00:00:10.122456 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 25 00:00:10.122615 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 25 00:00:10.127916 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 25 00:00:10.136357 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 25 00:00:10.136476 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 25 00:00:10.148402 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 25 00:00:10.191244 systemd[1]: Switching root.
Apr 25 00:00:10.256521 systemd-journald[179]: Journal stopped
Apr 25 00:00:11.977309 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Apr 25 00:00:11.977431 kernel: SELinux: policy capability network_peer_controls=1
Apr 25 00:00:11.977465 kernel: SELinux: policy capability open_perms=1
Apr 25 00:00:11.977489 kernel: SELinux: policy capability extended_socket_class=1
Apr 25 00:00:11.977513 kernel: SELinux: policy capability always_check_network=0
Apr 25 00:00:11.977535 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 25 00:00:11.977561 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 25 00:00:11.977583 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 25 00:00:11.977606 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 25 00:00:11.977636 kernel: audit: type=1403 audit(1777075210.601:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 25 00:00:11.977674 systemd[1]: Successfully loaded SELinux policy in 75.912ms.
Apr 25 00:00:11.977711 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.713ms.
Apr 25 00:00:11.977737 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 25 00:00:11.977761 systemd[1]: Detected virtualization amazon.
Apr 25 00:00:11.977783 systemd[1]: Detected architecture x86-64.
Apr 25 00:00:11.977804 systemd[1]: Detected first boot.
Apr 25 00:00:11.977829 systemd[1]: Initializing machine ID from VM UUID.
Apr 25 00:00:11.977854 zram_generator::config[1451]: No configuration found.
Apr 25 00:00:11.977880 systemd[1]: Populated /etc with preset unit settings.
Apr 25 00:00:11.977907 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 25 00:00:11.977933 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 25 00:00:11.977959 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 25 00:00:11.977984 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 25 00:00:11.983128 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 25 00:00:11.983176 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 25 00:00:11.983196 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 25 00:00:11.983215 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 25 00:00:11.983245 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 25 00:00:11.983271 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 25 00:00:11.983291 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 25 00:00:11.983318 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 25 00:00:11.983341 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 25 00:00:11.983360 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 25 00:00:11.983380 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 25 00:00:11.983402 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 25 00:00:11.983424 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 25 00:00:11.983445 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 25 00:00:11.983476 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 25 00:00:11.983497 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 25 00:00:11.983518 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 25 00:00:11.983539 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 25 00:00:11.983560 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 25 00:00:11.983582 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 25 00:00:11.983604 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 25 00:00:11.983628 systemd[1]: Reached target slices.target - Slice Units.
Apr 25 00:00:11.983649 systemd[1]: Reached target swap.target - Swaps.
Apr 25 00:00:11.983669 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 25 00:00:11.983690 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 25 00:00:11.983711 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 25 00:00:11.983735 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 25 00:00:11.983756 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 25 00:00:11.983776 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 25 00:00:11.983798 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 25 00:00:11.983819 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 25 00:00:11.983844 systemd[1]: Mounting media.mount - External Media Directory...
Apr 25 00:00:11.983881 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 25 00:00:11.983903 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 25 00:00:11.983924 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 25 00:00:11.983945 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 25 00:00:11.983966 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 25 00:00:11.983988 systemd[1]: Reached target machines.target - Containers.
Apr 25 00:00:11.987127 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 25 00:00:11.987177 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 25 00:00:11.987198 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 25 00:00:11.987218 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 25 00:00:11.987239 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 25 00:00:11.987258 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 25 00:00:11.987278 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 25 00:00:11.987296 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 25 00:00:11.987313 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 25 00:00:11.987340 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 25 00:00:11.987361 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 25 00:00:11.987382 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 25 00:00:11.987404 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 25 00:00:11.987426 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 25 00:00:11.987446 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 25 00:00:11.987466 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 25 00:00:11.987488 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 25 00:00:11.987509 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 25 00:00:11.987532 kernel: fuse: init (API version 7.39)
Apr 25 00:00:11.987553 kernel: loop: module loaded
Apr 25 00:00:11.987574 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 25 00:00:11.987596 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 25 00:00:11.987616 systemd[1]: Stopped verity-setup.service.
Apr 25 00:00:11.987638 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 25 00:00:11.987660 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 25 00:00:11.987680 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 25 00:00:11.987701 systemd[1]: Mounted media.mount - External Media Directory.
Apr 25 00:00:11.987726 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 25 00:00:11.987747 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 25 00:00:11.987769 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 25 00:00:11.987789 kernel: ACPI: bus type drm_connector registered
Apr 25 00:00:11.987809 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 25 00:00:11.987835 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 25 00:00:11.987856 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 25 00:00:11.987890 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 25 00:00:11.987912 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 25 00:00:11.989267 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 25 00:00:11.989298 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 25 00:00:11.989318 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 25 00:00:11.989338 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 25 00:00:11.989358 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 25 00:00:11.989377 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 25 00:00:11.989409 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 25 00:00:11.989431 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 25 00:00:11.989454 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 25 00:00:11.989476 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 25 00:00:11.989505 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 25 00:00:11.989528 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 25 00:00:11.989550 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 25 00:00:11.989615 systemd-journald[1533]: Collecting audit messages is disabled.
Apr 25 00:00:11.989659 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 25 00:00:11.989683 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 25 00:00:11.989707 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 25 00:00:11.989735 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 25 00:00:11.989759 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 25 00:00:11.989778 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 25 00:00:11.989796 systemd-journald[1533]: Journal started
Apr 25 00:00:11.989837 systemd-journald[1533]: Runtime Journal (/run/log/journal/ec292fdec79edb9546a2d1ca3ce846e7) is 4.7M, max 38.2M, 33.4M free.
Apr 25 00:00:11.461765 systemd[1]: Queued start job for default target multi-user.target.
Apr 25 00:00:11.995827 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 25 00:00:11.490485 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Apr 25 00:00:11.490953 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 25 00:00:12.021321 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 25 00:00:12.021396 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 25 00:00:12.025872 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 25 00:00:12.040386 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 25 00:00:12.045210 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 25 00:00:12.046200 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 25 00:00:12.052254 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 25 00:00:12.056244 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 25 00:00:12.058222 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 25 00:00:12.060649 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 25 00:00:12.066309 systemd-tmpfiles[1557]: ACLs are not supported, ignoring.
Apr 25 00:00:12.066341 systemd-tmpfiles[1557]: ACLs are not supported, ignoring.
Apr 25 00:00:12.069319 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 25 00:00:12.081295 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 25 00:00:12.084851 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 25 00:00:12.085967 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 25 00:00:12.088374 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 25 00:00:12.103160 systemd-journald[1533]: Time spent on flushing to /var/log/journal/ec292fdec79edb9546a2d1ca3ce846e7 is 124.559ms for 990 entries.
Apr 25 00:00:12.103160 systemd-journald[1533]: System Journal (/var/log/journal/ec292fdec79edb9546a2d1ca3ce846e7) is 8.0M, max 195.6M, 187.6M free.
Apr 25 00:00:12.249357 systemd-journald[1533]: Received client request to flush runtime journal.
Apr 25 00:00:12.249455 kernel: loop0: detected capacity change from 0 to 140768
Apr 25 00:00:12.251001 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 25 00:00:12.103234 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 25 00:00:12.109259 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 25 00:00:12.141885 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 25 00:00:12.144410 udevadm[1589]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 25 00:00:12.145736 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 25 00:00:12.153227 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 25 00:00:12.202018 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 25 00:00:12.254091 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 25 00:00:12.254946 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 25 00:00:12.256617 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 25 00:00:12.263220 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 25 00:00:12.273298 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 25 00:00:12.284120 kernel: loop1: detected capacity change from 0 to 61336
Apr 25 00:00:12.326568 systemd-tmpfiles[1602]: ACLs are not supported, ignoring.
Apr 25 00:00:12.326597 systemd-tmpfiles[1602]: ACLs are not supported, ignoring.
Apr 25 00:00:12.336101 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 25 00:00:12.340050 kernel: loop2: detected capacity change from 0 to 228704
Apr 25 00:00:12.459677 kernel: loop3: detected capacity change from 0 to 142488
Apr 25 00:00:12.592844 kernel: loop4: detected capacity change from 0 to 140768
Apr 25 00:00:12.630044 kernel: loop5: detected capacity change from 0 to 61336
Apr 25 00:00:12.649109 kernel: loop6: detected capacity change from 0 to 228704
Apr 25 00:00:12.700044 kernel: loop7: detected capacity change from 0 to 142488
Apr 25 00:00:12.730956 (sd-merge)[1610]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Apr 25 00:00:12.732357 (sd-merge)[1610]: Merged extensions into '/usr'.
Apr 25 00:00:12.741588 systemd[1]: Reloading requested from client PID 1584 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 25 00:00:12.741768 systemd[1]: Reloading...
Apr 25 00:00:12.920046 zram_generator::config[1636]: No configuration found.
Apr 25 00:00:13.152706 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 25 00:00:13.239996 systemd[1]: Reloading finished in 497 ms.
Apr 25 00:00:13.266442 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 25 00:00:13.271308 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 25 00:00:13.280338 systemd[1]: Starting ensure-sysext.service...
Apr 25 00:00:13.283460 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 25 00:00:13.304505 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 25 00:00:13.323232 systemd[1]: Reloading requested from client PID 1688 ('systemctl') (unit ensure-sysext.service)...
Apr 25 00:00:13.323263 systemd[1]: Reloading...
Apr 25 00:00:13.323688 systemd-tmpfiles[1689]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 25 00:00:13.324272 systemd-tmpfiles[1689]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 25 00:00:13.325669 systemd-tmpfiles[1689]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 25 00:00:13.326144 systemd-tmpfiles[1689]: ACLs are not supported, ignoring.
Apr 25 00:00:13.326249 systemd-tmpfiles[1689]: ACLs are not supported, ignoring.
Apr 25 00:00:13.338936 systemd-tmpfiles[1689]: Detected autofs mount point /boot during canonicalization of boot.
Apr 25 00:00:13.341055 systemd-tmpfiles[1689]: Skipping /boot
Apr 25 00:00:13.368941 systemd-tmpfiles[1689]: Detected autofs mount point /boot during canonicalization of boot.
Apr 25 00:00:13.368967 systemd-tmpfiles[1689]: Skipping /boot
Apr 25 00:00:13.401251 systemd-udevd[1690]: Using default interface naming scheme 'v255'.
Apr 25 00:00:13.454844 ldconfig[1579]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 25 00:00:13.477032 zram_generator::config[1718]: No configuration found.
Apr 25 00:00:13.610297 (udev-worker)[1742]: Network interface NamePolicy= disabled on kernel command line.
Apr 25 00:00:13.723876 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 25 00:00:13.743942 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 25 00:00:13.744408 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Apr 25 00:00:13.779042 kernel: ACPI: button: Power Button [PWRF]
Apr 25 00:00:13.779119 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Apr 25 00:00:13.790039 kernel: ACPI: button: Sleep Button [SLPF]
Apr 25 00:00:13.807054 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5
Apr 25 00:00:13.853033 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (1756)
Apr 25 00:00:13.914478 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 25 00:00:13.916495 systemd[1]: Reloading finished in 592 ms.
Apr 25 00:00:13.942595 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 25 00:00:13.945573 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 25 00:00:13.947633 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 25 00:00:14.008043 kernel: mousedev: PS/2 mouse device common for all mice
Apr 25 00:00:14.057384 systemd[1]: Finished ensure-sysext.service.
Apr 25 00:00:14.066103 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 25 00:00:14.074343 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 25 00:00:14.081841 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 25 00:00:14.085054 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 25 00:00:14.090251 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 25 00:00:14.092373 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 25 00:00:14.098846 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 25 00:00:14.102838 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 25 00:00:14.103730 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 25 00:00:14.110452 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 25 00:00:14.116290 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 25 00:00:14.128271 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 25 00:00:14.128943 systemd[1]: Reached target time-set.target - System Time Set.
Apr 25 00:00:14.133073 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 25 00:00:14.141447 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 25 00:00:14.142249 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 25 00:00:14.143733 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 25 00:00:14.145625 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 25 00:00:14.145854 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 25 00:00:14.147497 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 25 00:00:14.147707 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 25 00:00:14.170427 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 25 00:00:14.171084 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 25 00:00:14.186302 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 25 00:00:14.192254 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 25 00:00:14.195348 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 25 00:00:14.196855 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 25 00:00:14.204256 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 25 00:00:14.205830 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 25 00:00:14.206428 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 25 00:00:14.219238 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 25 00:00:14.228266 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 25 00:00:14.269761 lvm[1905]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 25 00:00:14.272035 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 25 00:00:14.278620 augenrules[1918]: No rules
Apr 25 00:00:14.282493 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 25 00:00:14.307078 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 25 00:00:14.309229 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 25 00:00:14.320305 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 25 00:00:14.325002 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 25 00:00:14.336576 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 25 00:00:14.346433 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 25 00:00:14.352079 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 25 00:00:14.355163 lvm[1926]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 25 00:00:14.353784 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 25 00:00:14.394224 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 25 00:00:14.397173 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 25 00:00:14.404503 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 25 00:00:14.462354 systemd-networkd[1895]: lo: Link UP
Apr 25 00:00:14.462729 systemd-networkd[1895]: lo: Gained carrier
Apr 25 00:00:14.464570 systemd-networkd[1895]: Enumeration completed
Apr 25 00:00:14.465170 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 25 00:00:14.466079 systemd-networkd[1895]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 25 00:00:14.467160 systemd-networkd[1895]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 25 00:00:14.472451 systemd-networkd[1895]: eth0: Link UP
Apr 25 00:00:14.472681 systemd-networkd[1895]: eth0: Gained carrier
Apr 25 00:00:14.472712 systemd-networkd[1895]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 25 00:00:14.473615 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 25 00:00:14.482837 systemd-resolved[1896]: Positive Trust Anchors:
Apr 25 00:00:14.482866 systemd-resolved[1896]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 25 00:00:14.482917 systemd-resolved[1896]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 25 00:00:14.487112 systemd-networkd[1895]: eth0: DHCPv4 address 172.31.27.158/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 25 00:00:14.489671 systemd-resolved[1896]: Defaulting to hostname 'linux'.
Apr 25 00:00:14.492551 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 25 00:00:14.493324 systemd[1]: Reached target network.target - Network.
Apr 25 00:00:14.493903 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 25 00:00:14.494563 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 25 00:00:14.495160 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 25 00:00:14.495617 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 25 00:00:14.496292 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 25 00:00:14.496792 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 25 00:00:14.497260 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 25 00:00:14.497652 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 25 00:00:14.497701 systemd[1]: Reached target paths.target - Path Units.
Apr 25 00:00:14.498125 systemd[1]: Reached target timers.target - Timer Units.
Apr 25 00:00:14.499435 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 25 00:00:14.501454 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 25 00:00:14.507363 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 25 00:00:14.508722 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 25 00:00:14.509433 systemd[1]: Reached target sockets.target - Socket Units.
Apr 25 00:00:14.509908 systemd[1]: Reached target basic.target - Basic System.
Apr 25 00:00:14.510406 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 25 00:00:14.510495 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 25 00:00:14.511758 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 25 00:00:14.517239 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 25 00:00:14.521181 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 25 00:00:14.527950 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 25 00:00:14.531884 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 25 00:00:14.534096 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 25 00:00:14.544411 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 25 00:00:14.557265 systemd[1]: Started ntpd.service - Network Time Service.
Apr 25 00:00:14.582037 jq[1948]: false
Apr 25 00:00:14.594151 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 25 00:00:14.606604 systemd[1]: Starting setup-oem.service - Setup OEM...
Apr 25 00:00:14.616755 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 25 00:00:14.618058 dbus-daemon[1947]: [system] SELinux support is enabled
Apr 25 00:00:14.624425 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 25 00:00:14.637271 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 25 00:00:14.636267 dbus-daemon[1947]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1895 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Apr 25 00:00:14.638713 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 25 00:00:14.639680 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 25 00:00:14.645160 extend-filesystems[1949]: Found loop4
Apr 25 00:00:14.645160 extend-filesystems[1949]: Found loop5
Apr 25 00:00:14.645160 extend-filesystems[1949]: Found loop6
Apr 25 00:00:14.645160 extend-filesystems[1949]: Found loop7
Apr 25 00:00:14.645160 extend-filesystems[1949]: Found nvme0n1
Apr 25 00:00:14.645160 extend-filesystems[1949]: Found nvme0n1p1
Apr 25 00:00:14.645160 extend-filesystems[1949]: Found nvme0n1p2
Apr 25 00:00:14.645160 extend-filesystems[1949]: Found nvme0n1p3
Apr 25 00:00:14.645160 extend-filesystems[1949]: Found usr
Apr 25 00:00:14.645160 extend-filesystems[1949]: Found nvme0n1p4
Apr 25 00:00:14.645160 extend-filesystems[1949]: Found nvme0n1p6
Apr 25 00:00:14.645160 extend-filesystems[1949]: Found nvme0n1p7
Apr 25 00:00:14.645160 extend-filesystems[1949]: Found nvme0n1p9
Apr 25 00:00:14.761642 extend-filesystems[1949]: Checking size of /dev/nvme0n1p9
Apr 25 00:00:14.761642 extend-filesystems[1949]: Resized partition /dev/nvme0n1p9
Apr 25 00:00:14.645520 systemd[1]: Starting update-engine.service - Update Engine...
Apr 25 00:00:14.767374 coreos-metadata[1946]: Apr 25 00:00:14.725 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 25 00:00:14.767374 coreos-metadata[1946]: Apr 25 00:00:14.727 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Apr 25 00:00:14.767374 coreos-metadata[1946]: Apr 25 00:00:14.728 INFO Fetch successful
Apr 25 00:00:14.767374 coreos-metadata[1946]: Apr 25 00:00:14.728 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Apr 25 00:00:14.767374 coreos-metadata[1946]: Apr 25 00:00:14.733 INFO Fetch successful
Apr 25 00:00:14.767374 coreos-metadata[1946]: Apr 25 00:00:14.733 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Apr 25 00:00:14.767374 coreos-metadata[1946]: Apr 25 00:00:14.738 INFO Fetch successful
Apr 25 00:00:14.767374 coreos-metadata[1946]: Apr 25 00:00:14.738 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Apr 25 00:00:14.767374 coreos-metadata[1946]: Apr 25 00:00:14.739 INFO Fetch successful
Apr 25 00:00:14.767374 coreos-metadata[1946]: Apr 25 00:00:14.739 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Apr 25 00:00:14.767374 coreos-metadata[1946]: Apr 25 00:00:14.744 INFO Fetch failed with 404: resource not found
Apr 25 00:00:14.767374 coreos-metadata[1946]: Apr 25 00:00:14.744 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Apr 25 00:00:14.767374 coreos-metadata[1946]: Apr 25 00:00:14.747 INFO Fetch successful
Apr 25 00:00:14.767374 coreos-metadata[1946]: Apr 25 00:00:14.747 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Apr 25 00:00:14.767374 coreos-metadata[1946]: Apr 25 00:00:14.749 INFO Fetch successful
Apr 25 00:00:14.767374 coreos-metadata[1946]: Apr 25 00:00:14.749 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Apr 25 00:00:14.767374 coreos-metadata[1946]: Apr 25 00:00:14.767 INFO Fetch successful
Apr 25 00:00:14.767374 coreos-metadata[1946]: Apr 25 00:00:14.767 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Apr 25 00:00:14.695231 ntpd[1951]: ntpd 4.2.8p17@1.4004-o Fri Apr 24 21:46:02 UTC 2026 (1): Starting
Apr 25 00:00:14.776727 ntpd[1951]: 25 Apr 00:00:14 ntpd[1951]: ntpd 4.2.8p17@1.4004-o Fri Apr 24 21:46:02 UTC 2026 (1): Starting
Apr 25 00:00:14.776727 ntpd[1951]: 25 Apr 00:00:14 ntpd[1951]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 25 00:00:14.776727 ntpd[1951]: 25 Apr 00:00:14 ntpd[1951]: ----------------------------------------------------
Apr 25 00:00:14.776727 ntpd[1951]: 25 Apr 00:00:14 ntpd[1951]: ntp-4 is maintained by Network Time Foundation,
Apr 25 00:00:14.776727 ntpd[1951]: 25 Apr 00:00:14 ntpd[1951]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 25 00:00:14.776727 ntpd[1951]: 25 Apr 00:00:14 ntpd[1951]: corporation. Support and training for ntp-4 are
Apr 25 00:00:14.776727 ntpd[1951]: 25 Apr 00:00:14 ntpd[1951]: available at https://www.nwtime.org/support
Apr 25 00:00:14.776727 ntpd[1951]: 25 Apr 00:00:14 ntpd[1951]: ----------------------------------------------------
Apr 25 00:00:14.776727 ntpd[1951]: 25 Apr 00:00:14 ntpd[1951]: proto: precision = 0.077 usec (-24)
Apr 25 00:00:14.776727 ntpd[1951]: 25 Apr 00:00:14 ntpd[1951]: basedate set to 2026-04-12
Apr 25 00:00:14.776727 ntpd[1951]: 25 Apr 00:00:14 ntpd[1951]: gps base set to 2026-04-12 (week 2414)
Apr 25 00:00:14.776727 ntpd[1951]: 25 Apr 00:00:14 ntpd[1951]: Listen and drop on 0 v6wildcard [::]:123
Apr 25 00:00:14.776727 ntpd[1951]: 25 Apr 00:00:14 ntpd[1951]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 25 00:00:14.776727 ntpd[1951]: 25 Apr 00:00:14 ntpd[1951]: Listen normally on 2 lo 127.0.0.1:123
Apr 25 00:00:14.776727 ntpd[1951]: 25 Apr 00:00:14 ntpd[1951]: Listen normally on 3 eth0 172.31.27.158:123
Apr 25 00:00:14.776727 ntpd[1951]: 25 Apr 00:00:14 ntpd[1951]: Listen normally on 4 lo [::1]:123
Apr 25 00:00:14.776727 ntpd[1951]: 25 Apr 00:00:14 ntpd[1951]: bind(21) AF_INET6 fe80::4e6:e6ff:fe52:7d83%2#123 flags 0x11 failed: Cannot assign requested address
Apr 25 00:00:14.776727 ntpd[1951]: 25 Apr 00:00:14 ntpd[1951]: unable to create socket on eth0 (5) for fe80::4e6:e6ff:fe52:7d83%2#123
Apr 25 00:00:14.776727 ntpd[1951]: 25 Apr 00:00:14 ntpd[1951]: failed to init interface for address fe80::4e6:e6ff:fe52:7d83%2
Apr 25 00:00:14.776727 ntpd[1951]: 25 Apr 00:00:14 ntpd[1951]: Listening on routing socket on fd #21 for interface updates
Apr 25 00:00:14.776727 ntpd[1951]: 25 Apr 00:00:14 ntpd[1951]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 25 00:00:14.776727 ntpd[1951]: 25 Apr 00:00:14 ntpd[1951]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 25 00:00:14.799945 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Apr 25 00:00:14.800044 extend-filesystems[1988]: resize2fs 1.47.1 (20-May-2024)
Apr 25 00:00:14.650818 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 25 00:00:14.811583 coreos-metadata[1946]: Apr 25 00:00:14.768 INFO Fetch successful
Apr 25 00:00:14.811583 coreos-metadata[1946]: Apr 25 00:00:14.768 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Apr 25 00:00:14.811583 coreos-metadata[1946]: Apr 25 00:00:14.772 INFO Fetch successful
Apr 25 00:00:14.695256 ntpd[1951]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 25 00:00:14.661949 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 25 00:00:14.695266 ntpd[1951]: ----------------------------------------------------
Apr 25 00:00:14.677562 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 25 00:00:14.695276 ntpd[1951]: ntp-4 is maintained by Network Time Foundation,
Apr 25 00:00:14.824324 tar[1973]: linux-amd64/LICENSE
Apr 25 00:00:14.824324 tar[1973]: linux-amd64/helm
Apr 25 00:00:14.678202 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 25 00:00:14.695284 ntpd[1951]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 25 00:00:14.694627 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 25 00:00:14.695294 ntpd[1951]: corporation. Support and training for ntp-4 are
Apr 25 00:00:14.694675 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 25 00:00:14.695303 ntpd[1951]: available at https://www.nwtime.org/support
Apr 25 00:00:14.696394 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 25 00:00:14.695315 ntpd[1951]: ----------------------------------------------------
Apr 25 00:00:14.840259 jq[1969]: true
Apr 25 00:00:14.696426 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 25 00:00:14.698083 dbus-daemon[1947]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 25 00:00:14.721398 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Apr 25 00:00:14.702253 ntpd[1951]: proto: precision = 0.077 usec (-24)
Apr 25 00:00:14.730666 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 25 00:00:14.703062 ntpd[1951]: basedate set to 2026-04-12
Apr 25 00:00:14.730940 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 25 00:00:14.703082 ntpd[1951]: gps base set to 2026-04-12 (week 2414)
Apr 25 00:00:14.736306 systemd[1]: motdgen.service: Deactivated successfully.
Apr 25 00:00:14.709396 ntpd[1951]: Listen and drop on 0 v6wildcard [::]:123
Apr 25 00:00:14.738214 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 25 00:00:14.712117 ntpd[1951]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 25 00:00:14.835828 (ntainerd)[1990]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 25 00:00:14.712353 ntpd[1951]: Listen normally on 2 lo 127.0.0.1:123
Apr 25 00:00:14.862367 update_engine[1967]: I20260425 00:00:14.859635  1967 main.cc:92] Flatcar Update Engine starting
Apr 25 00:00:14.712397 ntpd[1951]: Listen normally on 3 eth0 172.31.27.158:123
Apr 25 00:00:14.712443 ntpd[1951]: Listen normally on 4 lo [::1]:123
Apr 25 00:00:14.867560 systemd[1]: Started update-engine.service - Update Engine.
Apr 25 00:00:14.712495 ntpd[1951]: bind(21) AF_INET6 fe80::4e6:e6ff:fe52:7d83%2#123 flags 0x11 failed: Cannot assign requested address
Apr 25 00:00:14.712517 ntpd[1951]: unable to create socket on eth0 (5) for fe80::4e6:e6ff:fe52:7d83%2#123
Apr 25 00:00:14.712533 ntpd[1951]: failed to init interface for address fe80::4e6:e6ff:fe52:7d83%2
Apr 25 00:00:14.891289 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Apr 25 00:00:14.891346 jq[1994]: true
Apr 25 00:00:14.882618 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 25 00:00:14.891715 update_engine[1967]: I20260425 00:00:14.876600  1967 update_check_scheduler.cc:74] Next update check in 11m10s
Apr 25 00:00:14.712565 ntpd[1951]: Listening on routing socket on fd #21 for interface updates
Apr 25 00:00:14.720512 ntpd[1951]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 25 00:00:14.720556 ntpd[1951]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 25 00:00:14.905112 extend-filesystems[1988]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Apr 25 00:00:14.905112 extend-filesystems[1988]: old_desc_blocks = 1, new_desc_blocks = 2
Apr 25 00:00:14.905112 extend-filesystems[1988]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Apr 25 00:00:14.894617 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 25 00:00:14.913696 extend-filesystems[1949]: Resized filesystem in /dev/nvme0n1p9
Apr 25 00:00:14.894864 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 25 00:00:14.923050 systemd[1]: Finished setup-oem.service - Setup OEM.
Apr 25 00:00:14.935932 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 25 00:00:14.939675 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 25 00:00:15.024757 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (1732)
Apr 25 00:00:15.124193 systemd-logind[1965]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 25 00:00:15.124740 systemd-logind[1965]: Watching system buttons on /dev/input/event2 (Sleep Button)
Apr 25 00:00:15.124913 systemd-logind[1965]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 25 00:00:15.126545 systemd-logind[1965]: New seat seat0.
Apr 25 00:00:15.128540 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 25 00:00:15.135547 bash[2040]: Updated "/home/core/.ssh/authorized_keys"
Apr 25 00:00:15.141895 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 25 00:00:15.151488 systemd[1]: Starting sshkeys.service...
Apr 25 00:00:15.151901 dbus-daemon[1947]: [system] Successfully activated service 'org.freedesktop.hostname1'
Apr 25 00:00:15.154375 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Apr 25 00:00:15.153673 dbus-daemon[1947]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1983 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Apr 25 00:00:15.168507 systemd[1]: Starting polkit.service - Authorization Manager...
Apr 25 00:00:15.203202 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 25 00:00:15.214366 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 25 00:00:15.281401 polkitd[2054]: Started polkitd version 121
Apr 25 00:00:15.311378 polkitd[2054]: Loading rules from directory /etc/polkit-1/rules.d
Apr 25 00:00:15.319296 polkitd[2054]: Loading rules from directory /usr/share/polkit-1/rules.d
Apr 25 00:00:15.322694 polkitd[2054]: Finished loading, compiling and executing 2 rules
Apr 25 00:00:15.328312 dbus-daemon[1947]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Apr 25 00:00:15.328531 systemd[1]: Started polkit.service - Authorization Manager.
Apr 25 00:00:15.330549 polkitd[2054]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Apr 25 00:00:15.362434 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 25 00:00:15.384221 coreos-metadata[2060]: Apr 25 00:00:15.373 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 25 00:00:15.384221 coreos-metadata[2060]: Apr 25 00:00:15.379 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Apr 25 00:00:15.384614 coreos-metadata[2060]: Apr 25 00:00:15.384 INFO Fetch successful
Apr 25 00:00:15.384614 coreos-metadata[2060]: Apr 25 00:00:15.384 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Apr 25 00:00:15.386598 coreos-metadata[2060]: Apr 25 00:00:15.385 INFO Fetch successful
Apr 25 00:00:15.387049 unknown[2060]: wrote ssh authorized keys file for user: core
Apr 25 00:00:15.410471 systemd-resolved[1896]: System hostname changed to 'ip-172-31-27-158'.
Apr 25 00:00:15.411651 systemd-hostnamed[1983]: Hostname set to (transient)
Apr 25 00:00:15.453188 locksmithd[2007]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 25 00:00:15.460100 update-ssh-keys[2110]: Updated "/home/core/.ssh/authorized_keys"
Apr 25 00:00:15.461226 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 25 00:00:15.466345 systemd[1]: Finished sshkeys.service.
Apr 25 00:00:15.638205 containerd[1990]: time="2026-04-25T00:00:15.638001162Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 25 00:00:15.696203 ntpd[1951]: bind(24) AF_INET6 fe80::4e6:e6ff:fe52:7d83%2#123 flags 0x11 failed: Cannot assign requested address
Apr 25 00:00:15.696254 ntpd[1951]: unable to create socket on eth0 (6) for fe80::4e6:e6ff:fe52:7d83%2#123
Apr 25 00:00:15.696627 ntpd[1951]: 25 Apr 00:00:15 ntpd[1951]: bind(24) AF_INET6 fe80::4e6:e6ff:fe52:7d83%2#123 flags 0x11 failed: Cannot assign requested address
Apr 25 00:00:15.696627 ntpd[1951]: 25 Apr 00:00:15 ntpd[1951]: unable to create socket on eth0 (6) for fe80::4e6:e6ff:fe52:7d83%2#123
Apr 25 00:00:15.696627 ntpd[1951]: 25 Apr 00:00:15 ntpd[1951]: failed to init interface for address fe80::4e6:e6ff:fe52:7d83%2
Apr 25 00:00:15.696268 ntpd[1951]: failed to init interface for address fe80::4e6:e6ff:fe52:7d83%2
Apr 25 00:00:15.732354 containerd[1990]: time="2026-04-25T00:00:15.732074933Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 25 00:00:15.739070 containerd[1990]: time="2026-04-25T00:00:15.738984452Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 25 00:00:15.739070 containerd[1990]: time="2026-04-25T00:00:15.739067668Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 25 00:00:15.739216 containerd[1990]: time="2026-04-25T00:00:15.739090663Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 25 00:00:15.739299 containerd[1990]: time="2026-04-25T00:00:15.739278863Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 25 00:00:15.739341 containerd[1990]: time="2026-04-25T00:00:15.739308233Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 25 00:00:15.739403 containerd[1990]: time="2026-04-25T00:00:15.739383518Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 25 00:00:15.739442 containerd[1990]: time="2026-04-25T00:00:15.739407873Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 25 00:00:15.739682 containerd[1990]: time="2026-04-25T00:00:15.739653575Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 25 00:00:15.739742 containerd[1990]: time="2026-04-25T00:00:15.739682728Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 25 00:00:15.739742 containerd[1990]: time="2026-04-25T00:00:15.739702665Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 25 00:00:15.739742 containerd[1990]: time="2026-04-25T00:00:15.739717778Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 25 00:00:15.739859 containerd[1990]: time="2026-04-25T00:00:15.739832167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 25 00:00:15.743065 containerd[1990]: time="2026-04-25T00:00:15.742804948Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 25 00:00:15.745908 containerd[1990]: time="2026-04-25T00:00:15.743800043Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 25 00:00:15.745908 containerd[1990]: time="2026-04-25T00:00:15.743943843Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 25 00:00:15.745908 containerd[1990]: time="2026-04-25T00:00:15.744660040Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 25 00:00:15.745908 containerd[1990]: time="2026-04-25T00:00:15.745046584Z" level=info msg="metadata content store policy set" policy=shared
Apr 25 00:00:15.749424 containerd[1990]: time="2026-04-25T00:00:15.749350103Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 25 00:00:15.749581 containerd[1990]: time="2026-04-25T00:00:15.749561141Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 25 00:00:15.749715 containerd[1990]: time="2026-04-25T00:00:15.749698243Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 25 00:00:15.749814 containerd[1990]: time="2026-04-25T00:00:15.749799128Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 25 00:00:15.749909 containerd[1990]: time="2026-04-25T00:00:15.749894093Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 25 00:00:15.750231 containerd[1990]: time="2026-04-25T00:00:15.750209504Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 25 00:00:15.751399 containerd[1990]: time="2026-04-25T00:00:15.751365275Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 25 00:00:15.753880 containerd[1990]: time="2026-04-25T00:00:15.753224240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 25 00:00:15.753880 containerd[1990]: time="2026-04-25T00:00:15.753254708Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 25 00:00:15.753880 containerd[1990]: time="2026-04-25T00:00:15.753274721Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 25 00:00:15.753880 containerd[1990]: time="2026-04-25T00:00:15.753297743Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 25 00:00:15.753880 containerd[1990]: time="2026-04-25T00:00:15.753318668Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 25 00:00:15.753880 containerd[1990]: time="2026-04-25T00:00:15.753337170Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 25 00:00:15.753880 containerd[1990]: time="2026-04-25T00:00:15.753358006Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 25 00:00:15.753880 containerd[1990]: time="2026-04-25T00:00:15.753381038Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..."
type=io.containerd.service.v1 Apr 25 00:00:15.753880 containerd[1990]: time="2026-04-25T00:00:15.753401045Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 25 00:00:15.753880 containerd[1990]: time="2026-04-25T00:00:15.753422192Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 25 00:00:15.753880 containerd[1990]: time="2026-04-25T00:00:15.753442862Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 25 00:00:15.753880 containerd[1990]: time="2026-04-25T00:00:15.753488747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 25 00:00:15.753880 containerd[1990]: time="2026-04-25T00:00:15.753514592Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 25 00:00:15.753880 containerd[1990]: time="2026-04-25T00:00:15.753538087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 25 00:00:15.754444 containerd[1990]: time="2026-04-25T00:00:15.753557995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 25 00:00:15.754444 containerd[1990]: time="2026-04-25T00:00:15.753575865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 25 00:00:15.754444 containerd[1990]: time="2026-04-25T00:00:15.753594443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 25 00:00:15.754444 containerd[1990]: time="2026-04-25T00:00:15.753612339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 25 00:00:15.754444 containerd[1990]: time="2026-04-25T00:00:15.753630987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Apr 25 00:00:15.754444 containerd[1990]: time="2026-04-25T00:00:15.753651989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 25 00:00:15.754444 containerd[1990]: time="2026-04-25T00:00:15.753675066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 25 00:00:15.754444 containerd[1990]: time="2026-04-25T00:00:15.753692838Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 25 00:00:15.754444 containerd[1990]: time="2026-04-25T00:00:15.753710164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 25 00:00:15.754444 containerd[1990]: time="2026-04-25T00:00:15.753728184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 25 00:00:15.754444 containerd[1990]: time="2026-04-25T00:00:15.753752005Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 25 00:00:15.754444 containerd[1990]: time="2026-04-25T00:00:15.753784885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 25 00:00:15.754444 containerd[1990]: time="2026-04-25T00:00:15.753802572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 25 00:00:15.754444 containerd[1990]: time="2026-04-25T00:00:15.753819320Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 25 00:00:15.755338 containerd[1990]: time="2026-04-25T00:00:15.754978941Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 25 00:00:15.755338 containerd[1990]: time="2026-04-25T00:00:15.755140844Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 25 00:00:15.755338 containerd[1990]: time="2026-04-25T00:00:15.755166030Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 25 00:00:15.755338 containerd[1990]: time="2026-04-25T00:00:15.755207545Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 25 00:00:15.755338 containerd[1990]: time="2026-04-25T00:00:15.755223850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 25 00:00:15.755338 containerd[1990]: time="2026-04-25T00:00:15.755263516Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 25 00:00:15.755338 containerd[1990]: time="2026-04-25T00:00:15.755279075Z" level=info msg="NRI interface is disabled by configuration." Apr 25 00:00:15.755338 containerd[1990]: time="2026-04-25T00:00:15.755294317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 25 00:00:15.759290 containerd[1990]: time="2026-04-25T00:00:15.758521987Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 25 00:00:15.759290 containerd[1990]: time="2026-04-25T00:00:15.758623908Z" level=info msg="Connect containerd service" Apr 25 00:00:15.759290 containerd[1990]: time="2026-04-25T00:00:15.758696307Z" level=info msg="using legacy CRI server" Apr 25 00:00:15.759290 containerd[1990]: time="2026-04-25T00:00:15.758707399Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 25 00:00:15.759290 containerd[1990]: time="2026-04-25T00:00:15.758845624Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 25 00:00:15.763403 containerd[1990]: time="2026-04-25T00:00:15.760198529Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 25 00:00:15.763403 containerd[1990]: time="2026-04-25T00:00:15.760346032Z" level=info msg="Start subscribing containerd event" Apr 25 00:00:15.763403 containerd[1990]: time="2026-04-25T00:00:15.760405853Z" level=info msg="Start recovering state" Apr 25 00:00:15.763403 containerd[1990]: time="2026-04-25T00:00:15.760485803Z" level=info msg="Start event monitor" Apr 25 00:00:15.763403 containerd[1990]: time="2026-04-25T00:00:15.760509308Z" level=info msg="Start 
snapshots syncer" Apr 25 00:00:15.763403 containerd[1990]: time="2026-04-25T00:00:15.760521270Z" level=info msg="Start cni network conf syncer for default" Apr 25 00:00:15.763403 containerd[1990]: time="2026-04-25T00:00:15.760532573Z" level=info msg="Start streaming server" Apr 25 00:00:15.763403 containerd[1990]: time="2026-04-25T00:00:15.763291619Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 25 00:00:15.763403 containerd[1990]: time="2026-04-25T00:00:15.763355035Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 25 00:00:15.764064 systemd[1]: Started containerd.service - containerd container runtime. Apr 25 00:00:15.765890 containerd[1990]: time="2026-04-25T00:00:15.765860831Z" level=info msg="containerd successfully booted in 0.131332s" Apr 25 00:00:16.049383 tar[1973]: linux-amd64/README.md Apr 25 00:00:16.065770 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 25 00:00:16.159116 sshd_keygen[1989]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 25 00:00:16.184540 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 25 00:00:16.192031 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 25 00:00:16.197136 systemd[1]: Started sshd@0-172.31.27.158:22-4.175.71.9:44726.service - OpenSSH per-connection server daemon (4.175.71.9:44726). Apr 25 00:00:16.202378 systemd[1]: issuegen.service: Deactivated successfully. Apr 25 00:00:16.202631 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 25 00:00:16.217716 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 25 00:00:16.249315 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 25 00:00:16.262555 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 25 00:00:16.265679 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 25 00:00:16.267250 systemd[1]: Reached target getty.target - Login Prompts. 
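Earlier in this boot, containerd's CRI plugin logged "no network config found in /etc/cni/net.d: cni plugin not initialized". That is expected on a fresh Kubernetes node: pod networking stays down until a CNI add-on drops a config into that directory. As a purely illustrative sketch (file name, network name, and subnet are hypothetical, not taken from this host), a minimal bridge conflist such as `/etc/cni/net.d/10-examplenet.conflist` has this shape:

```json
{
  "cniVersion": "0.4.0",
  "name": "examplenet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
```

containerd's cni conf syncer (started above as "Start cni network conf syncer for default") picks such a file up without a daemon restart.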
Apr 25 00:00:16.281224 systemd-networkd[1895]: eth0: Gained IPv6LL Apr 25 00:00:16.284702 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 25 00:00:16.285805 systemd[1]: Reached target network-online.target - Network is Online. Apr 25 00:00:16.292439 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Apr 25 00:00:16.306319 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 25 00:00:16.315186 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 25 00:00:16.371936 amazon-ssm-agent[2173]: Initializing new seelog logger Apr 25 00:00:16.373178 amazon-ssm-agent[2173]: New Seelog Logger Creation Complete Apr 25 00:00:16.373110 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 25 00:00:16.373508 amazon-ssm-agent[2173]: 2026/04/25 00:00:16 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 25 00:00:16.373572 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 25 00:00:16.374142 amazon-ssm-agent[2173]: 2026/04/25 00:00:16 processing appconfig overrides Apr 25 00:00:16.375177 amazon-ssm-agent[2173]: 2026/04/25 00:00:16 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 25 00:00:16.375347 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 25 00:00:16.375418 amazon-ssm-agent[2173]: 2026/04/25 00:00:16 processing appconfig overrides Apr 25 00:00:16.376617 amazon-ssm-agent[2173]: 2026/04/25 00:00:16 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 25 00:00:16.376617 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Apr 25 00:00:16.376617 amazon-ssm-agent[2173]: 2026/04/25 00:00:16 processing appconfig overrides Apr 25 00:00:16.376617 amazon-ssm-agent[2173]: 2026-04-25 00:00:16 INFO Proxy environment variables: Apr 25 00:00:16.379190 amazon-ssm-agent[2173]: 2026/04/25 00:00:16 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 25 00:00:16.381040 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 25 00:00:16.381040 amazon-ssm-agent[2173]: 2026/04/25 00:00:16 processing appconfig overrides Apr 25 00:00:16.476216 amazon-ssm-agent[2173]: 2026-04-25 00:00:16 INFO https_proxy: Apr 25 00:00:16.574906 amazon-ssm-agent[2173]: 2026-04-25 00:00:16 INFO http_proxy: Apr 25 00:00:16.673227 amazon-ssm-agent[2173]: 2026-04-25 00:00:16 INFO no_proxy: Apr 25 00:00:16.771389 amazon-ssm-agent[2173]: 2026-04-25 00:00:16 INFO Checking if agent identity type OnPrem can be assumed Apr 25 00:00:16.871148 amazon-ssm-agent[2173]: 2026-04-25 00:00:16 INFO Checking if agent identity type EC2 can be assumed Apr 25 00:00:16.971366 amazon-ssm-agent[2173]: 2026-04-25 00:00:16 INFO Agent will take identity from EC2 Apr 25 00:00:17.071068 amazon-ssm-agent[2173]: 2026-04-25 00:00:16 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 25 00:00:17.084895 amazon-ssm-agent[2173]: 2026-04-25 00:00:16 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 25 00:00:17.084895 amazon-ssm-agent[2173]: 2026-04-25 00:00:16 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 25 00:00:17.084895 amazon-ssm-agent[2173]: 2026-04-25 00:00:16 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Apr 25 00:00:17.084895 amazon-ssm-agent[2173]: 2026-04-25 00:00:16 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Apr 25 00:00:17.085157 amazon-ssm-agent[2173]: 2026-04-25 00:00:16 INFO [amazon-ssm-agent] Starting Core Agent Apr 25 00:00:17.085157 amazon-ssm-agent[2173]: 2026-04-25 00:00:16 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Apr 25 00:00:17.085157 amazon-ssm-agent[2173]: 2026-04-25 00:00:16 INFO [Registrar] Starting registrar module Apr 25 00:00:17.085157 amazon-ssm-agent[2173]: 2026-04-25 00:00:16 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Apr 25 00:00:17.085157 amazon-ssm-agent[2173]: 2026-04-25 00:00:17 INFO [EC2Identity] EC2 registration was successful. Apr 25 00:00:17.085157 amazon-ssm-agent[2173]: 2026-04-25 00:00:17 INFO [CredentialRefresher] credentialRefresher has started Apr 25 00:00:17.085157 amazon-ssm-agent[2173]: 2026-04-25 00:00:17 INFO [CredentialRefresher] Starting credentials refresher loop Apr 25 00:00:17.085157 amazon-ssm-agent[2173]: 2026-04-25 00:00:17 INFO EC2RoleProvider Successfully connected with instance profile role credentials Apr 25 00:00:17.169551 amazon-ssm-agent[2173]: 2026-04-25 00:00:17 INFO [CredentialRefresher] Next credential rotation will be in 32.333327805316664 minutes Apr 25 00:00:17.257812 sshd[2163]: Accepted publickey for core from 4.175.71.9 port 44726 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg Apr 25 00:00:17.261826 sshd[2163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:00:17.275362 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 25 00:00:17.282631 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 25 00:00:17.287318 systemd-logind[1965]: New session 1 of user core. Apr 25 00:00:17.304036 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 25 00:00:17.316390 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 25 00:00:17.322091 (systemd)[2194]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 25 00:00:17.476419 systemd[2194]: Queued start job for default target default.target. 
Apr 25 00:00:17.488911 systemd[2194]: Created slice app.slice - User Application Slice. Apr 25 00:00:17.488956 systemd[2194]: Reached target paths.target - Paths. Apr 25 00:00:17.488980 systemd[2194]: Reached target timers.target - Timers. Apr 25 00:00:17.493189 systemd[2194]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 25 00:00:17.508535 systemd[2194]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 25 00:00:17.508690 systemd[2194]: Reached target sockets.target - Sockets. Apr 25 00:00:17.508711 systemd[2194]: Reached target basic.target - Basic System. Apr 25 00:00:17.508767 systemd[2194]: Reached target default.target - Main User Target. Apr 25 00:00:17.508807 systemd[2194]: Startup finished in 177ms. Apr 25 00:00:17.509130 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 25 00:00:17.520264 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 25 00:00:17.700905 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 25 00:00:17.702774 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 25 00:00:17.704001 systemd[1]: Startup finished in 710ms (kernel) + 12.865s (initrd) + 7.169s (userspace) = 20.744s. Apr 25 00:00:17.716285 (kubelet)[2208]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 25 00:00:18.103534 amazon-ssm-agent[2173]: 2026-04-25 00:00:18 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Apr 25 00:00:18.206045 amazon-ssm-agent[2173]: 2026-04-25 00:00:18 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2219) started Apr 25 00:00:18.233594 systemd[1]: Started sshd@1-172.31.27.158:22-4.175.71.9:44742.service - OpenSSH per-connection server daemon (4.175.71.9:44742). 
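The systemd startup summary above breaks total boot time into kernel, initrd, and userspace phases; the phases are sequential, so the total is their plain sum. A quick sanity check of the logged figures:

```python
# Phases reported above by systemd:
# "Startup finished in 710ms (kernel) + 12.865s (initrd) + 7.169s (userspace) = 20.744s"
kernel_s = 0.710
initrd_s = 12.865
userspace_s = 7.169

total_s = kernel_s + initrd_s + userspace_s
print(f"total: {total_s:.3f}s")  # 20.744s, matching the logged total
```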
Apr 25 00:00:18.304674 amazon-ssm-agent[2173]: 2026-04-25 00:00:18 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Apr 25 00:00:18.596744 kubelet[2208]: E0425 00:00:18.596634 2208 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 25 00:00:18.599310 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 25 00:00:18.599476 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 25 00:00:18.600066 systemd[1]: kubelet.service: Consumed 1.154s CPU time. Apr 25 00:00:18.695746 ntpd[1951]: Listen normally on 7 eth0 [fe80::4e6:e6ff:fe52:7d83%2]:123 Apr 25 00:00:19.233061 sshd[2227]: Accepted publickey for core from 4.175.71.9 port 44742 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg Apr 25 00:00:19.236443 sshd[2227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:00:19.245119 systemd-logind[1965]: New session 2 of user core. Apr 25 00:00:19.260311 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 25 00:00:19.913243 sshd[2227]: pam_unix(sshd:session): session closed for user core Apr 25 00:00:19.916832 systemd[1]: sshd@1-172.31.27.158:22-4.175.71.9:44742.service: Deactivated successfully. Apr 25 00:00:19.918810 systemd[1]: session-2.scope: Deactivated successfully. Apr 25 00:00:19.920553 systemd-logind[1965]: Session 2 logged out. Waiting for processes to exit. Apr 25 00:00:19.921874 systemd-logind[1965]: Removed session 2. 
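The kubelet failure above ("open /var/lib/kubelet/config.yaml: no such file or directory") is expected on a node that has not yet joined a cluster: on kubeadm-provisioned hosts that file is written during `kubeadm init`/`kubeadm join`, so the unit crash-loops until then. Purely as an illustration of what the unit is looking for (this is a hypothetical minimal file, not what kubeadm would actually generate):

```yaml
# Hypothetical minimal /var/lib/kubelet/config.yaml (illustrative only;
# kubeadm generates the real file during init/join).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd        # consistent with SystemdCgroup:true in the containerd CRI config above
staticPodPath: /etc/kubernetes/manifests
```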
Apr 25 00:00:20.092659 systemd[1]: Started sshd@2-172.31.27.158:22-4.175.71.9:44746.service - OpenSSH per-connection server daemon (4.175.71.9:44746). Apr 25 00:00:21.103026 sshd[2240]: Accepted publickey for core from 4.175.71.9 port 44746 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg Apr 25 00:00:21.103699 sshd[2240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:00:21.109762 systemd-logind[1965]: New session 3 of user core. Apr 25 00:00:21.115322 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 25 00:00:22.486891 systemd-resolved[1896]: Clock change detected. Flushing caches. Apr 25 00:00:22.588727 sshd[2240]: pam_unix(sshd:session): session closed for user core Apr 25 00:00:22.592332 systemd[1]: sshd@2-172.31.27.158:22-4.175.71.9:44746.service: Deactivated successfully. Apr 25 00:00:22.594426 systemd[1]: session-3.scope: Deactivated successfully. Apr 25 00:00:22.595960 systemd-logind[1965]: Session 3 logged out. Waiting for processes to exit. Apr 25 00:00:22.597312 systemd-logind[1965]: Removed session 3. Apr 25 00:00:22.755224 systemd[1]: Started sshd@3-172.31.27.158:22-4.175.71.9:44750.service - OpenSSH per-connection server daemon (4.175.71.9:44750). Apr 25 00:00:23.729419 sshd[2247]: Accepted publickey for core from 4.175.71.9 port 44750 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg Apr 25 00:00:23.731295 sshd[2247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:00:23.736051 systemd-logind[1965]: New session 4 of user core. Apr 25 00:00:23.742093 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 25 00:00:24.409087 sshd[2247]: pam_unix(sshd:session): session closed for user core Apr 25 00:00:24.413679 systemd-logind[1965]: Session 4 logged out. Waiting for processes to exit. Apr 25 00:00:24.414374 systemd[1]: sshd@3-172.31.27.158:22-4.175.71.9:44750.service: Deactivated successfully. 
Apr 25 00:00:24.416416 systemd[1]: session-4.scope: Deactivated successfully. Apr 25 00:00:24.417484 systemd-logind[1965]: Removed session 4. Apr 25 00:00:24.576180 systemd[1]: Started sshd@4-172.31.27.158:22-4.175.71.9:44754.service - OpenSSH per-connection server daemon (4.175.71.9:44754). Apr 25 00:00:25.519920 sshd[2254]: Accepted publickey for core from 4.175.71.9 port 44754 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg Apr 25 00:00:25.521373 sshd[2254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:00:25.526345 systemd-logind[1965]: New session 5 of user core. Apr 25 00:00:25.534081 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 25 00:00:26.039181 sudo[2257]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 25 00:00:26.039587 sudo[2257]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 25 00:00:26.055778 sudo[2257]: pam_unix(sudo:session): session closed for user root Apr 25 00:00:26.210789 sshd[2254]: pam_unix(sshd:session): session closed for user core Apr 25 00:00:26.214706 systemd[1]: sshd@4-172.31.27.158:22-4.175.71.9:44754.service: Deactivated successfully. Apr 25 00:00:26.217019 systemd[1]: session-5.scope: Deactivated successfully. Apr 25 00:00:26.218562 systemd-logind[1965]: Session 5 logged out. Waiting for processes to exit. Apr 25 00:00:26.220341 systemd-logind[1965]: Removed session 5. Apr 25 00:00:26.386237 systemd[1]: Started sshd@5-172.31.27.158:22-4.175.71.9:51392.service - OpenSSH per-connection server daemon (4.175.71.9:51392). Apr 25 00:00:27.373114 sshd[2262]: Accepted publickey for core from 4.175.71.9 port 51392 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg Apr 25 00:00:27.374933 sshd[2262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:00:27.380362 systemd-logind[1965]: New session 6 of user core. 
Apr 25 00:00:27.390087 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 25 00:00:27.894323 sudo[2266]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 25 00:00:27.894786 sudo[2266]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 25 00:00:27.899062 sudo[2266]: pam_unix(sudo:session): session closed for user root Apr 25 00:00:27.904849 sudo[2265]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 25 00:00:27.905289 sudo[2265]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 25 00:00:27.926118 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 25 00:00:27.929138 auditctl[2269]: No rules Apr 25 00:00:27.929600 systemd[1]: audit-rules.service: Deactivated successfully. Apr 25 00:00:27.929905 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 25 00:00:27.937482 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 25 00:00:27.965885 augenrules[2287]: No rules Apr 25 00:00:27.967777 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 25 00:00:27.968996 sudo[2265]: pam_unix(sudo:session): session closed for user root Apr 25 00:00:28.128483 sshd[2262]: pam_unix(sshd:session): session closed for user core Apr 25 00:00:28.132994 systemd[1]: sshd@5-172.31.27.158:22-4.175.71.9:51392.service: Deactivated successfully. Apr 25 00:00:28.135241 systemd[1]: session-6.scope: Deactivated successfully. Apr 25 00:00:28.136062 systemd-logind[1965]: Session 6 logged out. Waiting for processes to exit. Apr 25 00:00:28.137200 systemd-logind[1965]: Removed session 6. Apr 25 00:00:28.315338 systemd[1]: Started sshd@6-172.31.27.158:22-4.175.71.9:51398.service - OpenSSH per-connection server daemon (4.175.71.9:51398). 
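In the session above, the two rules files under /etc/audit/rules.d are deleted and audit-rules.service is restarted, after which both auditctl and augenrules report "No rules". For context, augenrules works by concatenating the `*.rules` drop-ins from that directory into /etc/audit/audit.rules. A hypothetical drop-in (file name and watch target are illustrative, not from this host) might look like:

```
# Hypothetical /etc/audit/rules.d/10-example.rules
-D                                  # delete any previously loaded rules
-b 8192                             # kernel audit backlog buffer size
-w /etc/passwd -p wa -k identity    # audit writes/attribute changes to /etc/passwd
```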
Apr 25 00:00:29.319723 sshd[2295]: Accepted publickey for core from 4.175.71.9 port 51398 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg Apr 25 00:00:29.321312 sshd[2295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:00:29.327576 systemd-logind[1965]: New session 7 of user core. Apr 25 00:00:29.333112 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 25 00:00:29.482351 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 25 00:00:29.488135 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 25 00:00:29.712600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 25 00:00:29.722376 (kubelet)[2306]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 25 00:00:29.772154 kubelet[2306]: E0425 00:00:29.772095 2306 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 25 00:00:29.778152 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 25 00:00:29.778362 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 25 00:00:29.855727 sudo[2313]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 25 00:00:29.856158 sudo[2313]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 25 00:00:30.283215 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Apr 25 00:00:30.285385 (dockerd)[2329]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 25 00:00:30.671902 dockerd[2329]: time="2026-04-25T00:00:30.671765186Z" level=info msg="Starting up" Apr 25 00:00:30.812365 dockerd[2329]: time="2026-04-25T00:00:30.812292110Z" level=info msg="Loading containers: start." Apr 25 00:00:30.949831 kernel: Initializing XFRM netlink socket Apr 25 00:00:30.979051 (udev-worker)[2354]: Network interface NamePolicy= disabled on kernel command line. Apr 25 00:00:31.047072 systemd-networkd[1895]: docker0: Link UP Apr 25 00:00:31.069700 dockerd[2329]: time="2026-04-25T00:00:31.069647817Z" level=info msg="Loading containers: done." Apr 25 00:00:31.091036 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4138536314-merged.mount: Deactivated successfully. Apr 25 00:00:31.096016 dockerd[2329]: time="2026-04-25T00:00:31.095960861Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 25 00:00:31.096216 dockerd[2329]: time="2026-04-25T00:00:31.096096250Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 25 00:00:31.096267 dockerd[2329]: time="2026-04-25T00:00:31.096248212Z" level=info msg="Daemon has completed initialization" Apr 25 00:00:31.135944 dockerd[2329]: time="2026-04-25T00:00:31.135873869Z" level=info msg="API listen on /run/docker.sock" Apr 25 00:00:31.136285 systemd[1]: Started docker.service - Docker Application Container Engine. 
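dockerd above autodetects the overlay2 storage driver and warns that native overlayfs diff is disabled because the kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR; this only degrades image-build performance, not runtime behavior. The autodetected choices can be pinned in /etc/docker/daemon.json. A minimal hypothetical example (the log-rotation values are illustrative, not this host's configuration):

```json
{
  "storage-driver": "overlay2",
  "log-driver": "json-file",
  "log-opts": { "max-size": "10m", "max-file": "3" }
}
```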
Apr 25 00:00:31.861152 containerd[1990]: time="2026-04-25T00:00:31.860785387Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\""
Apr 25 00:00:32.431776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount314665418.mount: Deactivated successfully.
Apr 25 00:00:34.231068 containerd[1990]: time="2026-04-25T00:00:34.230999709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:00:34.232897 containerd[1990]: time="2026-04-25T00:00:34.232613505Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193989"
Apr 25 00:00:34.234518 containerd[1990]: time="2026-04-25T00:00:34.234471026Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:00:34.237901 containerd[1990]: time="2026-04-25T00:00:34.237845356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:00:34.239500 containerd[1990]: time="2026-04-25T00:00:34.239297373Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 2.378445105s"
Apr 25 00:00:34.239500 containerd[1990]: time="2026-04-25T00:00:34.239345673Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\""
Apr 25 00:00:34.240513 containerd[1990]: time="2026-04-25T00:00:34.240463033Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\""
Apr 25 00:00:36.112218 containerd[1990]: time="2026-04-25T00:00:36.112153014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:00:36.113645 containerd[1990]: time="2026-04-25T00:00:36.113592176Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171447"
Apr 25 00:00:36.114425 containerd[1990]: time="2026-04-25T00:00:36.114371222Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:00:36.117841 containerd[1990]: time="2026-04-25T00:00:36.117592981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:00:36.119419 containerd[1990]: time="2026-04-25T00:00:36.119370404Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 1.878852879s"
Apr 25 00:00:36.119419 containerd[1990]: time="2026-04-25T00:00:36.119418319Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\""
Apr 25 00:00:36.120444 containerd[1990]: time="2026-04-25T00:00:36.120405225Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\""
Apr 25 00:00:37.626176 containerd[1990]: time="2026-04-25T00:00:37.626120162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:00:37.627644 containerd[1990]: time="2026-04-25T00:00:37.627587393Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289756"
Apr 25 00:00:37.629824 containerd[1990]: time="2026-04-25T00:00:37.628345750Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:00:37.631618 containerd[1990]: time="2026-04-25T00:00:37.631577797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:00:37.632906 containerd[1990]: time="2026-04-25T00:00:37.632865422Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 1.51240969s"
Apr 25 00:00:37.633002 containerd[1990]: time="2026-04-25T00:00:37.632912471Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\""
Apr 25 00:00:37.633507 containerd[1990]: time="2026-04-25T00:00:37.633478420Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\""
Apr 25 00:00:38.755135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1061132507.mount: Deactivated successfully.
Apr 25 00:00:39.425187 containerd[1990]: time="2026-04-25T00:00:39.425129230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:00:39.426284 containerd[1990]: time="2026-04-25T00:00:39.426119260Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010711"
Apr 25 00:00:39.427508 containerd[1990]: time="2026-04-25T00:00:39.427197550Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:00:39.429606 containerd[1990]: time="2026-04-25T00:00:39.429570225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:00:39.430316 containerd[1990]: time="2026-04-25T00:00:39.430277592Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 1.796754181s"
Apr 25 00:00:39.430398 containerd[1990]: time="2026-04-25T00:00:39.430323220Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\""
Apr 25 00:00:39.431084 containerd[1990]: time="2026-04-25T00:00:39.431056969Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Apr 25 00:00:39.903158 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 25 00:00:39.911155 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 25 00:00:39.927567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount316792591.mount: Deactivated successfully.
Apr 25 00:00:40.204624 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 25 00:00:40.206754 (kubelet)[2564]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 25 00:00:40.271075 kubelet[2564]: E0425 00:00:40.271030 2564 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 25 00:00:40.274772 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 25 00:00:40.275008 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 25 00:00:41.378201 containerd[1990]: time="2026-04-25T00:00:41.378135907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:00:41.380429 containerd[1990]: time="2026-04-25T00:00:41.380182360Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Apr 25 00:00:41.383595 containerd[1990]: time="2026-04-25T00:00:41.383541595Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:00:41.388840 containerd[1990]: time="2026-04-25T00:00:41.387887366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:00:41.390356 containerd[1990]: time="2026-04-25T00:00:41.389385416Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.958289506s"
Apr 25 00:00:41.390356 containerd[1990]: time="2026-04-25T00:00:41.389434095Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Apr 25 00:00:41.390356 containerd[1990]: time="2026-04-25T00:00:41.390201021Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 25 00:00:41.899442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1175998401.mount: Deactivated successfully.
Apr 25 00:00:41.912919 containerd[1990]: time="2026-04-25T00:00:41.912858204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:00:41.914887 containerd[1990]: time="2026-04-25T00:00:41.914762957Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Apr 25 00:00:41.918828 containerd[1990]: time="2026-04-25T00:00:41.917429948Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:00:41.921673 containerd[1990]: time="2026-04-25T00:00:41.921623706Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:00:41.923001 containerd[1990]: time="2026-04-25T00:00:41.922954795Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 532.720757ms"
Apr 25 00:00:41.923186 containerd[1990]: time="2026-04-25T00:00:41.923005603Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Apr 25 00:00:41.924169 containerd[1990]: time="2026-04-25T00:00:41.924112216Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Apr 25 00:00:42.481548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2613974270.mount: Deactivated successfully.
Apr 25 00:00:43.991631 containerd[1990]: time="2026-04-25T00:00:43.991557900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:00:43.993728 containerd[1990]: time="2026-04-25T00:00:43.993379604Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23719426"
Apr 25 00:00:43.997478 containerd[1990]: time="2026-04-25T00:00:43.996157974Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:00:44.008109 containerd[1990]: time="2026-04-25T00:00:44.005671761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:00:44.016883 containerd[1990]: time="2026-04-25T00:00:44.015072488Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 2.090901432s"
Apr 25 00:00:44.016883 containerd[1990]: time="2026-04-25T00:00:44.015126807Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Apr 25 00:00:46.238191 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Apr 25 00:00:47.227313 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 25 00:00:47.236520 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 25 00:00:47.276924 systemd[1]: Reloading requested from client PID 2708 ('systemctl') (unit session-7.scope)...
Apr 25 00:00:47.276947 systemd[1]: Reloading...
Apr 25 00:00:47.414982 zram_generator::config[2748]: No configuration found.
Apr 25 00:00:47.590063 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 25 00:00:47.677462 systemd[1]: Reloading finished in 399 ms.
Apr 25 00:00:47.741270 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 25 00:00:47.741409 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 25 00:00:47.741756 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 25 00:00:47.745098 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 25 00:00:48.001053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 25 00:00:48.014381 (kubelet)[2812]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 25 00:00:48.065257 kubelet[2812]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 25 00:00:48.065257 kubelet[2812]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 25 00:00:48.065257 kubelet[2812]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 25 00:00:48.069206 kubelet[2812]: I0425 00:00:48.069136 2812 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 25 00:00:48.956050 kubelet[2812]: I0425 00:00:48.955832 2812 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 25 00:00:48.956872 kubelet[2812]: I0425 00:00:48.956242 2812 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 25 00:00:48.956872 kubelet[2812]: I0425 00:00:48.956515 2812 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 25 00:00:49.007906 kubelet[2812]: I0425 00:00:49.007867 2812 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 25 00:00:49.009669 kubelet[2812]: E0425 00:00:49.008969 2812 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.27.158:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.27.158:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 25 00:00:49.017064 kubelet[2812]: E0425 00:00:49.017015 2812 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 25 00:00:49.017064 kubelet[2812]: I0425 00:00:49.017064 2812 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 25 00:00:49.030705 kubelet[2812]: I0425 00:00:49.030660 2812 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 25 00:00:49.031778 kubelet[2812]: I0425 00:00:49.031723 2812 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 25 00:00:49.036943 kubelet[2812]: I0425 00:00:49.031775 2812 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-27-158","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 25 00:00:49.037158 kubelet[2812]: I0425 00:00:49.036966 2812 topology_manager.go:138] "Creating topology manager with none policy"
Apr 25 00:00:49.037158 kubelet[2812]: I0425 00:00:49.036985 2812 container_manager_linux.go:303] "Creating device plugin manager"
Apr 25 00:00:49.037246 kubelet[2812]: I0425 00:00:49.037174 2812 state_mem.go:36] "Initialized new in-memory state store"
Apr 25 00:00:49.044423 kubelet[2812]: I0425 00:00:49.044361 2812 kubelet.go:480] "Attempting to sync node with API server"
Apr 25 00:00:49.044423 kubelet[2812]: I0425 00:00:49.044420 2812 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 25 00:00:49.045625 kubelet[2812]: I0425 00:00:49.044461 2812 kubelet.go:386] "Adding apiserver pod source"
Apr 25 00:00:49.045625 kubelet[2812]: I0425 00:00:49.044483 2812 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 25 00:00:49.057721 kubelet[2812]: E0425 00:00:49.057675 2812 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.27.158:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-158&limit=500&resourceVersion=0\": dial tcp 172.31.27.158:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 25 00:00:49.057897 kubelet[2812]: E0425 00:00:49.057820 2812 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.27.158:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.27.158:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 25 00:00:49.059144 kubelet[2812]: I0425 00:00:49.059117 2812 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 25 00:00:49.059734 kubelet[2812]: I0425 00:00:49.059713 2812 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 25 00:00:49.060684 kubelet[2812]: W0425 00:00:49.060651 2812 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 25 00:00:49.070629 kubelet[2812]: I0425 00:00:49.070590 2812 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 25 00:00:49.071058 kubelet[2812]: I0425 00:00:49.070656 2812 server.go:1289] "Started kubelet"
Apr 25 00:00:49.071058 kubelet[2812]: I0425 00:00:49.070922 2812 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 25 00:00:49.071827 kubelet[2812]: I0425 00:00:49.071784 2812 server.go:317] "Adding debug handlers to kubelet server"
Apr 25 00:00:49.075736 kubelet[2812]: I0425 00:00:49.075645 2812 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 25 00:00:49.076459 kubelet[2812]: I0425 00:00:49.076157 2812 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 25 00:00:49.078841 kubelet[2812]: I0425 00:00:49.077671 2812 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 25 00:00:49.078841 kubelet[2812]: E0425 00:00:49.076331 2812 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.27.158:6443/api/v1/namespaces/default/events\": dial tcp 172.31.27.158:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-27-158.18a970860ffe3583 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-27-158,UID:ip-172-31-27-158,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-27-158,},FirstTimestamp:2026-04-25 00:00:49.070617987 +0000 UTC m=+1.050855474,LastTimestamp:2026-04-25 00:00:49.070617987 +0000 UTC m=+1.050855474,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-27-158,}"
Apr 25 00:00:49.078841 kubelet[2812]: I0425 00:00:49.078626 2812 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 25 00:00:49.081937 kubelet[2812]: E0425 00:00:49.081910 2812 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-27-158\" not found"
Apr 25 00:00:49.082068 kubelet[2812]: I0425 00:00:49.082059 2812 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 25 00:00:49.082369 kubelet[2812]: I0425 00:00:49.082353 2812 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 25 00:00:49.082499 kubelet[2812]: I0425 00:00:49.082489 2812 reconciler.go:26] "Reconciler: start to sync state"
Apr 25 00:00:49.088843 kubelet[2812]: I0425 00:00:49.087500 2812 factory.go:223] Registration of the systemd container factory successfully
Apr 25 00:00:49.088843 kubelet[2812]: I0425 00:00:49.087604 2812 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 25 00:00:49.089866 kubelet[2812]: E0425 00:00:49.089835 2812 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.27.158:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.27.158:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 25 00:00:49.092205 kubelet[2812]: I0425 00:00:49.092181 2812 factory.go:223] Registration of the containerd container factory successfully
Apr 25 00:00:49.108971 kubelet[2812]: I0425 00:00:49.107514 2812 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 25 00:00:49.109350 kubelet[2812]: I0425 00:00:49.109173 2812 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 25 00:00:49.109350 kubelet[2812]: I0425 00:00:49.109199 2812 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 25 00:00:49.109350 kubelet[2812]: I0425 00:00:49.109227 2812 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 25 00:00:49.109350 kubelet[2812]: I0425 00:00:49.109237 2812 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 25 00:00:49.109350 kubelet[2812]: E0425 00:00:49.109287 2812 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 25 00:00:49.117453 kubelet[2812]: E0425 00:00:49.117426 2812 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 25 00:00:49.117925 kubelet[2812]: E0425 00:00:49.117604 2812 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-158?timeout=10s\": dial tcp 172.31.27.158:6443: connect: connection refused" interval="200ms"
Apr 25 00:00:49.119891 kubelet[2812]: E0425 00:00:49.118766 2812 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.27.158:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.27.158:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 25 00:00:49.129284 kubelet[2812]: I0425 00:00:49.129258 2812 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 25 00:00:49.129284 kubelet[2812]: I0425 00:00:49.129280 2812 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 25 00:00:49.129481 kubelet[2812]: I0425 00:00:49.129303 2812 state_mem.go:36] "Initialized new in-memory state store"
Apr 25 00:00:49.131138 kubelet[2812]: I0425 00:00:49.131107 2812 policy_none.go:49] "None policy: Start"
Apr 25 00:00:49.131138 kubelet[2812]: I0425 00:00:49.131137 2812 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 25 00:00:49.131298 kubelet[2812]: I0425 00:00:49.131152 2812 state_mem.go:35] "Initializing new in-memory state store"
Apr 25 00:00:49.137725 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 25 00:00:49.149930 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 25 00:00:49.154669 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 25 00:00:49.162865 kubelet[2812]: E0425 00:00:49.162794 2812 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 25 00:00:49.163103 kubelet[2812]: I0425 00:00:49.163081 2812 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 25 00:00:49.163172 kubelet[2812]: I0425 00:00:49.163106 2812 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 25 00:00:49.165532 kubelet[2812]: I0425 00:00:49.163832 2812 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 25 00:00:49.167558 kubelet[2812]: E0425 00:00:49.167158 2812 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 25 00:00:49.167558 kubelet[2812]: E0425 00:00:49.167219 2812 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-27-158\" not found"
Apr 25 00:00:49.225126 systemd[1]: Created slice kubepods-burstable-podb20685a2a1045c92882a93feb584e3fc.slice - libcontainer container kubepods-burstable-podb20685a2a1045c92882a93feb584e3fc.slice.
Apr 25 00:00:49.236089 kubelet[2812]: E0425 00:00:49.235870 2812 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-158\" not found" node="ip-172-31-27-158"
Apr 25 00:00:49.239435 systemd[1]: Created slice kubepods-burstable-pod678a2e9f4fa63c9a377f998fd3060081.slice - libcontainer container kubepods-burstable-pod678a2e9f4fa63c9a377f998fd3060081.slice.
Apr 25 00:00:49.251762 kubelet[2812]: E0425 00:00:49.251722 2812 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-158\" not found" node="ip-172-31-27-158"
Apr 25 00:00:49.255259 systemd[1]: Created slice kubepods-burstable-pod61bbc1e688cbe3eb89ffb849e8552ba8.slice - libcontainer container kubepods-burstable-pod61bbc1e688cbe3eb89ffb849e8552ba8.slice.
Apr 25 00:00:49.262755 kubelet[2812]: E0425 00:00:49.262720 2812 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-158\" not found" node="ip-172-31-27-158"
Apr 25 00:00:49.265254 kubelet[2812]: I0425 00:00:49.265223 2812 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-27-158"
Apr 25 00:00:49.265650 kubelet[2812]: E0425 00:00:49.265619 2812 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.27.158:6443/api/v1/nodes\": dial tcp 172.31.27.158:6443: connect: connection refused" node="ip-172-31-27-158"
Apr 25 00:00:49.284365 kubelet[2812]: I0425 00:00:49.284056 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20685a2a1045c92882a93feb584e3fc-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-158\" (UID: \"b20685a2a1045c92882a93feb584e3fc\") " pod="kube-system/kube-apiserver-ip-172-31-27-158"
Apr 25 00:00:49.284365 kubelet[2812]: I0425 00:00:49.284130 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20685a2a1045c92882a93feb584e3fc-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-158\" (UID: \"b20685a2a1045c92882a93feb584e3fc\") " pod="kube-system/kube-apiserver-ip-172-31-27-158"
Apr 25 00:00:49.284365 kubelet[2812]: I0425 00:00:49.284168 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/678a2e9f4fa63c9a377f998fd3060081-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-158\" (UID: \"678a2e9f4fa63c9a377f998fd3060081\") " pod="kube-system/kube-controller-manager-ip-172-31-27-158"
Apr 25 00:00:49.284365 kubelet[2812]: I0425 00:00:49.284184 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/678a2e9f4fa63c9a377f998fd3060081-k8s-certs\") pod \"kube-controller-manager-ip-172-31-27-158\" (UID: \"678a2e9f4fa63c9a377f998fd3060081\") " pod="kube-system/kube-controller-manager-ip-172-31-27-158"
Apr 25 00:00:49.284365 kubelet[2812]: I0425 00:00:49.284232 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/678a2e9f4fa63c9a377f998fd3060081-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-158\" (UID: \"678a2e9f4fa63c9a377f998fd3060081\") " pod="kube-system/kube-controller-manager-ip-172-31-27-158"
Apr 25 00:00:49.284619 kubelet[2812]: I0425 00:00:49.284258 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/678a2e9f4fa63c9a377f998fd3060081-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-27-158\" (UID: \"678a2e9f4fa63c9a377f998fd3060081\") " pod="kube-system/kube-controller-manager-ip-172-31-27-158"
Apr 25 00:00:49.284619 kubelet[2812]: I0425 00:00:49.284275 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20685a2a1045c92882a93feb584e3fc-ca-certs\") pod \"kube-apiserver-ip-172-31-27-158\" (UID: \"b20685a2a1045c92882a93feb584e3fc\") " pod="kube-system/kube-apiserver-ip-172-31-27-158"
Apr 25 00:00:49.284619 kubelet[2812]: I0425 00:00:49.284302 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/678a2e9f4fa63c9a377f998fd3060081-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-158\" (UID: \"678a2e9f4fa63c9a377f998fd3060081\") " pod="kube-system/kube-controller-manager-ip-172-31-27-158"
Apr 25 00:00:49.284619 kubelet[2812]: I0425 00:00:49.284327 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/61bbc1e688cbe3eb89ffb849e8552ba8-kubeconfig\") pod \"kube-scheduler-ip-172-31-27-158\" (UID: \"61bbc1e688cbe3eb89ffb849e8552ba8\") " pod="kube-system/kube-scheduler-ip-172-31-27-158"
Apr 25 00:00:49.319103 kubelet[2812]: E0425 00:00:49.319052 2812 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-158?timeout=10s\": dial tcp 172.31.27.158:6443: connect: connection refused" interval="400ms"
Apr 25 00:00:49.467982 kubelet[2812]: I0425 00:00:49.467946 2812 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-27-158"
Apr 25 00:00:49.468395 kubelet[2812]: E0425 00:00:49.468352 2812 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.27.158:6443/api/v1/nodes\": dial tcp 172.31.27.158:6443: connect: connection refused"
node="ip-172-31-27-158" Apr 25 00:00:49.540288 containerd[1990]: time="2026-04-25T00:00:49.540217020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-158,Uid:b20685a2a1045c92882a93feb584e3fc,Namespace:kube-system,Attempt:0,}" Apr 25 00:00:49.559690 containerd[1990]: time="2026-04-25T00:00:49.559634776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-158,Uid:678a2e9f4fa63c9a377f998fd3060081,Namespace:kube-system,Attempt:0,}" Apr 25 00:00:49.565228 containerd[1990]: time="2026-04-25T00:00:49.565186228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-158,Uid:61bbc1e688cbe3eb89ffb849e8552ba8,Namespace:kube-system,Attempt:0,}" Apr 25 00:00:49.720593 kubelet[2812]: E0425 00:00:49.720530 2812 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-158?timeout=10s\": dial tcp 172.31.27.158:6443: connect: connection refused" interval="800ms" Apr 25 00:00:49.870179 kubelet[2812]: I0425 00:00:49.870073 2812 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-27-158" Apr 25 00:00:49.871079 kubelet[2812]: E0425 00:00:49.870997 2812 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.27.158:6443/api/v1/nodes\": dial tcp 172.31.27.158:6443: connect: connection refused" node="ip-172-31-27-158" Apr 25 00:00:49.901616 kubelet[2812]: E0425 00:00:49.901552 2812 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.27.158:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.27.158:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 25 00:00:50.000524 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount4034436446.mount: Deactivated successfully. Apr 25 00:00:50.014974 containerd[1990]: time="2026-04-25T00:00:50.014910211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 25 00:00:50.016523 containerd[1990]: time="2026-04-25T00:00:50.016471064Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 25 00:00:50.017788 containerd[1990]: time="2026-04-25T00:00:50.017724921Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Apr 25 00:00:50.019025 containerd[1990]: time="2026-04-25T00:00:50.018918596Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 25 00:00:50.021837 containerd[1990]: time="2026-04-25T00:00:50.020454117Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 25 00:00:50.021837 containerd[1990]: time="2026-04-25T00:00:50.020570758Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 25 00:00:50.021837 containerd[1990]: time="2026-04-25T00:00:50.021711763Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 25 00:00:50.024216 containerd[1990]: time="2026-04-25T00:00:50.024171099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 25 00:00:50.026625 containerd[1990]: time="2026-04-25T00:00:50.026578177Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 466.646121ms" Apr 25 00:00:50.027567 containerd[1990]: time="2026-04-25T00:00:50.027528789Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 487.193564ms" Apr 25 00:00:50.030374 containerd[1990]: time="2026-04-25T00:00:50.030334665Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 465.063701ms" Apr 25 00:00:50.105657 kubelet[2812]: E0425 00:00:50.105589 2812 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.27.158:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.27.158:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 25 00:00:50.264785 containerd[1990]: time="2026-04-25T00:00:50.261238693Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:00:50.264785 containerd[1990]: time="2026-04-25T00:00:50.261554508Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:00:50.264785 containerd[1990]: time="2026-04-25T00:00:50.261595135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:00:50.264785 containerd[1990]: time="2026-04-25T00:00:50.261705813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:00:50.270594 containerd[1990]: time="2026-04-25T00:00:50.269009364Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:00:50.270594 containerd[1990]: time="2026-04-25T00:00:50.269091600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:00:50.270594 containerd[1990]: time="2026-04-25T00:00:50.269109488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:00:50.270594 containerd[1990]: time="2026-04-25T00:00:50.269236992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:00:50.276508 containerd[1990]: time="2026-04-25T00:00:50.275061277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:00:50.276508 containerd[1990]: time="2026-04-25T00:00:50.275139676Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:00:50.276508 containerd[1990]: time="2026-04-25T00:00:50.275163245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:00:50.276508 containerd[1990]: time="2026-04-25T00:00:50.275270472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:00:50.307087 systemd[1]: Started cri-containerd-7c06d652a475da75c8fa72d55d8dfab7a1dd27443dac69e8b1a4205609cd9eb7.scope - libcontainer container 7c06d652a475da75c8fa72d55d8dfab7a1dd27443dac69e8b1a4205609cd9eb7. Apr 25 00:00:50.323099 systemd[1]: Started cri-containerd-1396be56fa60d75d6a9bc1ccf20ee9e544c0c6eb6bedc8fb1eba7d71262f6ab3.scope - libcontainer container 1396be56fa60d75d6a9bc1ccf20ee9e544c0c6eb6bedc8fb1eba7d71262f6ab3. Apr 25 00:00:50.339125 systemd[1]: Started cri-containerd-65cec554f281c6c3a15771b3ccaeee95c5d560e419f4a9b201de3034fde1e1d0.scope - libcontainer container 65cec554f281c6c3a15771b3ccaeee95c5d560e419f4a9b201de3034fde1e1d0. 
Apr 25 00:00:50.411477 kubelet[2812]: E0425 00:00:50.411260 2812 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.27.158:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.27.158:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 25 00:00:50.434614 containerd[1990]: time="2026-04-25T00:00:50.433977916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-158,Uid:61bbc1e688cbe3eb89ffb849e8552ba8,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c06d652a475da75c8fa72d55d8dfab7a1dd27443dac69e8b1a4205609cd9eb7\"" Apr 25 00:00:50.451676 containerd[1990]: time="2026-04-25T00:00:50.451485695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-158,Uid:b20685a2a1045c92882a93feb584e3fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"1396be56fa60d75d6a9bc1ccf20ee9e544c0c6eb6bedc8fb1eba7d71262f6ab3\"" Apr 25 00:00:50.460269 containerd[1990]: time="2026-04-25T00:00:50.459881198Z" level=info msg="CreateContainer within sandbox \"1396be56fa60d75d6a9bc1ccf20ee9e544c0c6eb6bedc8fb1eba7d71262f6ab3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 25 00:00:50.461176 containerd[1990]: time="2026-04-25T00:00:50.461137557Z" level=info msg="CreateContainer within sandbox \"7c06d652a475da75c8fa72d55d8dfab7a1dd27443dac69e8b1a4205609cd9eb7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 25 00:00:50.477249 containerd[1990]: time="2026-04-25T00:00:50.477098836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-158,Uid:678a2e9f4fa63c9a377f998fd3060081,Namespace:kube-system,Attempt:0,} returns sandbox id \"65cec554f281c6c3a15771b3ccaeee95c5d560e419f4a9b201de3034fde1e1d0\"" Apr 25 00:00:50.483204 containerd[1990]: time="2026-04-25T00:00:50.482996103Z" 
level=info msg="CreateContainer within sandbox \"7c06d652a475da75c8fa72d55d8dfab7a1dd27443dac69e8b1a4205609cd9eb7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b55ac14f753b18181235717747bfdbfbe4d4d09812a157b6f3a62d5d87dde3d2\"" Apr 25 00:00:50.484706 containerd[1990]: time="2026-04-25T00:00:50.484115983Z" level=info msg="StartContainer for \"b55ac14f753b18181235717747bfdbfbe4d4d09812a157b6f3a62d5d87dde3d2\"" Apr 25 00:00:50.485171 containerd[1990]: time="2026-04-25T00:00:50.485120684Z" level=info msg="CreateContainer within sandbox \"65cec554f281c6c3a15771b3ccaeee95c5d560e419f4a9b201de3034fde1e1d0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 25 00:00:50.497579 containerd[1990]: time="2026-04-25T00:00:50.497409871Z" level=info msg="CreateContainer within sandbox \"1396be56fa60d75d6a9bc1ccf20ee9e544c0c6eb6bedc8fb1eba7d71262f6ab3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"22e475ec6b8c4fb96a2c00e0aa43212a8a629668261f6bc05f223ee8078e81eb\"" Apr 25 00:00:50.498913 containerd[1990]: time="2026-04-25T00:00:50.498784963Z" level=info msg="StartContainer for \"22e475ec6b8c4fb96a2c00e0aa43212a8a629668261f6bc05f223ee8078e81eb\"" Apr 25 00:00:50.502341 containerd[1990]: time="2026-04-25T00:00:50.502201983Z" level=info msg="CreateContainer within sandbox \"65cec554f281c6c3a15771b3ccaeee95c5d560e419f4a9b201de3034fde1e1d0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"859ddf4f3687150767fa3e7b0495b33caf18f0057d30c1f3411b1cc513f68451\"" Apr 25 00:00:50.503852 containerd[1990]: time="2026-04-25T00:00:50.502934813Z" level=info msg="StartContainer for \"859ddf4f3687150767fa3e7b0495b33caf18f0057d30c1f3411b1cc513f68451\"" Apr 25 00:00:50.522160 kubelet[2812]: E0425 00:00:50.522027 2812 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.27.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-158?timeout=10s\": dial tcp 172.31.27.158:6443: connect: connection refused" interval="1.6s" Apr 25 00:00:50.551062 systemd[1]: Started cri-containerd-b55ac14f753b18181235717747bfdbfbe4d4d09812a157b6f3a62d5d87dde3d2.scope - libcontainer container b55ac14f753b18181235717747bfdbfbe4d4d09812a157b6f3a62d5d87dde3d2. Apr 25 00:00:50.562459 kubelet[2812]: E0425 00:00:50.562348 2812 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.27.158:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-158&limit=500&resourceVersion=0\": dial tcp 172.31.27.158:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 25 00:00:50.568718 systemd[1]: Started cri-containerd-22e475ec6b8c4fb96a2c00e0aa43212a8a629668261f6bc05f223ee8078e81eb.scope - libcontainer container 22e475ec6b8c4fb96a2c00e0aa43212a8a629668261f6bc05f223ee8078e81eb. Apr 25 00:00:50.586243 systemd[1]: Started cri-containerd-859ddf4f3687150767fa3e7b0495b33caf18f0057d30c1f3411b1cc513f68451.scope - libcontainer container 859ddf4f3687150767fa3e7b0495b33caf18f0057d30c1f3411b1cc513f68451. 
Apr 25 00:00:50.660689 containerd[1990]: time="2026-04-25T00:00:50.660639864Z" level=info msg="StartContainer for \"b55ac14f753b18181235717747bfdbfbe4d4d09812a157b6f3a62d5d87dde3d2\" returns successfully" Apr 25 00:00:50.675090 kubelet[2812]: I0425 00:00:50.674986 2812 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-27-158" Apr 25 00:00:50.677030 kubelet[2812]: E0425 00:00:50.676941 2812 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.27.158:6443/api/v1/nodes\": dial tcp 172.31.27.158:6443: connect: connection refused" node="ip-172-31-27-158" Apr 25 00:00:50.696261 containerd[1990]: time="2026-04-25T00:00:50.695848541Z" level=info msg="StartContainer for \"859ddf4f3687150767fa3e7b0495b33caf18f0057d30c1f3411b1cc513f68451\" returns successfully" Apr 25 00:00:50.696577 containerd[1990]: time="2026-04-25T00:00:50.696538588Z" level=info msg="StartContainer for \"22e475ec6b8c4fb96a2c00e0aa43212a8a629668261f6bc05f223ee8078e81eb\" returns successfully" Apr 25 00:00:51.071961 kubelet[2812]: E0425 00:00:51.071916 2812 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.27.158:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.27.158:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 25 00:00:51.137650 kubelet[2812]: E0425 00:00:51.137610 2812 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-158\" not found" node="ip-172-31-27-158" Apr 25 00:00:51.138495 kubelet[2812]: E0425 00:00:51.138456 2812 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-158\" not found" node="ip-172-31-27-158" Apr 25 00:00:51.141106 kubelet[2812]: E0425 00:00:51.141079 2812 
kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-158\" not found" node="ip-172-31-27-158" Apr 25 00:00:52.143363 kubelet[2812]: E0425 00:00:52.143121 2812 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-158\" not found" node="ip-172-31-27-158" Apr 25 00:00:52.144346 kubelet[2812]: E0425 00:00:52.144160 2812 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-158\" not found" node="ip-172-31-27-158" Apr 25 00:00:52.283925 kubelet[2812]: I0425 00:00:52.281030 2812 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-27-158" Apr 25 00:00:52.690590 kubelet[2812]: E0425 00:00:52.690541 2812 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-27-158\" not found" node="ip-172-31-27-158" Apr 25 00:00:52.736007 kubelet[2812]: E0425 00:00:52.735893 2812 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-27-158.18a970860ffe3583 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-27-158,UID:ip-172-31-27-158,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-27-158,},FirstTimestamp:2026-04-25 00:00:49.070617987 +0000 UTC m=+1.050855474,LastTimestamp:2026-04-25 00:00:49.070617987 +0000 UTC m=+1.050855474,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-27-158,}" Apr 25 00:00:52.767732 kubelet[2812]: I0425 00:00:52.767691 2812 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-27-158" Apr 25 00:00:52.767732 kubelet[2812]: E0425 
00:00:52.767740 2812 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-27-158\": node \"ip-172-31-27-158\" not found" Apr 25 00:00:52.783832 kubelet[2812]: E0425 00:00:52.783751 2812 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-27-158\" not found" Apr 25 00:00:52.801354 kubelet[2812]: E0425 00:00:52.801236 2812 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-27-158.18a9708612c80fe5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-27-158,UID:ip-172-31-27-158,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-27-158,},FirstTimestamp:2026-04-25 00:00:49.117401061 +0000 UTC m=+1.097638566,LastTimestamp:2026-04-25 00:00:49.117401061 +0000 UTC m=+1.097638566,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-27-158,}" Apr 25 00:00:52.884573 kubelet[2812]: E0425 00:00:52.884521 2812 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-27-158\" not found" Apr 25 00:00:52.987845 kubelet[2812]: E0425 00:00:52.987766 2812 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-27-158\" not found" Apr 25 00:00:53.088429 kubelet[2812]: E0425 00:00:53.088375 2812 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-27-158\" not found" Apr 25 00:00:53.189638 kubelet[2812]: E0425 00:00:53.189570 2812 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-27-158\" not found" Apr 25 00:00:53.290064 kubelet[2812]: E0425 00:00:53.289904 2812 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-27-158\" not found" Apr 25 00:00:53.391093 kubelet[2812]: E0425 00:00:53.391021 2812 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-27-158\" not found" Apr 25 00:00:53.495138 kubelet[2812]: I0425 00:00:53.495102 2812 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-27-158" Apr 25 00:00:53.505967 kubelet[2812]: E0425 00:00:53.505912 2812 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-27-158\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-27-158" Apr 25 00:00:53.505967 kubelet[2812]: I0425 00:00:53.505964 2812 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-27-158" Apr 25 00:00:53.508457 kubelet[2812]: E0425 00:00:53.508422 2812 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-27-158\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-27-158" Apr 25 00:00:53.508457 kubelet[2812]: I0425 00:00:53.508453 2812 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-27-158" Apr 25 00:00:53.510867 kubelet[2812]: E0425 00:00:53.510816 2812 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-27-158\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-27-158" Apr 25 00:00:53.608061 kubelet[2812]: I0425 00:00:53.607918 2812 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-27-158" Apr 25 00:00:54.052597 kubelet[2812]: I0425 00:00:54.052539 2812 apiserver.go:52] "Watching apiserver" Apr 25 00:00:54.082642 kubelet[2812]: I0425 
00:00:54.082569 2812 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 25 00:00:54.988201 systemd[1]: Reloading requested from client PID 3101 ('systemctl') (unit session-7.scope)... Apr 25 00:00:54.988225 systemd[1]: Reloading... Apr 25 00:00:55.118827 zram_generator::config[3141]: No configuration found. Apr 25 00:00:55.273084 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 25 00:00:55.377387 systemd[1]: Reloading finished in 388 ms. Apr 25 00:00:55.432164 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 25 00:00:55.441488 systemd[1]: kubelet.service: Deactivated successfully. Apr 25 00:00:55.441889 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 25 00:00:55.441972 systemd[1]: kubelet.service: Consumed 1.496s CPU time, 129.0M memory peak, 0B memory swap peak. Apr 25 00:00:55.449280 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 25 00:00:55.692850 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 25 00:00:55.704409 (kubelet)[3201]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 25 00:00:55.794874 kubelet[3201]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 25 00:00:55.794874 kubelet[3201]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Apr 25 00:00:55.794874 kubelet[3201]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 25 00:00:55.794874 kubelet[3201]: I0425 00:00:55.794025 3201 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 25 00:00:55.803383 kubelet[3201]: I0425 00:00:55.803333 3201 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 25 00:00:55.803383 kubelet[3201]: I0425 00:00:55.803371 3201 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 25 00:00:55.803722 kubelet[3201]: I0425 00:00:55.803697 3201 server.go:956] "Client rotation is on, will bootstrap in background" Apr 25 00:00:55.805025 kubelet[3201]: I0425 00:00:55.804996 3201 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 25 00:00:55.810578 kubelet[3201]: I0425 00:00:55.810361 3201 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 25 00:00:55.819619 kubelet[3201]: E0425 00:00:55.819575 3201 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 25 00:00:55.819619 kubelet[3201]: I0425 00:00:55.819613 3201 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 25 00:00:55.821745 kubelet[3201]: I0425 00:00:55.821708 3201 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /"
Apr 25 00:00:55.822110 kubelet[3201]: I0425 00:00:55.822067 3201 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 25 00:00:55.822301 kubelet[3201]: I0425 00:00:55.822105 3201 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-27-158","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 25 00:00:55.822600 kubelet[3201]: I0425 00:00:55.822305 3201 topology_manager.go:138] "Creating topology manager with none policy"
Apr 25 00:00:55.822600 kubelet[3201]: I0425 00:00:55.822321 3201 container_manager_linux.go:303] "Creating device plugin manager"
Apr 25 00:00:55.822600 kubelet[3201]: I0425 00:00:55.822380 3201 state_mem.go:36] "Initialized new in-memory state store"
Apr 25 00:00:55.822848 kubelet[3201]: I0425 00:00:55.822828 3201 kubelet.go:480] "Attempting to sync node with API server"
Apr 25 00:00:55.822923 kubelet[3201]: I0425 00:00:55.822863 3201 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 25 00:00:55.822923 kubelet[3201]: I0425 00:00:55.822897 3201 kubelet.go:386] "Adding apiserver pod source"
Apr 25 00:00:55.822923 kubelet[3201]: I0425 00:00:55.822921 3201 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 25 00:00:55.826837 kubelet[3201]: I0425 00:00:55.826401 3201 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 25 00:00:55.827193 kubelet[3201]: I0425 00:00:55.827174 3201 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 25 00:00:55.843236 kubelet[3201]: I0425 00:00:55.843193 3201 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 25 00:00:55.843370 kubelet[3201]: I0425 00:00:55.843251 3201 server.go:1289] "Started kubelet"
Apr 25 00:00:55.845951 kubelet[3201]: I0425 00:00:55.844831 3201 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 25 00:00:55.852458 kubelet[3201]: I0425 00:00:55.851657 3201 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 25 00:00:55.853310 kubelet[3201]: I0425 00:00:55.853287 3201 server.go:317] "Adding debug handlers to kubelet server"
Apr 25 00:00:55.860126 kubelet[3201]: I0425 00:00:55.860102 3201 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 25 00:00:55.865589 kubelet[3201]: I0425 00:00:55.861147 3201 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 25 00:00:55.865589 kubelet[3201]: I0425 00:00:55.861292 3201 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 25 00:00:55.865589 kubelet[3201]: I0425 00:00:55.861412 3201 reconciler.go:26] "Reconciler: start to sync state"
Apr 25 00:00:55.865589 kubelet[3201]: I0425 00:00:55.861480 3201 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 25 00:00:55.865589 kubelet[3201]: I0425 00:00:55.862081 3201 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 25 00:00:55.871557 kubelet[3201]: I0425 00:00:55.871526 3201 factory.go:223] Registration of the containerd container factory successfully
Apr 25 00:00:55.871557 kubelet[3201]: I0425 00:00:55.871553 3201 factory.go:223] Registration of the systemd container factory successfully
Apr 25 00:00:55.871737 kubelet[3201]: I0425 00:00:55.871669 3201 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 25 00:00:55.872157 kubelet[3201]: E0425 00:00:55.872131 3201 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 25 00:00:55.900234 kubelet[3201]: I0425 00:00:55.899604 3201 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 25 00:00:55.901299 kubelet[3201]: I0425 00:00:55.901270 3201 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 25 00:00:55.901299 kubelet[3201]: I0425 00:00:55.901299 3201 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 25 00:00:55.901466 kubelet[3201]: I0425 00:00:55.901325 3201 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 25 00:00:55.901466 kubelet[3201]: I0425 00:00:55.901334 3201 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 25 00:00:55.901466 kubelet[3201]: E0425 00:00:55.901380 3201 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 25 00:00:55.936264 kubelet[3201]: I0425 00:00:55.936222 3201 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 25 00:00:55.936264 kubelet[3201]: I0425 00:00:55.936246 3201 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 25 00:00:55.936264 kubelet[3201]: I0425 00:00:55.936268 3201 state_mem.go:36] "Initialized new in-memory state store"
Apr 25 00:00:55.936513 kubelet[3201]: I0425 00:00:55.936440 3201 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 25 00:00:55.936557 kubelet[3201]: I0425 00:00:55.936521 3201 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 25 00:00:55.936557 kubelet[3201]: I0425 00:00:55.936547 3201 policy_none.go:49] "None policy: Start"
Apr 25 00:00:55.936655 kubelet[3201]: I0425 00:00:55.936564 3201 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 25 00:00:55.936655 kubelet[3201]: I0425 00:00:55.936579 3201 state_mem.go:35] "Initializing new in-memory state store"
Apr 25 00:00:55.936730 kubelet[3201]: I0425 00:00:55.936707 3201 state_mem.go:75] "Updated machine memory state"
Apr 25 00:00:55.941910 kubelet[3201]: E0425 00:00:55.941468 3201 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 25 00:00:55.941910 kubelet[3201]: I0425 00:00:55.941668 3201 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 25 00:00:55.941910 kubelet[3201]: I0425 00:00:55.941683 3201 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 25 00:00:55.942125 kubelet[3201]: I0425 00:00:55.941968 3201 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 25 00:00:55.946964 kubelet[3201]: E0425 00:00:55.945238 3201 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 25 00:00:56.005125 kubelet[3201]: I0425 00:00:56.005065 3201 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-27-158"
Apr 25 00:00:56.005854 kubelet[3201]: I0425 00:00:56.005628 3201 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-27-158"
Apr 25 00:00:56.007507 kubelet[3201]: I0425 00:00:56.007479 3201 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-27-158"
Apr 25 00:00:56.019394 kubelet[3201]: E0425 00:00:56.019346 3201 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-27-158\" already exists" pod="kube-system/kube-scheduler-ip-172-31-27-158"
Apr 25 00:00:56.052720 kubelet[3201]: I0425 00:00:56.052458 3201 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-27-158"
Apr 25 00:00:56.063258 kubelet[3201]: I0425 00:00:56.063202 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20685a2a1045c92882a93feb584e3fc-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-158\" (UID: \"b20685a2a1045c92882a93feb584e3fc\") " pod="kube-system/kube-apiserver-ip-172-31-27-158"
Apr 25 00:00:56.063418 kubelet[3201]: I0425 00:00:56.063265 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/678a2e9f4fa63c9a377f998fd3060081-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-158\" (UID: \"678a2e9f4fa63c9a377f998fd3060081\") " pod="kube-system/kube-controller-manager-ip-172-31-27-158"
Apr 25 00:00:56.063418 kubelet[3201]: I0425 00:00:56.063293 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/678a2e9f4fa63c9a377f998fd3060081-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-27-158\" (UID: \"678a2e9f4fa63c9a377f998fd3060081\") " pod="kube-system/kube-controller-manager-ip-172-31-27-158"
Apr 25 00:00:56.063418 kubelet[3201]: I0425 00:00:56.063317 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/61bbc1e688cbe3eb89ffb849e8552ba8-kubeconfig\") pod \"kube-scheduler-ip-172-31-27-158\" (UID: \"61bbc1e688cbe3eb89ffb849e8552ba8\") " pod="kube-system/kube-scheduler-ip-172-31-27-158"
Apr 25 00:00:56.063418 kubelet[3201]: I0425 00:00:56.063336 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20685a2a1045c92882a93feb584e3fc-ca-certs\") pod \"kube-apiserver-ip-172-31-27-158\" (UID: \"b20685a2a1045c92882a93feb584e3fc\") " pod="kube-system/kube-apiserver-ip-172-31-27-158"
Apr 25 00:00:56.063418 kubelet[3201]: I0425 00:00:56.063359 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20685a2a1045c92882a93feb584e3fc-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-158\" (UID: \"b20685a2a1045c92882a93feb584e3fc\") " pod="kube-system/kube-apiserver-ip-172-31-27-158"
Apr 25 00:00:56.063678 kubelet[3201]: I0425 00:00:56.063379 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/678a2e9f4fa63c9a377f998fd3060081-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-158\" (UID: \"678a2e9f4fa63c9a377f998fd3060081\") " pod="kube-system/kube-controller-manager-ip-172-31-27-158"
Apr 25 00:00:56.063678 kubelet[3201]: I0425 00:00:56.063400 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/678a2e9f4fa63c9a377f998fd3060081-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-158\" (UID: \"678a2e9f4fa63c9a377f998fd3060081\") " pod="kube-system/kube-controller-manager-ip-172-31-27-158"
Apr 25 00:00:56.063678 kubelet[3201]: I0425 00:00:56.063420 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/678a2e9f4fa63c9a377f998fd3060081-k8s-certs\") pod \"kube-controller-manager-ip-172-31-27-158\" (UID: \"678a2e9f4fa63c9a377f998fd3060081\") " pod="kube-system/kube-controller-manager-ip-172-31-27-158"
Apr 25 00:00:56.065727 kubelet[3201]: I0425 00:00:56.064221 3201 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-27-158"
Apr 25 00:00:56.065727 kubelet[3201]: I0425 00:00:56.064329 3201 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-27-158"
Apr 25 00:00:56.824054 kubelet[3201]: I0425 00:00:56.824004 3201 apiserver.go:52] "Watching apiserver"
Apr 25 00:00:56.861607 kubelet[3201]: I0425 00:00:56.861555 3201 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 25 00:00:56.924182 kubelet[3201]: I0425 00:00:56.923775 3201 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-27-158"
Apr 25 00:00:56.938920 kubelet[3201]: E0425 00:00:56.938869 3201 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-27-158\" already exists" pod="kube-system/kube-scheduler-ip-172-31-27-158"
Apr 25 00:00:56.981528 kubelet[3201]: I0425 00:00:56.981003 3201 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-27-158" podStartSLOduration=0.980959275 podStartE2EDuration="980.959275ms" podCreationTimestamp="2026-04-25 00:00:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-25 00:00:56.962698661 +0000 UTC m=+1.249576163" watchObservedRunningTime="2026-04-25 00:00:56.980959275 +0000 UTC m=+1.267836782"
Apr 25 00:00:56.994787 kubelet[3201]: I0425 00:00:56.994055 3201 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-27-158" podStartSLOduration=0.994035812 podStartE2EDuration="994.035812ms" podCreationTimestamp="2026-04-25 00:00:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-25 00:00:56.982368154 +0000 UTC m=+1.269245659" watchObservedRunningTime="2026-04-25 00:00:56.994035812 +0000 UTC m=+1.280913312"
Apr 25 00:00:56.996215 kubelet[3201]: I0425 00:00:56.996042 3201 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-27-158" podStartSLOduration=3.996005446 podStartE2EDuration="3.996005446s" podCreationTimestamp="2026-04-25 00:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-25 00:00:56.994618583 +0000 UTC m=+1.281496091" watchObservedRunningTime="2026-04-25 00:00:56.996005446 +0000 UTC m=+1.282882951"
Apr 25 00:01:00.751013 update_engine[1967]: I20260425 00:01:00.750916 1967 update_attempter.cc:509] Updating boot flags...
Apr 25 00:01:00.823842 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (3262)
Apr 25 00:01:01.096557 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (3263)
Apr 25 00:01:01.201089 kubelet[3201]: I0425 00:01:01.201054 3201 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 25 00:01:01.201949 kubelet[3201]: I0425 00:01:01.201881 3201 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 25 00:01:01.202092 containerd[1990]: time="2026-04-25T00:01:01.201446854Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 25 00:01:01.392899 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (3263)
Apr 25 00:01:02.525792 systemd[1]: Created slice kubepods-besteffort-pod42ae500e_84f1_4e99_94c2_bf2b0a04af34.slice - libcontainer container kubepods-besteffort-pod42ae500e_84f1_4e99_94c2_bf2b0a04af34.slice.
Apr 25 00:01:02.611218 kubelet[3201]: I0425 00:01:02.611113 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/42ae500e-84f1-4e99-94c2-bf2b0a04af34-kube-proxy\") pod \"kube-proxy-7tt2j\" (UID: \"42ae500e-84f1-4e99-94c2-bf2b0a04af34\") " pod="kube-system/kube-proxy-7tt2j"
Apr 25 00:01:02.611218 kubelet[3201]: I0425 00:01:02.611173 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42ae500e-84f1-4e99-94c2-bf2b0a04af34-lib-modules\") pod \"kube-proxy-7tt2j\" (UID: \"42ae500e-84f1-4e99-94c2-bf2b0a04af34\") " pod="kube-system/kube-proxy-7tt2j"
Apr 25 00:01:02.614424 kubelet[3201]: I0425 00:01:02.611250 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42ae500e-84f1-4e99-94c2-bf2b0a04af34-xtables-lock\") pod \"kube-proxy-7tt2j\" (UID: \"42ae500e-84f1-4e99-94c2-bf2b0a04af34\") " pod="kube-system/kube-proxy-7tt2j"
Apr 25 00:01:02.614424 kubelet[3201]: I0425 00:01:02.611276 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn5bd\" (UniqueName: \"kubernetes.io/projected/42ae500e-84f1-4e99-94c2-bf2b0a04af34-kube-api-access-rn5bd\") pod \"kube-proxy-7tt2j\" (UID: \"42ae500e-84f1-4e99-94c2-bf2b0a04af34\") " pod="kube-system/kube-proxy-7tt2j"
Apr 25 00:01:02.866141 systemd[1]: Created slice kubepods-besteffort-podc70c6034_240c_48c4_a391_118af0f72156.slice - libcontainer container kubepods-besteffort-podc70c6034_240c_48c4_a391_118af0f72156.slice.
Apr 25 00:01:02.871843 containerd[1990]: time="2026-04-25T00:01:02.869773854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7tt2j,Uid:42ae500e-84f1-4e99-94c2-bf2b0a04af34,Namespace:kube-system,Attempt:0,}"
Apr 25 00:01:02.930869 kubelet[3201]: I0425 00:01:02.928181 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c70c6034-240c-48c4-a391-118af0f72156-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-zsfts\" (UID: \"c70c6034-240c-48c4-a391-118af0f72156\") " pod="tigera-operator/tigera-operator-6bf85f8dd-zsfts"
Apr 25 00:01:02.930869 kubelet[3201]: I0425 00:01:02.928260 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6gsr\" (UniqueName: \"kubernetes.io/projected/c70c6034-240c-48c4-a391-118af0f72156-kube-api-access-g6gsr\") pod \"tigera-operator-6bf85f8dd-zsfts\" (UID: \"c70c6034-240c-48c4-a391-118af0f72156\") " pod="tigera-operator/tigera-operator-6bf85f8dd-zsfts"
Apr 25 00:01:03.094096 containerd[1990]: time="2026-04-25T00:01:03.090289941Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 25 00:01:03.094096 containerd[1990]: time="2026-04-25T00:01:03.091750378Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 25 00:01:03.094096 containerd[1990]: time="2026-04-25T00:01:03.091855231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 25 00:01:03.094096 containerd[1990]: time="2026-04-25T00:01:03.092758158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 25 00:01:03.205037 containerd[1990]: time="2026-04-25T00:01:03.204725529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-zsfts,Uid:c70c6034-240c-48c4-a391-118af0f72156,Namespace:tigera-operator,Attempt:0,}"
Apr 25 00:01:03.332239 systemd[1]: Started cri-containerd-476858105fd8486808d6ce1993fdd5fab2ff664bf05c108ee0b894fbeceb5ca7.scope - libcontainer container 476858105fd8486808d6ce1993fdd5fab2ff664bf05c108ee0b894fbeceb5ca7.
Apr 25 00:01:03.379953 containerd[1990]: time="2026-04-25T00:01:03.379597958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7tt2j,Uid:42ae500e-84f1-4e99-94c2-bf2b0a04af34,Namespace:kube-system,Attempt:0,} returns sandbox id \"476858105fd8486808d6ce1993fdd5fab2ff664bf05c108ee0b894fbeceb5ca7\""
Apr 25 00:01:03.391611 containerd[1990]: time="2026-04-25T00:01:03.389018099Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 25 00:01:03.391611 containerd[1990]: time="2026-04-25T00:01:03.389089281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 25 00:01:03.391611 containerd[1990]: time="2026-04-25T00:01:03.389106684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 25 00:01:03.391611 containerd[1990]: time="2026-04-25T00:01:03.389225767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 25 00:01:03.392620 containerd[1990]: time="2026-04-25T00:01:03.392579966Z" level=info msg="CreateContainer within sandbox \"476858105fd8486808d6ce1993fdd5fab2ff664bf05c108ee0b894fbeceb5ca7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 25 00:01:03.421669 systemd[1]: Started cri-containerd-512811563e1a4350cdb7a5904bef113a1a666c4f38d90ad0626adee6e203781c.scope - libcontainer container 512811563e1a4350cdb7a5904bef113a1a666c4f38d90ad0626adee6e203781c.
Apr 25 00:01:03.432365 containerd[1990]: time="2026-04-25T00:01:03.432251636Z" level=info msg="CreateContainer within sandbox \"476858105fd8486808d6ce1993fdd5fab2ff664bf05c108ee0b894fbeceb5ca7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e6d455b682033c4e00c7a902c1b666016786cafcba4e0f0256008c8e36d17e8d\""
Apr 25 00:01:03.435483 containerd[1990]: time="2026-04-25T00:01:03.435408357Z" level=info msg="StartContainer for \"e6d455b682033c4e00c7a902c1b666016786cafcba4e0f0256008c8e36d17e8d\""
Apr 25 00:01:03.616560 systemd[1]: Started cri-containerd-e6d455b682033c4e00c7a902c1b666016786cafcba4e0f0256008c8e36d17e8d.scope - libcontainer container e6d455b682033c4e00c7a902c1b666016786cafcba4e0f0256008c8e36d17e8d.
Apr 25 00:01:03.653016 containerd[1990]: time="2026-04-25T00:01:03.652492214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-zsfts,Uid:c70c6034-240c-48c4-a391-118af0f72156,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"512811563e1a4350cdb7a5904bef113a1a666c4f38d90ad0626adee6e203781c\""
Apr 25 00:01:03.663087 containerd[1990]: time="2026-04-25T00:01:03.659022191Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Apr 25 00:01:03.738236 containerd[1990]: time="2026-04-25T00:01:03.738174061Z" level=info msg="StartContainer for \"e6d455b682033c4e00c7a902c1b666016786cafcba4e0f0256008c8e36d17e8d\" returns successfully"
Apr 25 00:01:03.831773 systemd[1]: run-containerd-runc-k8s.io-476858105fd8486808d6ce1993fdd5fab2ff664bf05c108ee0b894fbeceb5ca7-runc.7jIks0.mount: Deactivated successfully.
Apr 25 00:01:05.393420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1501149930.mount: Deactivated successfully.
Apr 25 00:01:07.942084 containerd[1990]: time="2026-04-25T00:01:07.942026221Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:01:07.943790 containerd[1990]: time="2026-04-25T00:01:07.943545861Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Apr 25 00:01:07.945049 containerd[1990]: time="2026-04-25T00:01:07.944647314Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:01:07.947423 containerd[1990]: time="2026-04-25T00:01:07.947383939Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:01:07.949218 containerd[1990]: time="2026-04-25T00:01:07.948381879Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 4.289305963s"
Apr 25 00:01:07.949218 containerd[1990]: time="2026-04-25T00:01:07.948424223Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Apr 25 00:01:07.955765 containerd[1990]: time="2026-04-25T00:01:07.955717690Z" level=info msg="CreateContainer within sandbox \"512811563e1a4350cdb7a5904bef113a1a666c4f38d90ad0626adee6e203781c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Apr 25 00:01:07.980407 containerd[1990]: time="2026-04-25T00:01:07.980351029Z" level=info msg="CreateContainer within sandbox \"512811563e1a4350cdb7a5904bef113a1a666c4f38d90ad0626adee6e203781c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"324d4836eaec78f3bf839001abb1decbe19afe1c5e51627a2befea465924a4cb\""
Apr 25 00:01:07.982619 containerd[1990]: time="2026-04-25T00:01:07.981097825Z" level=info msg="StartContainer for \"324d4836eaec78f3bf839001abb1decbe19afe1c5e51627a2befea465924a4cb\""
Apr 25 00:01:08.025064 systemd[1]: Started cri-containerd-324d4836eaec78f3bf839001abb1decbe19afe1c5e51627a2befea465924a4cb.scope - libcontainer container 324d4836eaec78f3bf839001abb1decbe19afe1c5e51627a2befea465924a4cb.
Apr 25 00:01:08.060840 containerd[1990]: time="2026-04-25T00:01:08.059670601Z" level=info msg="StartContainer for \"324d4836eaec78f3bf839001abb1decbe19afe1c5e51627a2befea465924a4cb\" returns successfully"
Apr 25 00:01:08.144342 kubelet[3201]: I0425 00:01:08.144274 3201 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7tt2j" podStartSLOduration=6.144252502 podStartE2EDuration="6.144252502s" podCreationTimestamp="2026-04-25 00:01:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-25 00:01:04.179831737 +0000 UTC m=+8.466709473" watchObservedRunningTime="2026-04-25 00:01:08.144252502 +0000 UTC m=+12.431130008"
Apr 25 00:01:09.683827 kubelet[3201]: I0425 00:01:09.683556 3201 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-zsfts" podStartSLOduration=3.389422285 podStartE2EDuration="7.683532491s" podCreationTimestamp="2026-04-25 00:01:02 +0000 UTC" firstStartedPulling="2026-04-25 00:01:03.655759886 +0000 UTC m=+7.942637390" lastFinishedPulling="2026-04-25 00:01:07.949870111 +0000 UTC m=+12.236747596" observedRunningTime="2026-04-25 00:01:08.144645069 +0000 UTC m=+12.431522575" watchObservedRunningTime="2026-04-25 00:01:09.683532491 +0000 UTC m=+13.970409995"
Apr 25 00:01:13.606567 sudo[2313]: pam_unix(sudo:session): session closed for user root
Apr 25 00:01:13.773017 sshd[2295]: pam_unix(sshd:session): session closed for user core
Apr 25 00:01:13.781027 systemd[1]: sshd@6-172.31.27.158:22-4.175.71.9:51398.service: Deactivated successfully.
Apr 25 00:01:13.787149 systemd[1]: session-7.scope: Deactivated successfully.
Apr 25 00:01:13.787606 systemd[1]: session-7.scope: Consumed 5.859s CPU time, 146.2M memory peak, 0B memory swap peak.
Apr 25 00:01:13.791563 systemd-logind[1965]: Session 7 logged out. Waiting for processes to exit.
Apr 25 00:01:13.795109 systemd-logind[1965]: Removed session 7.
Apr 25 00:01:17.883641 systemd[1]: Created slice kubepods-besteffort-pod49bf6cae_6f92_4865_a622_4228c7cb55a9.slice - libcontainer container kubepods-besteffort-pod49bf6cae_6f92_4865_a622_4228c7cb55a9.slice.
Apr 25 00:01:17.921216 kubelet[3201]: I0425 00:01:17.919983 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/49bf6cae-6f92-4865-a622-4228c7cb55a9-typha-certs\") pod \"calico-typha-8556fd6c84-bwxvw\" (UID: \"49bf6cae-6f92-4865-a622-4228c7cb55a9\") " pod="calico-system/calico-typha-8556fd6c84-bwxvw"
Apr 25 00:01:17.922086 kubelet[3201]: I0425 00:01:17.921732 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgg9b\" (UniqueName: \"kubernetes.io/projected/49bf6cae-6f92-4865-a622-4228c7cb55a9-kube-api-access-mgg9b\") pod \"calico-typha-8556fd6c84-bwxvw\" (UID: \"49bf6cae-6f92-4865-a622-4228c7cb55a9\") " pod="calico-system/calico-typha-8556fd6c84-bwxvw"
Apr 25 00:01:17.922086 kubelet[3201]: I0425 00:01:17.921965 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49bf6cae-6f92-4865-a622-4228c7cb55a9-tigera-ca-bundle\") pod \"calico-typha-8556fd6c84-bwxvw\" (UID: \"49bf6cae-6f92-4865-a622-4228c7cb55a9\") " pod="calico-system/calico-typha-8556fd6c84-bwxvw"
Apr 25 00:01:18.052835 systemd[1]: Created slice kubepods-besteffort-podb0f74f72_6743_4341_8592_a3146dae5625.slice - libcontainer container kubepods-besteffort-podb0f74f72_6743_4341_8592_a3146dae5625.slice.
Apr 25 00:01:18.129591 kubelet[3201]: I0425 00:01:18.129014 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b0f74f72-6743-4341-8592-a3146dae5625-flexvol-driver-host\") pod \"calico-node-lvd25\" (UID: \"b0f74f72-6743-4341-8592-a3146dae5625\") " pod="calico-system/calico-node-lvd25"
Apr 25 00:01:18.129591 kubelet[3201]: I0425 00:01:18.129085 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b0f74f72-6743-4341-8592-a3146dae5625-var-lib-calico\") pod \"calico-node-lvd25\" (UID: \"b0f74f72-6743-4341-8592-a3146dae5625\") " pod="calico-system/calico-node-lvd25"
Apr 25 00:01:18.129591 kubelet[3201]: I0425 00:01:18.129118 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b0f74f72-6743-4341-8592-a3146dae5625-var-run-calico\") pod \"calico-node-lvd25\" (UID: \"b0f74f72-6743-4341-8592-a3146dae5625\") " pod="calico-system/calico-node-lvd25"
Apr 25 00:01:18.129591 kubelet[3201]: I0425 00:01:18.129140 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/b0f74f72-6743-4341-8592-a3146dae5625-nodeproc\") pod \"calico-node-lvd25\" (UID: \"b0f74f72-6743-4341-8592-a3146dae5625\") " pod="calico-system/calico-node-lvd25"
Apr 25 00:01:18.129591 kubelet[3201]: I0425 00:01:18.129166 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b0f74f72-6743-4341-8592-a3146dae5625-policysync\") pod \"calico-node-lvd25\" (UID: \"b0f74f72-6743-4341-8592-a3146dae5625\") " pod="calico-system/calico-node-lvd25"
Apr 25 00:01:18.130116 kubelet[3201]: I0425 00:01:18.129189 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0f74f72-6743-4341-8592-a3146dae5625-tigera-ca-bundle\") pod \"calico-node-lvd25\" (UID: \"b0f74f72-6743-4341-8592-a3146dae5625\") " pod="calico-system/calico-node-lvd25"
Apr 25 00:01:18.130116 kubelet[3201]: I0425 00:01:18.129212 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b0f74f72-6743-4341-8592-a3146dae5625-cni-bin-dir\") pod \"calico-node-lvd25\" (UID: \"b0f74f72-6743-4341-8592-a3146dae5625\") " pod="calico-system/calico-node-lvd25"
Apr 25 00:01:18.130116 kubelet[3201]: I0425 00:01:18.129232 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0f74f72-6743-4341-8592-a3146dae5625-lib-modules\") pod \"calico-node-lvd25\" (UID: \"b0f74f72-6743-4341-8592-a3146dae5625\") " pod="calico-system/calico-node-lvd25"
Apr 25 00:01:18.130116 kubelet[3201]: I0425 00:01:18.129256 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0f74f72-6743-4341-8592-a3146dae5625-xtables-lock\") pod \"calico-node-lvd25\" (UID: \"b0f74f72-6743-4341-8592-a3146dae5625\") " pod="calico-system/calico-node-lvd25"
Apr 25 00:01:18.130116 kubelet[3201]: I0425 00:01:18.129278 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/b0f74f72-6743-4341-8592-a3146dae5625-bpffs\") pod \"calico-node-lvd25\" (UID: \"b0f74f72-6743-4341-8592-a3146dae5625\") " pod="calico-system/calico-node-lvd25"
Apr 25 00:01:18.130356 kubelet[3201]: I0425 00:01:18.129302 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b0f74f72-6743-4341-8592-a3146dae5625-cni-net-dir\") pod \"calico-node-lvd25\" (UID: \"b0f74f72-6743-4341-8592-a3146dae5625\") " pod="calico-system/calico-node-lvd25"
Apr 25 00:01:18.130356 kubelet[3201]: I0425 00:01:18.129326 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b0f74f72-6743-4341-8592-a3146dae5625-cni-log-dir\") pod \"calico-node-lvd25\" (UID: \"b0f74f72-6743-4341-8592-a3146dae5625\") " pod="calico-system/calico-node-lvd25"
Apr 25 00:01:18.130356 kubelet[3201]: I0425 00:01:18.129377 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/b0f74f72-6743-4341-8592-a3146dae5625-sys-fs\") pod \"calico-node-lvd25\" (UID: \"b0f74f72-6743-4341-8592-a3146dae5625\") " pod="calico-system/calico-node-lvd25"
Apr 25 00:01:18.130356 kubelet[3201]: I0425 00:01:18.129399 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b0f74f72-6743-4341-8592-a3146dae5625-node-certs\") pod \"calico-node-lvd25\" (UID: \"b0f74f72-6743-4341-8592-a3146dae5625\") " pod="calico-system/calico-node-lvd25"
Apr 25 00:01:18.130356 kubelet[3201]: I0425 00:01:18.129430 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjmhh\" (UniqueName: \"kubernetes.io/projected/b0f74f72-6743-4341-8592-a3146dae5625-kube-api-access-tjmhh\") pod \"calico-node-lvd25\" (UID: \"b0f74f72-6743-4341-8592-a3146dae5625\") " pod="calico-system/calico-node-lvd25"
Apr 25 00:01:18.143892 kubelet[3201]: E0425 00:01:18.143711 3201 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8m4mt" podUID="b2f1eba8-430b-4eb5-88b7-fcf647e52b8e"
Apr 25 00:01:18.197400 containerd[1990]: time="2026-04-25T00:01:18.197002554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8556fd6c84-bwxvw,Uid:49bf6cae-6f92-4865-a622-4228c7cb55a9,Namespace:calico-system,Attempt:0,}"
Apr 25 00:01:18.232587 kubelet[3201]: I0425 00:01:18.230309 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b2f1eba8-430b-4eb5-88b7-fcf647e52b8e-socket-dir\") pod \"csi-node-driver-8m4mt\" (UID: \"b2f1eba8-430b-4eb5-88b7-fcf647e52b8e\") " pod="calico-system/csi-node-driver-8m4mt"
Apr 25 00:01:18.232587 kubelet[3201]: I0425 00:01:18.230395 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b2f1eba8-430b-4eb5-88b7-fcf647e52b8e-varrun\") pod \"csi-node-driver-8m4mt\" (UID: \"b2f1eba8-430b-4eb5-88b7-fcf647e52b8e\") " pod="calico-system/csi-node-driver-8m4mt"
Apr 25 00:01:18.232587 kubelet[3201]: I0425 00:01:18.230520 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b2f1eba8-430b-4eb5-88b7-fcf647e52b8e-registration-dir\") pod \"csi-node-driver-8m4mt\" (UID: \"b2f1eba8-430b-4eb5-88b7-fcf647e52b8e\") " pod="calico-system/csi-node-driver-8m4mt"
Apr 25 00:01:18.232587 kubelet[3201]: I0425 00:01:18.230563 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjb72\" (UniqueName: \"kubernetes.io/projected/b2f1eba8-430b-4eb5-88b7-fcf647e52b8e-kube-api-access-xjb72\") pod \"csi-node-driver-8m4mt\" (UID: \"b2f1eba8-430b-4eb5-88b7-fcf647e52b8e\") " pod="calico-system/csi-node-driver-8m4mt"
Apr 25 00:01:18.232587 kubelet[3201]: I0425 00:01:18.230610 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b2f1eba8-430b-4eb5-88b7-fcf647e52b8e-kubelet-dir\") pod \"csi-node-driver-8m4mt\" (UID: \"b2f1eba8-430b-4eb5-88b7-fcf647e52b8e\") " pod="calico-system/csi-node-driver-8m4mt"
Apr 25 00:01:18.248418 kubelet[3201]: E0425 00:01:18.247947 3201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:01:18.248418 kubelet[3201]: W0425 00:01:18.247999 3201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:01:18.248418 kubelet[3201]: E0425 00:01:18.248030 3201 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 25 00:01:18.254923 kubelet[3201]: E0425 00:01:18.250350 3201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:01:18.254923 kubelet[3201]: W0425 00:01:18.250375 3201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:01:18.254923 kubelet[3201]: E0425 00:01:18.250412 3201 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Apr 25 00:01:18.299252 kubelet[3201]: E0425 00:01:18.299220 3201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:01:18.299429 kubelet[3201]: W0425 00:01:18.299411 3201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:01:18.299532 kubelet[3201]: E0425 00:01:18.299518 3201 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:01:18.332171 kubelet[3201]: E0425 00:01:18.332140 3201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:01:18.332529 kubelet[3201]: W0425 00:01:18.332502 3201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:01:18.332843 kubelet[3201]: E0425 00:01:18.332751 3201 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:01:18.335346 kubelet[3201]: E0425 00:01:18.335296 3201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:01:18.336897 kubelet[3201]: W0425 00:01:18.336858 3201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:01:18.337051 kubelet[3201]: E0425 00:01:18.337039 3201 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:01:18.339044 kubelet[3201]: E0425 00:01:18.338502 3201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:01:18.339044 kubelet[3201]: W0425 00:01:18.338527 3201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:01:18.339044 kubelet[3201]: E0425 00:01:18.338555 3201 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:01:18.340264 kubelet[3201]: E0425 00:01:18.340141 3201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:01:18.340264 kubelet[3201]: W0425 00:01:18.340157 3201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:01:18.340264 kubelet[3201]: E0425 00:01:18.340174 3201 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:01:18.341303 kubelet[3201]: E0425 00:01:18.341217 3201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:01:18.341303 kubelet[3201]: W0425 00:01:18.341230 3201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:01:18.341303 kubelet[3201]: E0425 00:01:18.341244 3201 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:01:18.345283 kubelet[3201]: E0425 00:01:18.342716 3201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:01:18.345283 kubelet[3201]: W0425 00:01:18.342732 3201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:01:18.345283 kubelet[3201]: E0425 00:01:18.342749 3201 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:01:18.345283 kubelet[3201]: E0425 00:01:18.345122 3201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:01:18.345283 kubelet[3201]: W0425 00:01:18.345140 3201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:01:18.345283 kubelet[3201]: E0425 00:01:18.345161 3201 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:01:18.345891 kubelet[3201]: E0425 00:01:18.345867 3201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:01:18.346359 kubelet[3201]: W0425 00:01:18.346083 3201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:01:18.346359 kubelet[3201]: E0425 00:01:18.346109 3201 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:01:18.347307 kubelet[3201]: E0425 00:01:18.347291 3201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:01:18.348497 kubelet[3201]: W0425 00:01:18.348388 3201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:01:18.348497 kubelet[3201]: E0425 00:01:18.348423 3201 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:01:18.349773 kubelet[3201]: E0425 00:01:18.349750 3201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:01:18.350161 kubelet[3201]: W0425 00:01:18.350074 3201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:01:18.350161 kubelet[3201]: E0425 00:01:18.350100 3201 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:01:18.353971 kubelet[3201]: E0425 00:01:18.352905 3201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:01:18.353971 kubelet[3201]: W0425 00:01:18.352930 3201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:01:18.353971 kubelet[3201]: E0425 00:01:18.352946 3201 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:01:18.354488 kubelet[3201]: E0425 00:01:18.354234 3201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:01:18.354488 kubelet[3201]: W0425 00:01:18.354247 3201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:01:18.354488 kubelet[3201]: E0425 00:01:18.354273 3201 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:01:18.355948 kubelet[3201]: E0425 00:01:18.354724 3201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:01:18.355948 kubelet[3201]: W0425 00:01:18.354735 3201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:01:18.355948 kubelet[3201]: E0425 00:01:18.354747 3201 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:01:18.358224 kubelet[3201]: E0425 00:01:18.356353 3201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:01:18.358224 kubelet[3201]: W0425 00:01:18.356365 3201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:01:18.358224 kubelet[3201]: E0425 00:01:18.356377 3201 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:01:18.358224 kubelet[3201]: E0425 00:01:18.358090 3201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:01:18.358224 kubelet[3201]: W0425 00:01:18.358109 3201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:01:18.358224 kubelet[3201]: E0425 00:01:18.358127 3201 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:01:18.358946 kubelet[3201]: E0425 00:01:18.358827 3201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:01:18.359234 kubelet[3201]: W0425 00:01:18.359107 3201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:01:18.360760 kubelet[3201]: E0425 00:01:18.360505 3201 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:01:18.362017 kubelet[3201]: E0425 00:01:18.361652 3201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:01:18.362135 containerd[1990]: time="2026-04-25T00:01:18.361659732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lvd25,Uid:b0f74f72-6743-4341-8592-a3146dae5625,Namespace:calico-system,Attempt:0,}" Apr 25 00:01:18.363666 kubelet[3201]: W0425 00:01:18.363154 3201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:01:18.363666 kubelet[3201]: E0425 00:01:18.363185 3201 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:01:18.364851 kubelet[3201]: E0425 00:01:18.364797 3201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:01:18.364851 kubelet[3201]: W0425 00:01:18.364828 3201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:01:18.365156 kubelet[3201]: E0425 00:01:18.365142 3201 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:01:18.368959 kubelet[3201]: E0425 00:01:18.368307 3201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:01:18.368959 kubelet[3201]: W0425 00:01:18.368331 3201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:01:18.368959 kubelet[3201]: E0425 00:01:18.368352 3201 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:01:18.372573 kubelet[3201]: E0425 00:01:18.372384 3201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:01:18.372573 kubelet[3201]: W0425 00:01:18.372419 3201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:01:18.372573 kubelet[3201]: E0425 00:01:18.372440 3201 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:01:18.373260 kubelet[3201]: E0425 00:01:18.372917 3201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:01:18.373260 kubelet[3201]: W0425 00:01:18.372928 3201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:01:18.373260 kubelet[3201]: E0425 00:01:18.372942 3201 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:01:18.375875 kubelet[3201]: E0425 00:01:18.374456 3201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:01:18.375875 kubelet[3201]: W0425 00:01:18.374471 3201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:01:18.375875 kubelet[3201]: E0425 00:01:18.374486 3201 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:01:18.376873 kubelet[3201]: E0425 00:01:18.376706 3201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:01:18.376873 kubelet[3201]: W0425 00:01:18.376726 3201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:01:18.376873 kubelet[3201]: E0425 00:01:18.376741 3201 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:01:18.379145 kubelet[3201]: E0425 00:01:18.378883 3201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:01:18.379145 kubelet[3201]: W0425 00:01:18.378901 3201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:01:18.379145 kubelet[3201]: E0425 00:01:18.378917 3201 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:01:18.380985 kubelet[3201]: E0425 00:01:18.380208 3201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:01:18.380985 kubelet[3201]: W0425 00:01:18.380221 3201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:01:18.380985 kubelet[3201]: E0425 00:01:18.380246 3201 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:01:18.385460 containerd[1990]: time="2026-04-25T00:01:18.384965796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:01:18.385460 containerd[1990]: time="2026-04-25T00:01:18.385070192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:01:18.385460 containerd[1990]: time="2026-04-25T00:01:18.385094884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:01:18.385460 containerd[1990]: time="2026-04-25T00:01:18.385232411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:01:18.413848 kubelet[3201]: E0425 00:01:18.410509 3201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:01:18.413848 kubelet[3201]: W0425 00:01:18.410537 3201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:01:18.413848 kubelet[3201]: E0425 00:01:18.410563 3201 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:01:18.473553 containerd[1990]: time="2026-04-25T00:01:18.471417210Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:01:18.473553 containerd[1990]: time="2026-04-25T00:01:18.471508349Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:01:18.473553 containerd[1990]: time="2026-04-25T00:01:18.471564990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:01:18.473553 containerd[1990]: time="2026-04-25T00:01:18.471721050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:01:18.475353 systemd[1]: Started cri-containerd-a0061eaf6d93551e86783f6b6d0e4fbaf5e3241a6e051f7f3ab5431bfa9606d6.scope - libcontainer container a0061eaf6d93551e86783f6b6d0e4fbaf5e3241a6e051f7f3ab5431bfa9606d6. 
Apr 25 00:01:18.543074 systemd[1]: Started cri-containerd-7fed17a0843d287a7338fae8a5d85990a9b1fc970d22370794a1a0c3b1ad64d7.scope - libcontainer container 7fed17a0843d287a7338fae8a5d85990a9b1fc970d22370794a1a0c3b1ad64d7. Apr 25 00:01:18.635419 containerd[1990]: time="2026-04-25T00:01:18.635119703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lvd25,Uid:b0f74f72-6743-4341-8592-a3146dae5625,Namespace:calico-system,Attempt:0,} returns sandbox id \"7fed17a0843d287a7338fae8a5d85990a9b1fc970d22370794a1a0c3b1ad64d7\"" Apr 25 00:01:18.638094 containerd[1990]: time="2026-04-25T00:01:18.638046750Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 25 00:01:18.678649 containerd[1990]: time="2026-04-25T00:01:18.678518060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8556fd6c84-bwxvw,Uid:49bf6cae-6f92-4865-a622-4228c7cb55a9,Namespace:calico-system,Attempt:0,} returns sandbox id \"a0061eaf6d93551e86783f6b6d0e4fbaf5e3241a6e051f7f3ab5431bfa9606d6\"" Apr 25 00:01:19.903618 kubelet[3201]: E0425 00:01:19.902158 3201 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8m4mt" podUID="b2f1eba8-430b-4eb5-88b7-fcf647e52b8e" Apr 25 00:01:20.025489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3413482031.mount: Deactivated successfully. 
Apr 25 00:01:20.184195 containerd[1990]: time="2026-04-25T00:01:20.184062408Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:01:20.185839 containerd[1990]: time="2026-04-25T00:01:20.185676766Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=6186433" Apr 25 00:01:20.187505 containerd[1990]: time="2026-04-25T00:01:20.187450922Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:01:20.192452 containerd[1990]: time="2026-04-25T00:01:20.192000213Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:01:20.193406 containerd[1990]: time="2026-04-25T00:01:20.193355229Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.555265407s" Apr 25 00:01:20.193406 containerd[1990]: time="2026-04-25T00:01:20.193405537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 25 00:01:20.196416 containerd[1990]: time="2026-04-25T00:01:20.196191202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 25 00:01:20.200984 containerd[1990]: time="2026-04-25T00:01:20.200943675Z" level=info msg="CreateContainer within 
sandbox \"7fed17a0843d287a7338fae8a5d85990a9b1fc970d22370794a1a0c3b1ad64d7\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 25 00:01:20.265149 containerd[1990]: time="2026-04-25T00:01:20.264678562Z" level=info msg="CreateContainer within sandbox \"7fed17a0843d287a7338fae8a5d85990a9b1fc970d22370794a1a0c3b1ad64d7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"384b3530827a63e4c06da7af7e2f17faca79922f7ffe9105e85da344cab76e97\"" Apr 25 00:01:20.273325 containerd[1990]: time="2026-04-25T00:01:20.273251439Z" level=info msg="StartContainer for \"384b3530827a63e4c06da7af7e2f17faca79922f7ffe9105e85da344cab76e97\"" Apr 25 00:01:20.333178 systemd[1]: run-containerd-runc-k8s.io-384b3530827a63e4c06da7af7e2f17faca79922f7ffe9105e85da344cab76e97-runc.AHc3Kq.mount: Deactivated successfully. Apr 25 00:01:20.343202 systemd[1]: Started cri-containerd-384b3530827a63e4c06da7af7e2f17faca79922f7ffe9105e85da344cab76e97.scope - libcontainer container 384b3530827a63e4c06da7af7e2f17faca79922f7ffe9105e85da344cab76e97. Apr 25 00:01:20.384598 containerd[1990]: time="2026-04-25T00:01:20.384523466Z" level=info msg="StartContainer for \"384b3530827a63e4c06da7af7e2f17faca79922f7ffe9105e85da344cab76e97\" returns successfully" Apr 25 00:01:20.398041 systemd[1]: cri-containerd-384b3530827a63e4c06da7af7e2f17faca79922f7ffe9105e85da344cab76e97.scope: Deactivated successfully. 
Apr 25 00:01:20.540424 containerd[1990]: time="2026-04-25T00:01:20.505690890Z" level=info msg="shim disconnected" id=384b3530827a63e4c06da7af7e2f17faca79922f7ffe9105e85da344cab76e97 namespace=k8s.io
Apr 25 00:01:20.540760 containerd[1990]: time="2026-04-25T00:01:20.540431680Z" level=warning msg="cleaning up after shim disconnected" id=384b3530827a63e4c06da7af7e2f17faca79922f7ffe9105e85da344cab76e97 namespace=k8s.io
Apr 25 00:01:20.540760 containerd[1990]: time="2026-04-25T00:01:20.540452208Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 25 00:01:21.231152 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-384b3530827a63e4c06da7af7e2f17faca79922f7ffe9105e85da344cab76e97-rootfs.mount: Deactivated successfully.
Apr 25 00:01:21.902538 kubelet[3201]: E0425 00:01:21.902413 3201 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8m4mt" podUID="b2f1eba8-430b-4eb5-88b7-fcf647e52b8e"
Apr 25 00:01:23.163656 containerd[1990]: time="2026-04-25T00:01:23.162498398Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:01:23.165177 containerd[1990]: time="2026-04-25T00:01:23.164947221Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=34551413"
Apr 25 00:01:23.167283 containerd[1990]: time="2026-04-25T00:01:23.167237476Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:01:23.171783 containerd[1990]: time="2026-04-25T00:01:23.171680421Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:01:23.173298 containerd[1990]: time="2026-04-25T00:01:23.173147434Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.976916409s"
Apr 25 00:01:23.173298 containerd[1990]: time="2026-04-25T00:01:23.173197996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\""
Apr 25 00:01:23.175524 containerd[1990]: time="2026-04-25T00:01:23.175275623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\""
Apr 25 00:01:23.220098 containerd[1990]: time="2026-04-25T00:01:23.219892660Z" level=info msg="CreateContainer within sandbox \"a0061eaf6d93551e86783f6b6d0e4fbaf5e3241a6e051f7f3ab5431bfa9606d6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Apr 25 00:01:23.269313 containerd[1990]: time="2026-04-25T00:01:23.269256196Z" level=info msg="CreateContainer within sandbox \"a0061eaf6d93551e86783f6b6d0e4fbaf5e3241a6e051f7f3ab5431bfa9606d6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"cf45055fbd8a167329219f6a596b41031370d87c2005b14ed24f18294f947475\""
Apr 25 00:01:23.271832 containerd[1990]: time="2026-04-25T00:01:23.270072512Z" level=info msg="StartContainer for \"cf45055fbd8a167329219f6a596b41031370d87c2005b14ed24f18294f947475\""
Apr 25 00:01:23.354036 systemd[1]: Started cri-containerd-cf45055fbd8a167329219f6a596b41031370d87c2005b14ed24f18294f947475.scope - libcontainer container cf45055fbd8a167329219f6a596b41031370d87c2005b14ed24f18294f947475.
Apr 25 00:01:23.522782 containerd[1990]: time="2026-04-25T00:01:23.521934050Z" level=info msg="StartContainer for \"cf45055fbd8a167329219f6a596b41031370d87c2005b14ed24f18294f947475\" returns successfully"
Apr 25 00:01:23.903392 kubelet[3201]: E0425 00:01:23.903216 3201 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8m4mt" podUID="b2f1eba8-430b-4eb5-88b7-fcf647e52b8e"
Apr 25 00:01:25.187187 kubelet[3201]: I0425 00:01:25.187146 3201 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 25 00:01:25.905014 kubelet[3201]: E0425 00:01:25.904699 3201 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8m4mt" podUID="b2f1eba8-430b-4eb5-88b7-fcf647e52b8e"
Apr 25 00:01:27.904158 kubelet[3201]: E0425 00:01:27.902881 3201 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8m4mt" podUID="b2f1eba8-430b-4eb5-88b7-fcf647e52b8e"
Apr 25 00:01:29.902938 kubelet[3201]: E0425 00:01:29.902459 3201 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8m4mt" podUID="b2f1eba8-430b-4eb5-88b7-fcf647e52b8e"
Apr 25 00:01:31.903447 kubelet[3201]: E0425 00:01:31.902072 3201 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8m4mt" podUID="b2f1eba8-430b-4eb5-88b7-fcf647e52b8e"
Apr 25 00:01:33.903896 kubelet[3201]: E0425 00:01:33.902458 3201 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8m4mt" podUID="b2f1eba8-430b-4eb5-88b7-fcf647e52b8e"
Apr 25 00:01:35.683543 kubelet[3201]: I0425 00:01:35.683439 3201 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 25 00:01:35.807709 kubelet[3201]: I0425 00:01:35.752733 3201 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8556fd6c84-bwxvw" podStartSLOduration=14.233923580999999 podStartE2EDuration="18.726894042s" podCreationTimestamp="2026-04-25 00:01:17 +0000 UTC" firstStartedPulling="2026-04-25 00:01:18.681703151 +0000 UTC m=+22.968580653" lastFinishedPulling="2026-04-25 00:01:23.174673625 +0000 UTC m=+27.461551114" observedRunningTime="2026-04-25 00:01:24.198147942 +0000 UTC m=+28.485025448" watchObservedRunningTime="2026-04-25 00:01:35.726894042 +0000 UTC m=+40.013771548"
Apr 25 00:01:35.906330 kubelet[3201]: E0425 00:01:35.906280 3201 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8m4mt" podUID="b2f1eba8-430b-4eb5-88b7-fcf647e52b8e"
Apr 25 00:01:36.956666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3217639374.mount: Deactivated successfully.
Apr 25 00:01:37.030377 containerd[1990]: time="2026-04-25T00:01:37.022543003Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:01:37.033784 containerd[1990]: time="2026-04-25T00:01:37.033420087Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:01:37.036045 containerd[1990]: time="2026-04-25T00:01:37.025605092Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564"
Apr 25 00:01:37.040858 containerd[1990]: time="2026-04-25T00:01:37.040717434Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:01:37.042638 containerd[1990]: time="2026-04-25T00:01:37.042585088Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 13.867247282s"
Apr 25 00:01:37.042638 containerd[1990]: time="2026-04-25T00:01:37.042636498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\""
Apr 25 00:01:37.050124 containerd[1990]: time="2026-04-25T00:01:37.050050897Z" level=info msg="CreateContainer within sandbox \"7fed17a0843d287a7338fae8a5d85990a9b1fc970d22370794a1a0c3b1ad64d7\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Apr 25 00:01:37.100976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3461179133.mount: Deactivated successfully.
Apr 25 00:01:37.102300 containerd[1990]: time="2026-04-25T00:01:37.102124138Z" level=info msg="CreateContainer within sandbox \"7fed17a0843d287a7338fae8a5d85990a9b1fc970d22370794a1a0c3b1ad64d7\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"ab3a7abc1e52c502bce46aba26b9b8487fed758b0d55e0b67f500492d4d8c7ce\""
Apr 25 00:01:37.102930 containerd[1990]: time="2026-04-25T00:01:37.102893506Z" level=info msg="StartContainer for \"ab3a7abc1e52c502bce46aba26b9b8487fed758b0d55e0b67f500492d4d8c7ce\""
Apr 25 00:01:37.229047 systemd[1]: Started cri-containerd-ab3a7abc1e52c502bce46aba26b9b8487fed758b0d55e0b67f500492d4d8c7ce.scope - libcontainer container ab3a7abc1e52c502bce46aba26b9b8487fed758b0d55e0b67f500492d4d8c7ce.
Apr 25 00:01:37.311180 containerd[1990]: time="2026-04-25T00:01:37.311101367Z" level=info msg="StartContainer for \"ab3a7abc1e52c502bce46aba26b9b8487fed758b0d55e0b67f500492d4d8c7ce\" returns successfully"
Apr 25 00:01:37.390666 systemd[1]: cri-containerd-ab3a7abc1e52c502bce46aba26b9b8487fed758b0d55e0b67f500492d4d8c7ce.scope: Deactivated successfully.
Apr 25 00:01:37.445742 containerd[1990]: time="2026-04-25T00:01:37.435901793Z" level=info msg="shim disconnected" id=ab3a7abc1e52c502bce46aba26b9b8487fed758b0d55e0b67f500492d4d8c7ce namespace=k8s.io
Apr 25 00:01:37.446397 containerd[1990]: time="2026-04-25T00:01:37.445745123Z" level=warning msg="cleaning up after shim disconnected" id=ab3a7abc1e52c502bce46aba26b9b8487fed758b0d55e0b67f500492d4d8c7ce namespace=k8s.io
Apr 25 00:01:37.446397 containerd[1990]: time="2026-04-25T00:01:37.445778814Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 25 00:01:37.903550 kubelet[3201]: E0425 00:01:37.902024 3201 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8m4mt" podUID="b2f1eba8-430b-4eb5-88b7-fcf647e52b8e"
Apr 25 00:01:37.957382 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab3a7abc1e52c502bce46aba26b9b8487fed758b0d55e0b67f500492d4d8c7ce-rootfs.mount: Deactivated successfully.
Apr 25 00:01:38.276823 containerd[1990]: time="2026-04-25T00:01:38.276334329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Apr 25 00:01:39.914948 kubelet[3201]: E0425 00:01:39.914879 3201 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8m4mt" podUID="b2f1eba8-430b-4eb5-88b7-fcf647e52b8e"
Apr 25 00:01:41.310191 containerd[1990]: time="2026-04-25T00:01:41.309995531Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:01:41.311313 containerd[1990]: time="2026-04-25T00:01:41.311258382Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671"
Apr 25 00:01:41.312442 containerd[1990]: time="2026-04-25T00:01:41.312012308Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:01:41.316207 containerd[1990]: time="2026-04-25T00:01:41.316144356Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:01:41.320014 containerd[1990]: time="2026-04-25T00:01:41.319963287Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.04357914s"
Apr 25 00:01:41.320414 containerd[1990]: time="2026-04-25T00:01:41.320212590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\""
Apr 25 00:01:41.328566 containerd[1990]: time="2026-04-25T00:01:41.328511304Z" level=info msg="CreateContainer within sandbox \"7fed17a0843d287a7338fae8a5d85990a9b1fc970d22370794a1a0c3b1ad64d7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Apr 25 00:01:41.349280 containerd[1990]: time="2026-04-25T00:01:41.349123545Z" level=info msg="CreateContainer within sandbox \"7fed17a0843d287a7338fae8a5d85990a9b1fc970d22370794a1a0c3b1ad64d7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"677c02db41635bf7f4d06d5bddfc8d9cd32efbcae2e8c816524cc1fad1aa897b\""
Apr 25 00:01:41.354654 containerd[1990]: time="2026-04-25T00:01:41.354425732Z" level=info msg="StartContainer for \"677c02db41635bf7f4d06d5bddfc8d9cd32efbcae2e8c816524cc1fad1aa897b\""
Apr 25 00:01:41.397723 systemd[1]: run-containerd-runc-k8s.io-677c02db41635bf7f4d06d5bddfc8d9cd32efbcae2e8c816524cc1fad1aa897b-runc.mQ5RMo.mount: Deactivated successfully.
Apr 25 00:01:41.406065 systemd[1]: Started cri-containerd-677c02db41635bf7f4d06d5bddfc8d9cd32efbcae2e8c816524cc1fad1aa897b.scope - libcontainer container 677c02db41635bf7f4d06d5bddfc8d9cd32efbcae2e8c816524cc1fad1aa897b.
Apr 25 00:01:41.452782 containerd[1990]: time="2026-04-25T00:01:41.452723136Z" level=info msg="StartContainer for \"677c02db41635bf7f4d06d5bddfc8d9cd32efbcae2e8c816524cc1fad1aa897b\" returns successfully"
Apr 25 00:01:41.906292 kubelet[3201]: E0425 00:01:41.906086 3201 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8m4mt" podUID="b2f1eba8-430b-4eb5-88b7-fcf647e52b8e"
Apr 25 00:01:42.423347 systemd[1]: cri-containerd-677c02db41635bf7f4d06d5bddfc8d9cd32efbcae2e8c816524cc1fad1aa897b.scope: Deactivated successfully.
Apr 25 00:01:42.465699 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-677c02db41635bf7f4d06d5bddfc8d9cd32efbcae2e8c816524cc1fad1aa897b-rootfs.mount: Deactivated successfully.
Apr 25 00:01:42.476646 containerd[1990]: time="2026-04-25T00:01:42.476393020Z" level=info msg="shim disconnected" id=677c02db41635bf7f4d06d5bddfc8d9cd32efbcae2e8c816524cc1fad1aa897b namespace=k8s.io
Apr 25 00:01:42.476646 containerd[1990]: time="2026-04-25T00:01:42.476461333Z" level=warning msg="cleaning up after shim disconnected" id=677c02db41635bf7f4d06d5bddfc8d9cd32efbcae2e8c816524cc1fad1aa897b namespace=k8s.io
Apr 25 00:01:42.476646 containerd[1990]: time="2026-04-25T00:01:42.476473174Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 25 00:01:42.548359 kubelet[3201]: I0425 00:01:42.546978 3201 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Apr 25 00:01:42.695912 kubelet[3201]: I0425 00:01:42.695401 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91a98ef7-481a-4c28-830d-88f976ac72ee-config-volume\") pod \"coredns-674b8bbfcf-fj8v9\" (UID: \"91a98ef7-481a-4c28-830d-88f976ac72ee\") " pod="kube-system/coredns-674b8bbfcf-fj8v9"
Apr 25 00:01:42.695912 kubelet[3201]: I0425 00:01:42.695543 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e3e11ca-fc9a-44c6-aa39-3461e24bb47d-whisker-ca-bundle\") pod \"whisker-fb5969844-jxrhx\" (UID: \"5e3e11ca-fc9a-44c6-aa39-3461e24bb47d\") " pod="calico-system/whisker-fb5969844-jxrhx"
Apr 25 00:01:42.695912 kubelet[3201]: I0425 00:01:42.695569 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqr87\" (UniqueName: \"kubernetes.io/projected/e2dc8fd3-53ee-4b31-8d4f-cbcd7d64f683-kube-api-access-qqr87\") pod \"coredns-674b8bbfcf-7qhnl\" (UID: \"e2dc8fd3-53ee-4b31-8d4f-cbcd7d64f683\") " pod="kube-system/coredns-674b8bbfcf-7qhnl"
Apr 25 00:01:42.695912 kubelet[3201]: I0425 00:01:42.695606 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2dc8fd3-53ee-4b31-8d4f-cbcd7d64f683-config-volume\") pod \"coredns-674b8bbfcf-7qhnl\" (UID: \"e2dc8fd3-53ee-4b31-8d4f-cbcd7d64f683\") " pod="kube-system/coredns-674b8bbfcf-7qhnl"
Apr 25 00:01:42.695912 kubelet[3201]: I0425 00:01:42.695632 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/5e3e11ca-fc9a-44c6-aa39-3461e24bb47d-nginx-config\") pod \"whisker-fb5969844-jxrhx\" (UID: \"5e3e11ca-fc9a-44c6-aa39-3461e24bb47d\") " pod="calico-system/whisker-fb5969844-jxrhx"
Apr 25 00:01:42.696252 kubelet[3201]: I0425 00:01:42.695653 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5e3e11ca-fc9a-44c6-aa39-3461e24bb47d-whisker-backend-key-pair\") pod \"whisker-fb5969844-jxrhx\" (UID: \"5e3e11ca-fc9a-44c6-aa39-3461e24bb47d\") " pod="calico-system/whisker-fb5969844-jxrhx"
Apr 25 00:01:42.696252 kubelet[3201]: I0425 00:01:42.695678 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfdls\" (UniqueName: \"kubernetes.io/projected/5e3e11ca-fc9a-44c6-aa39-3461e24bb47d-kube-api-access-pfdls\") pod \"whisker-fb5969844-jxrhx\" (UID: \"5e3e11ca-fc9a-44c6-aa39-3461e24bb47d\") " pod="calico-system/whisker-fb5969844-jxrhx"
Apr 25 00:01:42.696252 kubelet[3201]: I0425 00:01:42.695708 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9jhz\" (UniqueName: \"kubernetes.io/projected/91a98ef7-481a-4c28-830d-88f976ac72ee-kube-api-access-q9jhz\") pod \"coredns-674b8bbfcf-fj8v9\" (UID: \"91a98ef7-481a-4c28-830d-88f976ac72ee\") " pod="kube-system/coredns-674b8bbfcf-fj8v9"
Apr 25 00:01:42.766071 systemd[1]: Created slice kubepods-besteffort-pod5e3e11ca_fc9a_44c6_aa39_3461e24bb47d.slice - libcontainer container kubepods-besteffort-pod5e3e11ca_fc9a_44c6_aa39_3461e24bb47d.slice.
Apr 25 00:01:42.778957 systemd[1]: Created slice kubepods-besteffort-podfd5cb78e_5eeb_47d8_bac1_f83b8ac68c9f.slice - libcontainer container kubepods-besteffort-podfd5cb78e_5eeb_47d8_bac1_f83b8ac68c9f.slice.
Apr 25 00:01:42.793692 systemd[1]: Created slice kubepods-burstable-pode2dc8fd3_53ee_4b31_8d4f_cbcd7d64f683.slice - libcontainer container kubepods-burstable-pode2dc8fd3_53ee_4b31_8d4f_cbcd7d64f683.slice.
Apr 25 00:01:42.799089 kubelet[3201]: I0425 00:01:42.799025 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2qzl\" (UniqueName: \"kubernetes.io/projected/fbf459d9-c3ed-42cc-9f78-25b84022bdb0-kube-api-access-f2qzl\") pod \"calico-apiserver-69c6c7bbcf-jntps\" (UID: \"fbf459d9-c3ed-42cc-9f78-25b84022bdb0\") " pod="calico-system/calico-apiserver-69c6c7bbcf-jntps"
Apr 25 00:01:42.799089 kubelet[3201]: I0425 00:01:42.799086 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/22e1b7c8-1a20-4649-bf8c-3b2a82e5872a-goldmane-key-pair\") pod \"goldmane-5b85766d88-rcpfl\" (UID: \"22e1b7c8-1a20-4649-bf8c-3b2a82e5872a\") " pod="calico-system/goldmane-5b85766d88-rcpfl"
Apr 25 00:01:42.799274 kubelet[3201]: I0425 00:01:42.799116 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bh2b\" (UniqueName: \"kubernetes.io/projected/04413392-8f1c-4eff-8af3-8c2e64b92e0c-kube-api-access-9bh2b\") pod \"calico-apiserver-69c6c7bbcf-8gdvg\" (UID: \"04413392-8f1c-4eff-8af3-8c2e64b92e0c\") " pod="calico-system/calico-apiserver-69c6c7bbcf-8gdvg"
Apr 25 00:01:42.799274 kubelet[3201]: I0425 00:01:42.799158 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8wwx\" (UniqueName: \"kubernetes.io/projected/fd5cb78e-5eeb-47d8-bac1-f83b8ac68c9f-kube-api-access-p8wwx\") pod \"calico-kube-controllers-577c9d7cc5-qb9xm\" (UID: \"fd5cb78e-5eeb-47d8-bac1-f83b8ac68c9f\") " pod="calico-system/calico-kube-controllers-577c9d7cc5-qb9xm"
Apr 25 00:01:42.799274 kubelet[3201]: I0425 00:01:42.799245 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22e1b7c8-1a20-4649-bf8c-3b2a82e5872a-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-rcpfl\" (UID: \"22e1b7c8-1a20-4649-bf8c-3b2a82e5872a\") " pod="calico-system/goldmane-5b85766d88-rcpfl"
Apr 25 00:01:42.799468 kubelet[3201]: I0425 00:01:42.799313 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlcqr\" (UniqueName: \"kubernetes.io/projected/22e1b7c8-1a20-4649-bf8c-3b2a82e5872a-kube-api-access-nlcqr\") pod \"goldmane-5b85766d88-rcpfl\" (UID: \"22e1b7c8-1a20-4649-bf8c-3b2a82e5872a\") " pod="calico-system/goldmane-5b85766d88-rcpfl"
Apr 25 00:01:42.799468 kubelet[3201]: I0425 00:01:42.799379 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fbf459d9-c3ed-42cc-9f78-25b84022bdb0-calico-apiserver-certs\") pod \"calico-apiserver-69c6c7bbcf-jntps\" (UID: \"fbf459d9-c3ed-42cc-9f78-25b84022bdb0\") " pod="calico-system/calico-apiserver-69c6c7bbcf-jntps"
Apr 25 00:01:42.799468 kubelet[3201]: I0425 00:01:42.799404 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22e1b7c8-1a20-4649-bf8c-3b2a82e5872a-config\") pod \"goldmane-5b85766d88-rcpfl\" (UID: \"22e1b7c8-1a20-4649-bf8c-3b2a82e5872a\") " pod="calico-system/goldmane-5b85766d88-rcpfl"
Apr 25 00:01:42.799468 kubelet[3201]: I0425 00:01:42.799428 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/04413392-8f1c-4eff-8af3-8c2e64b92e0c-calico-apiserver-certs\") pod \"calico-apiserver-69c6c7bbcf-8gdvg\" (UID: \"04413392-8f1c-4eff-8af3-8c2e64b92e0c\") " pod="calico-system/calico-apiserver-69c6c7bbcf-8gdvg"
Apr 25 00:01:42.799468 kubelet[3201]: I0425 00:01:42.799453 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd5cb78e-5eeb-47d8-bac1-f83b8ac68c9f-tigera-ca-bundle\") pod \"calico-kube-controllers-577c9d7cc5-qb9xm\" (UID: \"fd5cb78e-5eeb-47d8-bac1-f83b8ac68c9f\") " pod="calico-system/calico-kube-controllers-577c9d7cc5-qb9xm"
Apr 25 00:01:42.859140 systemd[1]: Created slice kubepods-besteffort-podfbf459d9_c3ed_42cc_9f78_25b84022bdb0.slice - libcontainer container kubepods-besteffort-podfbf459d9_c3ed_42cc_9f78_25b84022bdb0.slice.
Apr 25 00:01:42.887424 systemd[1]: Created slice kubepods-besteffort-pod04413392_8f1c_4eff_8af3_8c2e64b92e0c.slice - libcontainer container kubepods-besteffort-pod04413392_8f1c_4eff_8af3_8c2e64b92e0c.slice.
Apr 25 00:01:42.903444 systemd[1]: Created slice kubepods-besteffort-pod22e1b7c8_1a20_4649_bf8c_3b2a82e5872a.slice - libcontainer container kubepods-besteffort-pod22e1b7c8_1a20_4649_bf8c_3b2a82e5872a.slice.
Apr 25 00:01:42.954067 systemd[1]: Created slice kubepods-burstable-pod91a98ef7_481a_4c28_830d_88f976ac72ee.slice - libcontainer container kubepods-burstable-pod91a98ef7_481a_4c28_830d_88f976ac72ee.slice.
Apr 25 00:01:42.974927 containerd[1990]: time="2026-04-25T00:01:42.974584587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fj8v9,Uid:91a98ef7-481a-4c28-830d-88f976ac72ee,Namespace:kube-system,Attempt:0,}"
Apr 25 00:01:42.977647 containerd[1990]: time="2026-04-25T00:01:42.977185198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-rcpfl,Uid:22e1b7c8-1a20-4649-bf8c-3b2a82e5872a,Namespace:calico-system,Attempt:0,}"
Apr 25 00:01:43.098351 containerd[1990]: time="2026-04-25T00:01:43.097093417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-577c9d7cc5-qb9xm,Uid:fd5cb78e-5eeb-47d8-bac1-f83b8ac68c9f,Namespace:calico-system,Attempt:0,}"
Apr 25 00:01:43.124743 containerd[1990]: time="2026-04-25T00:01:43.123469144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7qhnl,Uid:e2dc8fd3-53ee-4b31-8d4f-cbcd7d64f683,Namespace:kube-system,Attempt:0,}"
Apr 25 00:01:43.124743 containerd[1990]: time="2026-04-25T00:01:43.124042797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fb5969844-jxrhx,Uid:5e3e11ca-fc9a-44c6-aa39-3461e24bb47d,Namespace:calico-system,Attempt:0,}"
Apr 25 00:01:43.189362 containerd[1990]: time="2026-04-25T00:01:43.188973734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69c6c7bbcf-jntps,Uid:fbf459d9-c3ed-42cc-9f78-25b84022bdb0,Namespace:calico-system,Attempt:0,}"
Apr 25 00:01:43.235680 containerd[1990]: time="2026-04-25T00:01:43.235607500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69c6c7bbcf-8gdvg,Uid:04413392-8f1c-4eff-8af3-8c2e64b92e0c,Namespace:calico-system,Attempt:0,}"
Apr 25 00:01:43.340564 containerd[1990]: time="2026-04-25T00:01:43.340506312Z" level=info msg="CreateContainer within sandbox \"7fed17a0843d287a7338fae8a5d85990a9b1fc970d22370794a1a0c3b1ad64d7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Apr 25 00:01:43.386127 containerd[1990]: time="2026-04-25T00:01:43.385967379Z" level=info msg="CreateContainer within sandbox \"7fed17a0843d287a7338fae8a5d85990a9b1fc970d22370794a1a0c3b1ad64d7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0cf596245441012b1f08236d4487e42ec1a3f46ed9bb34856f70a4b08baedc80\""
Apr 25 00:01:43.389699 containerd[1990]: time="2026-04-25T00:01:43.388231921Z" level=info msg="StartContainer for \"0cf596245441012b1f08236d4487e42ec1a3f46ed9bb34856f70a4b08baedc80\""
Apr 25 00:01:43.442238 systemd[1]: Started cri-containerd-0cf596245441012b1f08236d4487e42ec1a3f46ed9bb34856f70a4b08baedc80.scope - libcontainer container 0cf596245441012b1f08236d4487e42ec1a3f46ed9bb34856f70a4b08baedc80.
Apr 25 00:01:43.607249 containerd[1990]: time="2026-04-25T00:01:43.606629342Z" level=info msg="StartContainer for \"0cf596245441012b1f08236d4487e42ec1a3f46ed9bb34856f70a4b08baedc80\" returns successfully"
Apr 25 00:01:43.772175 containerd[1990]: time="2026-04-25T00:01:43.771956459Z" level=error msg="Failed to destroy network for sandbox \"f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 25 00:01:43.773388 containerd[1990]: time="2026-04-25T00:01:43.773241778Z" level=error msg="encountered an error cleaning up failed sandbox \"f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 25 00:01:43.773388 containerd[1990]: time="2026-04-25T00:01:43.773322698Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-577c9d7cc5-qb9xm,Uid:fd5cb78e-5eeb-47d8-bac1-f83b8ac68c9f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 25 00:01:43.778786 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0-shm.mount: Deactivated successfully.
Apr 25 00:01:43.814087 containerd[1990]: time="2026-04-25T00:01:43.814032028Z" level=error msg="Failed to destroy network for sandbox \"bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 25 00:01:43.814824 containerd[1990]: time="2026-04-25T00:01:43.814457863Z" level=error msg="encountered an error cleaning up failed sandbox \"bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 25 00:01:43.814824 containerd[1990]: time="2026-04-25T00:01:43.814531939Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fb5969844-jxrhx,Uid:5e3e11ca-fc9a-44c6-aa39-3461e24bb47d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 25 00:01:43.824089 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8-shm.mount: Deactivated successfully.
Apr 25 00:01:43.828063 containerd[1990]: time="2026-04-25T00:01:43.828005744Z" level=error msg="Failed to destroy network for sandbox \"2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 25 00:01:43.828844 containerd[1990]: time="2026-04-25T00:01:43.828668040Z" level=error msg="encountered an error cleaning up failed sandbox \"2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 25 00:01:43.828844 containerd[1990]: time="2026-04-25T00:01:43.828743392Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7qhnl,Uid:e2dc8fd3-53ee-4b31-8d4f-cbcd7d64f683,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 25 00:01:43.838242 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b-shm.mount: Deactivated successfully.
Apr 25 00:01:43.846420 kubelet[3201]: E0425 00:01:43.846148 3201 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 25 00:01:43.850406 kubelet[3201]: E0425 00:01:43.846222 3201 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 25 00:01:43.850490 containerd[1990]: time="2026-04-25T00:01:43.849126724Z" level=error msg="Failed to destroy network for sandbox \"ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 25 00:01:43.851620 containerd[1990]: time="2026-04-25T00:01:43.851137290Z" level=error msg="encountered an error cleaning up failed sandbox \"ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 25 00:01:43.851620 containerd[1990]: time="2026-04-25T00:01:43.851226938Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69c6c7bbcf-jntps,Uid:fbf459d9-c3ed-42cc-9f78-25b84022bdb0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 25 00:01:43.851620 containerd[1990]: time="2026-04-25T00:01:43.851390618Z" level=error msg="Failed to destroy network for sandbox \"8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 25 00:01:43.853682 kubelet[3201]: E0425 00:01:43.851558 3201 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7qhnl"
Apr 25 00:01:43.853682 kubelet[3201]: E0425 00:01:43.852330 3201 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-577c9d7cc5-qb9xm"
Apr 25 00:01:43.858830 containerd[1990]: time="2026-04-25T00:01:43.858197589Z" level=error msg="encountered an error cleaning up failed sandbox \"8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 25 00:01:43.858830 containerd[1990]: time="2026-04-25T00:01:43.858290147Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-rcpfl,Uid:22e1b7c8-1a20-4649-bf8c-3b2a82e5872a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 25 00:01:43.858830 containerd[1990]: time="2026-04-25T00:01:43.858460146Z" level=error msg="Failed to destroy network for sandbox \"4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 25 00:01:43.860151 kubelet[3201]: E0425 00:01:43.858382 3201 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-577c9d7cc5-qb9xm"
Apr 25 00:01:43.860151 kubelet[3201]: E0425 00:01:43.859933 3201 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-577c9d7cc5-qb9xm_calico-system(fd5cb78e-5eeb-47d8-bac1-f83b8ac68c9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-577c9d7cc5-qb9xm_calico-system(fd5cb78e-5eeb-47d8-bac1-f83b8ac68c9f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-577c9d7cc5-qb9xm" podUID="fd5cb78e-5eeb-47d8-bac1-f83b8ac68c9f"
Apr 25 00:01:43.860318 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb-shm.mount: Deactivated successfully.
Apr 25 00:01:43.860469 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3-shm.mount: Deactivated successfully.
Apr 25 00:01:43.861429 containerd[1990]: time="2026-04-25T00:01:43.861162003Z" level=error msg="Failed to destroy network for sandbox \"4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 25 00:01:43.863737 kubelet[3201]: E0425 00:01:43.860912 3201 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 25 00:01:43.863737 kubelet[3201]: E0425 00:01:43.860967 3201 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that
the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-fb5969844-jxrhx" Apr 25 00:01:43.863737 kubelet[3201]: E0425 00:01:43.861093 3201 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-fb5969844-jxrhx" Apr 25 00:01:43.863929 kubelet[3201]: E0425 00:01:43.863512 3201 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-fb5969844-jxrhx_calico-system(5e3e11ca-fc9a-44c6-aa39-3461e24bb47d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-fb5969844-jxrhx_calico-system(5e3e11ca-fc9a-44c6-aa39-3461e24bb47d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-fb5969844-jxrhx" podUID="5e3e11ca-fc9a-44c6-aa39-3461e24bb47d" Apr 25 00:01:43.863929 kubelet[3201]: E0425 00:01:43.863594 3201 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7qhnl" Apr 25 00:01:43.864444 kubelet[3201]: E0425 00:01:43.864000 3201 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:43.864444 kubelet[3201]: E0425 00:01:43.864046 3201 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-rcpfl" Apr 25 00:01:43.864444 kubelet[3201]: E0425 00:01:43.864087 3201 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-rcpfl" Apr 25 00:01:43.864583 kubelet[3201]: E0425 00:01:43.864087 3201 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-7qhnl_kube-system(e2dc8fd3-53ee-4b31-8d4f-cbcd7d64f683)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-7qhnl_kube-system(e2dc8fd3-53ee-4b31-8d4f-cbcd7d64f683)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-7qhnl" podUID="e2dc8fd3-53ee-4b31-8d4f-cbcd7d64f683" Apr 25 00:01:43.864583 kubelet[3201]: E0425 00:01:43.864141 3201 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-rcpfl_calico-system(22e1b7c8-1a20-4649-bf8c-3b2a82e5872a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-rcpfl_calico-system(22e1b7c8-1a20-4649-bf8c-3b2a82e5872a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-rcpfl" podUID="22e1b7c8-1a20-4649-bf8c-3b2a82e5872a" Apr 25 00:01:43.864583 kubelet[3201]: E0425 00:01:43.864182 3201 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:43.864839 kubelet[3201]: E0425 00:01:43.864230 3201 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-69c6c7bbcf-jntps" Apr 25 00:01:43.864839 kubelet[3201]: E0425 00:01:43.864247 3201 kuberuntime_manager.go:1252] "CreatePodSandbox for 
pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-69c6c7bbcf-jntps" Apr 25 00:01:43.864839 kubelet[3201]: E0425 00:01:43.864323 3201 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69c6c7bbcf-jntps_calico-system(fbf459d9-c3ed-42cc-9f78-25b84022bdb0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69c6c7bbcf-jntps_calico-system(fbf459d9-c3ed-42cc-9f78-25b84022bdb0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-69c6c7bbcf-jntps" podUID="fbf459d9-c3ed-42cc-9f78-25b84022bdb0" Apr 25 00:01:43.865962 containerd[1990]: time="2026-04-25T00:01:43.865714407Z" level=error msg="encountered an error cleaning up failed sandbox \"4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:43.865962 containerd[1990]: time="2026-04-25T00:01:43.865824415Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fj8v9,Uid:91a98ef7-481a-4c28-830d-88f976ac72ee,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:43.867021 containerd[1990]: time="2026-04-25T00:01:43.866294964Z" level=error msg="encountered an error cleaning up failed sandbox \"4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:43.867021 containerd[1990]: time="2026-04-25T00:01:43.866355319Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69c6c7bbcf-8gdvg,Uid:04413392-8f1c-4eff-8af3-8c2e64b92e0c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:43.867110 kubelet[3201]: E0425 00:01:43.866030 3201 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:43.867110 kubelet[3201]: E0425 00:01:43.866074 3201 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fj8v9" Apr 25 00:01:43.867110 kubelet[3201]: E0425 00:01:43.866110 3201 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fj8v9" Apr 25 00:01:43.867235 kubelet[3201]: E0425 00:01:43.866167 3201 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fj8v9_kube-system(91a98ef7-481a-4c28-830d-88f976ac72ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fj8v9_kube-system(91a98ef7-481a-4c28-830d-88f976ac72ee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fj8v9" podUID="91a98ef7-481a-4c28-830d-88f976ac72ee" Apr 25 00:01:43.867235 kubelet[3201]: E0425 00:01:43.866663 3201 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:43.867235 kubelet[3201]: E0425 00:01:43.866703 3201 kuberuntime_sandbox.go:70] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-69c6c7bbcf-8gdvg" Apr 25 00:01:43.867934 kubelet[3201]: E0425 00:01:43.866727 3201 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-69c6c7bbcf-8gdvg" Apr 25 00:01:43.867934 kubelet[3201]: E0425 00:01:43.866793 3201 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69c6c7bbcf-8gdvg_calico-system(04413392-8f1c-4eff-8af3-8c2e64b92e0c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69c6c7bbcf-8gdvg_calico-system(04413392-8f1c-4eff-8af3-8c2e64b92e0c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-69c6c7bbcf-8gdvg" podUID="04413392-8f1c-4eff-8af3-8c2e64b92e0c" Apr 25 00:01:43.911601 systemd[1]: Created slice kubepods-besteffort-podb2f1eba8_430b_4eb5_88b7_fcf647e52b8e.slice - libcontainer container kubepods-besteffort-podb2f1eba8_430b_4eb5_88b7_fcf647e52b8e.slice. 
Apr 25 00:01:43.914708 containerd[1990]: time="2026-04-25T00:01:43.914661626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8m4mt,Uid:b2f1eba8-430b-4eb5-88b7-fcf647e52b8e,Namespace:calico-system,Attempt:0,}" Apr 25 00:01:44.006232 containerd[1990]: time="2026-04-25T00:01:44.006055524Z" level=error msg="Failed to destroy network for sandbox \"6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:44.007184 containerd[1990]: time="2026-04-25T00:01:44.006986242Z" level=error msg="encountered an error cleaning up failed sandbox \"6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:44.007184 containerd[1990]: time="2026-04-25T00:01:44.007064551Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8m4mt,Uid:b2f1eba8-430b-4eb5-88b7-fcf647e52b8e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:44.007483 kubelet[3201]: E0425 00:01:44.007440 3201 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:44.007569 kubelet[3201]: E0425 00:01:44.007513 3201 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8m4mt" Apr 25 00:01:44.007569 kubelet[3201]: E0425 00:01:44.007545 3201 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8m4mt" Apr 25 00:01:44.007660 kubelet[3201]: E0425 00:01:44.007610 3201 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8m4mt_calico-system(b2f1eba8-430b-4eb5-88b7-fcf647e52b8e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8m4mt_calico-system(b2f1eba8-430b-4eb5-88b7-fcf647e52b8e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8m4mt" podUID="b2f1eba8-430b-4eb5-88b7-fcf647e52b8e" Apr 25 00:01:44.290299 kubelet[3201]: I0425 00:01:44.289623 3201 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3" Apr 25 00:01:44.292317 kubelet[3201]: I0425 00:01:44.292281 3201 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842" Apr 25 00:01:44.331602 kubelet[3201]: I0425 00:01:44.331521 3201 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb" Apr 25 00:01:44.338893 kubelet[3201]: I0425 00:01:44.338847 3201 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8" Apr 25 00:01:44.367306 containerd[1990]: time="2026-04-25T00:01:44.367194681Z" level=info msg="StopPodSandbox for \"ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb\"" Apr 25 00:01:44.369760 containerd[1990]: time="2026-04-25T00:01:44.369715815Z" level=info msg="Ensure that sandbox ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb in task-service has been cleanup successfully" Apr 25 00:01:44.374195 containerd[1990]: time="2026-04-25T00:01:44.373730852Z" level=info msg="StopPodSandbox for \"bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8\"" Apr 25 00:01:44.376305 containerd[1990]: time="2026-04-25T00:01:44.376247068Z" level=info msg="Ensure that sandbox bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8 in task-service has been cleanup successfully" Apr 25 00:01:44.377850 containerd[1990]: time="2026-04-25T00:01:44.377162326Z" level=info msg="StopPodSandbox for \"8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3\"" Apr 25 00:01:44.377850 containerd[1990]: time="2026-04-25T00:01:44.377532218Z" level=info msg="Ensure that sandbox 8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3 in task-service has been cleanup successfully" Apr 25 00:01:44.380754 containerd[1990]: 
time="2026-04-25T00:01:44.380698313Z" level=info msg="StopPodSandbox for \"6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842\"" Apr 25 00:01:44.381186 containerd[1990]: time="2026-04-25T00:01:44.381145567Z" level=info msg="Ensure that sandbox 6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842 in task-service has been cleanup successfully" Apr 25 00:01:44.382252 kubelet[3201]: I0425 00:01:44.382219 3201 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b" Apr 25 00:01:44.383445 containerd[1990]: time="2026-04-25T00:01:44.383414595Z" level=info msg="StopPodSandbox for \"4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b\"" Apr 25 00:01:44.383867 containerd[1990]: time="2026-04-25T00:01:44.383616571Z" level=info msg="Ensure that sandbox 4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b in task-service has been cleanup successfully" Apr 25 00:01:44.454882 kubelet[3201]: I0425 00:01:44.454612 3201 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0" Apr 25 00:01:44.460629 containerd[1990]: time="2026-04-25T00:01:44.459186045Z" level=info msg="StopPodSandbox for \"f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0\"" Apr 25 00:01:44.475164 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b-shm.mount: Deactivated successfully. Apr 25 00:01:44.475312 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027-shm.mount: Deactivated successfully. 
Apr 25 00:01:44.536781 containerd[1990]: time="2026-04-25T00:01:44.536659883Z" level=info msg="Ensure that sandbox f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0 in task-service has been cleanup successfully" Apr 25 00:01:44.552152 kubelet[3201]: I0425 00:01:44.551512 3201 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b" Apr 25 00:01:44.555022 containerd[1990]: time="2026-04-25T00:01:44.554983476Z" level=info msg="StopPodSandbox for \"2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b\"" Apr 25 00:01:44.555235 containerd[1990]: time="2026-04-25T00:01:44.555209665Z" level=info msg="Ensure that sandbox 2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b in task-service has been cleanup successfully" Apr 25 00:01:44.583465 containerd[1990]: time="2026-04-25T00:01:44.583409901Z" level=error msg="StopPodSandbox for \"4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b\" failed" error="failed to destroy network for sandbox \"4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:44.584206 kubelet[3201]: E0425 00:01:44.584045 3201 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b" Apr 25 00:01:44.584206 kubelet[3201]: E0425 00:01:44.584109 3201 kuberuntime_manager.go:1586] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b"} Apr 25 00:01:44.584206 kubelet[3201]: E0425 00:01:44.584177 3201 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"04413392-8f1c-4eff-8af3-8c2e64b92e0c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 25 00:01:44.584783 kubelet[3201]: E0425 00:01:44.584207 3201 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"04413392-8f1c-4eff-8af3-8c2e64b92e0c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-69c6c7bbcf-8gdvg" podUID="04413392-8f1c-4eff-8af3-8c2e64b92e0c" Apr 25 00:01:44.584783 kubelet[3201]: I0425 00:01:44.584704 3201 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027" Apr 25 00:01:44.591250 containerd[1990]: time="2026-04-25T00:01:44.591195157Z" level=info msg="StopPodSandbox for \"4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027\"" Apr 25 00:01:44.593000 containerd[1990]: time="2026-04-25T00:01:44.592919533Z" level=info msg="Ensure that sandbox 4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027 in task-service has been cleanup successfully" Apr 25 00:01:44.602891 containerd[1990]: 
time="2026-04-25T00:01:44.602718318Z" level=error msg="StopPodSandbox for \"8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3\" failed" error="failed to destroy network for sandbox \"8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:44.603518 kubelet[3201]: E0425 00:01:44.603328 3201 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3" Apr 25 00:01:44.603518 kubelet[3201]: E0425 00:01:44.603387 3201 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3"} Apr 25 00:01:44.603518 kubelet[3201]: E0425 00:01:44.603448 3201 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"22e1b7c8-1a20-4649-bf8c-3b2a82e5872a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 25 00:01:44.603518 kubelet[3201]: E0425 00:01:44.603480 3201 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"22e1b7c8-1a20-4649-bf8c-3b2a82e5872a\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-rcpfl" podUID="22e1b7c8-1a20-4649-bf8c-3b2a82e5872a" Apr 25 00:01:44.700866 containerd[1990]: time="2026-04-25T00:01:44.700022353Z" level=error msg="StopPodSandbox for \"bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8\" failed" error="failed to destroy network for sandbox \"bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:44.701305 kubelet[3201]: E0425 00:01:44.700296 3201 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8" Apr 25 00:01:44.701305 kubelet[3201]: E0425 00:01:44.700362 3201 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8"} Apr 25 00:01:44.701305 kubelet[3201]: E0425 00:01:44.700407 3201 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5e3e11ca-fc9a-44c6-aa39-3461e24bb47d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 25 00:01:44.701305 kubelet[3201]: E0425 00:01:44.700440 3201 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5e3e11ca-fc9a-44c6-aa39-3461e24bb47d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-fb5969844-jxrhx" podUID="5e3e11ca-fc9a-44c6-aa39-3461e24bb47d" Apr 25 00:01:44.721792 kubelet[3201]: I0425 00:01:44.721281 3201 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lvd25" podStartSLOduration=5.035109615 podStartE2EDuration="27.721252557s" podCreationTimestamp="2026-04-25 00:01:17 +0000 UTC" firstStartedPulling="2026-04-25 00:01:18.637542772 +0000 UTC m=+22.924420268" lastFinishedPulling="2026-04-25 00:01:41.323685728 +0000 UTC m=+45.610563210" observedRunningTime="2026-04-25 00:01:44.500030753 +0000 UTC m=+48.786908259" watchObservedRunningTime="2026-04-25 00:01:44.721252557 +0000 UTC m=+49.008130064" Apr 25 00:01:44.744060 containerd[1990]: time="2026-04-25T00:01:44.743908489Z" level=error msg="StopPodSandbox for \"ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb\" failed" error="failed to destroy network for sandbox \"ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Apr 25 00:01:44.744466 kubelet[3201]: E0425 00:01:44.744192 3201 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb" Apr 25 00:01:44.744466 kubelet[3201]: E0425 00:01:44.744251 3201 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb"} Apr 25 00:01:44.744466 kubelet[3201]: E0425 00:01:44.744299 3201 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fbf459d9-c3ed-42cc-9f78-25b84022bdb0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 25 00:01:44.744466 kubelet[3201]: E0425 00:01:44.744329 3201 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fbf459d9-c3ed-42cc-9f78-25b84022bdb0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-69c6c7bbcf-jntps" podUID="fbf459d9-c3ed-42cc-9f78-25b84022bdb0" Apr 25 00:01:45.298866 
containerd[1990]: 2026-04-25 00:01:44.953 [INFO][4570] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842" Apr 25 00:01:45.298866 containerd[1990]: 2026-04-25 00:01:44.954 [INFO][4570] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842" iface="eth0" netns="/var/run/netns/cni-1cebea59-78a5-cf84-4227-baaea2e62e5c" Apr 25 00:01:45.298866 containerd[1990]: 2026-04-25 00:01:44.955 [INFO][4570] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842" iface="eth0" netns="/var/run/netns/cni-1cebea59-78a5-cf84-4227-baaea2e62e5c" Apr 25 00:01:45.298866 containerd[1990]: 2026-04-25 00:01:44.955 [INFO][4570] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842" iface="eth0" netns="/var/run/netns/cni-1cebea59-78a5-cf84-4227-baaea2e62e5c" Apr 25 00:01:45.298866 containerd[1990]: 2026-04-25 00:01:44.955 [INFO][4570] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842" Apr 25 00:01:45.298866 containerd[1990]: 2026-04-25 00:01:44.955 [INFO][4570] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842" Apr 25 00:01:45.298866 containerd[1990]: 2026-04-25 00:01:45.267 [INFO][4647] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842" HandleID="k8s-pod-network.6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842" Workload="ip--172--31--27--158-k8s-csi--node--driver--8m4mt-eth0" Apr 25 00:01:45.298866 containerd[1990]: 2026-04-25 00:01:45.267 [INFO][4647] ipam/ipam_plugin.go 438: About 
to acquire host-wide IPAM lock. Apr 25 00:01:45.298866 containerd[1990]: 2026-04-25 00:01:45.267 [INFO][4647] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:01:45.298866 containerd[1990]: 2026-04-25 00:01:45.286 [WARNING][4647] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842" HandleID="k8s-pod-network.6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842" Workload="ip--172--31--27--158-k8s-csi--node--driver--8m4mt-eth0" Apr 25 00:01:45.298866 containerd[1990]: 2026-04-25 00:01:45.286 [INFO][4647] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842" HandleID="k8s-pod-network.6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842" Workload="ip--172--31--27--158-k8s-csi--node--driver--8m4mt-eth0" Apr 25 00:01:45.298866 containerd[1990]: 2026-04-25 00:01:45.289 [INFO][4647] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:45.298866 containerd[1990]: 2026-04-25 00:01:45.295 [INFO][4570] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842" Apr 25 00:01:45.301201 containerd[1990]: time="2026-04-25T00:01:45.298967637Z" level=info msg="TearDown network for sandbox \"6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842\" successfully" Apr 25 00:01:45.301201 containerd[1990]: time="2026-04-25T00:01:45.299005711Z" level=info msg="StopPodSandbox for \"6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842\" returns successfully" Apr 25 00:01:45.301201 containerd[1990]: time="2026-04-25T00:01:45.300503401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8m4mt,Uid:b2f1eba8-430b-4eb5-88b7-fcf647e52b8e,Namespace:calico-system,Attempt:1,}" Apr 25 00:01:45.305782 systemd[1]: run-netns-cni\x2d1cebea59\x2d78a5\x2dcf84\x2d4227\x2dbaaea2e62e5c.mount: Deactivated successfully. Apr 25 00:01:45.351403 containerd[1990]: 2026-04-25 00:01:44.960 [INFO][4604] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0" Apr 25 00:01:45.351403 containerd[1990]: 2026-04-25 00:01:44.964 [INFO][4604] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0" iface="eth0" netns="/var/run/netns/cni-7a3d041d-4fe9-1d23-fc97-4fe249a9495a" Apr 25 00:01:45.351403 containerd[1990]: 2026-04-25 00:01:44.965 [INFO][4604] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0" iface="eth0" netns="/var/run/netns/cni-7a3d041d-4fe9-1d23-fc97-4fe249a9495a" Apr 25 00:01:45.351403 containerd[1990]: 2026-04-25 00:01:44.967 [INFO][4604] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0" iface="eth0" netns="/var/run/netns/cni-7a3d041d-4fe9-1d23-fc97-4fe249a9495a" Apr 25 00:01:45.351403 containerd[1990]: 2026-04-25 00:01:44.967 [INFO][4604] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0" Apr 25 00:01:45.351403 containerd[1990]: 2026-04-25 00:01:44.968 [INFO][4604] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0" Apr 25 00:01:45.351403 containerd[1990]: 2026-04-25 00:01:45.268 [INFO][4652] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0" HandleID="k8s-pod-network.f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0" Workload="ip--172--31--27--158-k8s-calico--kube--controllers--577c9d7cc5--qb9xm-eth0" Apr 25 00:01:45.351403 containerd[1990]: 2026-04-25 00:01:45.268 [INFO][4652] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:45.351403 containerd[1990]: 2026-04-25 00:01:45.291 [INFO][4652] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:01:45.351403 containerd[1990]: 2026-04-25 00:01:45.311 [WARNING][4652] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0" HandleID="k8s-pod-network.f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0" Workload="ip--172--31--27--158-k8s-calico--kube--controllers--577c9d7cc5--qb9xm-eth0" Apr 25 00:01:45.351403 containerd[1990]: 2026-04-25 00:01:45.312 [INFO][4652] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0" HandleID="k8s-pod-network.f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0" Workload="ip--172--31--27--158-k8s-calico--kube--controllers--577c9d7cc5--qb9xm-eth0" Apr 25 00:01:45.351403 containerd[1990]: 2026-04-25 00:01:45.320 [INFO][4652] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:45.351403 containerd[1990]: 2026-04-25 00:01:45.336 [INFO][4604] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0" Apr 25 00:01:45.356682 containerd[1990]: time="2026-04-25T00:01:45.352690778Z" level=info msg="TearDown network for sandbox \"f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0\" successfully" Apr 25 00:01:45.356682 containerd[1990]: time="2026-04-25T00:01:45.352723252Z" level=info msg="StopPodSandbox for \"f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0\" returns successfully" Apr 25 00:01:45.356682 containerd[1990]: time="2026-04-25T00:01:45.353657545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-577c9d7cc5-qb9xm,Uid:fd5cb78e-5eeb-47d8-bac1-f83b8ac68c9f,Namespace:calico-system,Attempt:1,}" Apr 25 00:01:45.386879 containerd[1990]: 2026-04-25 00:01:44.973 [INFO][4614] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027" Apr 25 00:01:45.386879 containerd[1990]: 2026-04-25 00:01:44.973 [INFO][4614] cni-plugin/dataplane_linux.go 559: Deleting 
workload's device in netns. ContainerID="4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027" iface="eth0" netns="/var/run/netns/cni-5266ed63-1909-fa63-efbe-d340a757f632" Apr 25 00:01:45.386879 containerd[1990]: 2026-04-25 00:01:44.974 [INFO][4614] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027" iface="eth0" netns="/var/run/netns/cni-5266ed63-1909-fa63-efbe-d340a757f632" Apr 25 00:01:45.386879 containerd[1990]: 2026-04-25 00:01:44.980 [INFO][4614] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027" iface="eth0" netns="/var/run/netns/cni-5266ed63-1909-fa63-efbe-d340a757f632" Apr 25 00:01:45.386879 containerd[1990]: 2026-04-25 00:01:44.980 [INFO][4614] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027" Apr 25 00:01:45.386879 containerd[1990]: 2026-04-25 00:01:44.980 [INFO][4614] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027" Apr 25 00:01:45.386879 containerd[1990]: 2026-04-25 00:01:45.278 [INFO][4662] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027" HandleID="k8s-pod-network.4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027" Workload="ip--172--31--27--158-k8s-coredns--674b8bbfcf--fj8v9-eth0" Apr 25 00:01:45.386879 containerd[1990]: 2026-04-25 00:01:45.287 [INFO][4662] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:45.386879 containerd[1990]: 2026-04-25 00:01:45.316 [INFO][4662] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 25 00:01:45.386879 containerd[1990]: 2026-04-25 00:01:45.354 [WARNING][4662] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027" HandleID="k8s-pod-network.4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027" Workload="ip--172--31--27--158-k8s-coredns--674b8bbfcf--fj8v9-eth0" Apr 25 00:01:45.386879 containerd[1990]: 2026-04-25 00:01:45.354 [INFO][4662] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027" HandleID="k8s-pod-network.4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027" Workload="ip--172--31--27--158-k8s-coredns--674b8bbfcf--fj8v9-eth0" Apr 25 00:01:45.386879 containerd[1990]: 2026-04-25 00:01:45.360 [INFO][4662] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:45.386879 containerd[1990]: 2026-04-25 00:01:45.368 [INFO][4614] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027" Apr 25 00:01:45.386879 containerd[1990]: time="2026-04-25T00:01:45.386315815Z" level=info msg="TearDown network for sandbox \"4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027\" successfully" Apr 25 00:01:45.386879 containerd[1990]: time="2026-04-25T00:01:45.386347099Z" level=info msg="StopPodSandbox for \"4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027\" returns successfully" Apr 25 00:01:45.390566 containerd[1990]: time="2026-04-25T00:01:45.388244507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fj8v9,Uid:91a98ef7-481a-4c28-830d-88f976ac72ee,Namespace:kube-system,Attempt:1,}" Apr 25 00:01:45.401753 containerd[1990]: 2026-04-25 00:01:44.948 [INFO][4610] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b" Apr 25 00:01:45.401753 containerd[1990]: 2026-04-25 00:01:44.949 [INFO][4610] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b" iface="eth0" netns="/var/run/netns/cni-ddc35ef9-d7d8-4c5b-8184-843d2a97ac97" Apr 25 00:01:45.401753 containerd[1990]: 2026-04-25 00:01:44.949 [INFO][4610] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b" iface="eth0" netns="/var/run/netns/cni-ddc35ef9-d7d8-4c5b-8184-843d2a97ac97" Apr 25 00:01:45.401753 containerd[1990]: 2026-04-25 00:01:44.955 [INFO][4610] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b" iface="eth0" netns="/var/run/netns/cni-ddc35ef9-d7d8-4c5b-8184-843d2a97ac97" Apr 25 00:01:45.401753 containerd[1990]: 2026-04-25 00:01:44.955 [INFO][4610] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b" Apr 25 00:01:45.401753 containerd[1990]: 2026-04-25 00:01:44.955 [INFO][4610] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b" Apr 25 00:01:45.401753 containerd[1990]: 2026-04-25 00:01:45.287 [INFO][4646] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b" HandleID="k8s-pod-network.2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b" Workload="ip--172--31--27--158-k8s-coredns--674b8bbfcf--7qhnl-eth0" Apr 25 00:01:45.401753 containerd[1990]: 2026-04-25 00:01:45.288 [INFO][4646] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:45.401753 containerd[1990]: 2026-04-25 00:01:45.360 [INFO][4646] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:01:45.401753 containerd[1990]: 2026-04-25 00:01:45.375 [WARNING][4646] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b" HandleID="k8s-pod-network.2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b" Workload="ip--172--31--27--158-k8s-coredns--674b8bbfcf--7qhnl-eth0" Apr 25 00:01:45.401753 containerd[1990]: 2026-04-25 00:01:45.375 [INFO][4646] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b" HandleID="k8s-pod-network.2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b" Workload="ip--172--31--27--158-k8s-coredns--674b8bbfcf--7qhnl-eth0" Apr 25 00:01:45.401753 containerd[1990]: 2026-04-25 00:01:45.378 [INFO][4646] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:45.401753 containerd[1990]: 2026-04-25 00:01:45.390 [INFO][4610] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b" Apr 25 00:01:45.402746 containerd[1990]: time="2026-04-25T00:01:45.402709843Z" level=info msg="TearDown network for sandbox \"2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b\" successfully" Apr 25 00:01:45.403213 containerd[1990]: time="2026-04-25T00:01:45.403187320Z" level=info msg="StopPodSandbox for \"2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b\" returns successfully" Apr 25 00:01:45.404233 containerd[1990]: time="2026-04-25T00:01:45.404181083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7qhnl,Uid:e2dc8fd3-53ee-4b31-8d4f-cbcd7d64f683,Namespace:kube-system,Attempt:1,}" Apr 25 00:01:45.479396 systemd[1]: run-netns-cni\x2dddc35ef9\x2dd7d8\x2d4c5b\x2d8184\x2d843d2a97ac97.mount: Deactivated successfully. Apr 25 00:01:45.479526 systemd[1]: run-netns-cni\x2d5266ed63\x2d1909\x2dfa63\x2defbe\x2dd340a757f632.mount: Deactivated successfully. 
Apr 25 00:01:45.479613 systemd[1]: run-netns-cni\x2d7a3d041d\x2d4fe9\x2d1d23\x2dfc97\x2d4fe249a9495a.mount: Deactivated successfully. Apr 25 00:01:45.606961 containerd[1990]: time="2026-04-25T00:01:45.604948116Z" level=info msg="StopPodSandbox for \"bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8\"" Apr 25 00:01:45.824663 systemd-networkd[1895]: cali4e0d6c572db: Link UP Apr 25 00:01:45.825634 systemd-networkd[1895]: cali4e0d6c572db: Gained carrier Apr 25 00:01:45.844250 (udev-worker)[4811]: Network interface NamePolicy= disabled on kernel command line. Apr 25 00:01:45.928874 containerd[1990]: 2026-04-25 00:01:45.411 [ERROR][4688] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 25 00:01:45.928874 containerd[1990]: 2026-04-25 00:01:45.443 [INFO][4688] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--158-k8s-csi--node--driver--8m4mt-eth0 csi-node-driver- calico-system b2f1eba8-430b-4eb5-88b7-fcf647e52b8e 932 0 2026-04-25 00:01:18 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-27-158 csi-node-driver-8m4mt eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4e0d6c572db [] [] }} ContainerID="c98103ac2c682c1ead40fd35b61796269df0da01745057e081645829bd18f090" Namespace="calico-system" Pod="csi-node-driver-8m4mt" WorkloadEndpoint="ip--172--31--27--158-k8s-csi--node--driver--8m4mt-" Apr 25 00:01:45.928874 containerd[1990]: 2026-04-25 00:01:45.443 [INFO][4688] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="c98103ac2c682c1ead40fd35b61796269df0da01745057e081645829bd18f090" Namespace="calico-system" Pod="csi-node-driver-8m4mt" WorkloadEndpoint="ip--172--31--27--158-k8s-csi--node--driver--8m4mt-eth0" Apr 25 00:01:45.928874 containerd[1990]: 2026-04-25 00:01:45.598 [INFO][4726] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c98103ac2c682c1ead40fd35b61796269df0da01745057e081645829bd18f090" HandleID="k8s-pod-network.c98103ac2c682c1ead40fd35b61796269df0da01745057e081645829bd18f090" Workload="ip--172--31--27--158-k8s-csi--node--driver--8m4mt-eth0" Apr 25 00:01:45.928874 containerd[1990]: 2026-04-25 00:01:45.647 [INFO][4726] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c98103ac2c682c1ead40fd35b61796269df0da01745057e081645829bd18f090" HandleID="k8s-pod-network.c98103ac2c682c1ead40fd35b61796269df0da01745057e081645829bd18f090" Workload="ip--172--31--27--158-k8s-csi--node--driver--8m4mt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003585c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-27-158", "pod":"csi-node-driver-8m4mt", "timestamp":"2026-04-25 00:01:45.598768972 +0000 UTC"}, Hostname:"ip-172-31-27-158", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000528f20)} Apr 25 00:01:45.928874 containerd[1990]: 2026-04-25 00:01:45.647 [INFO][4726] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:45.928874 containerd[1990]: 2026-04-25 00:01:45.647 [INFO][4726] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 25 00:01:45.928874 containerd[1990]: 2026-04-25 00:01:45.647 [INFO][4726] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-158' Apr 25 00:01:45.928874 containerd[1990]: 2026-04-25 00:01:45.658 [INFO][4726] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c98103ac2c682c1ead40fd35b61796269df0da01745057e081645829bd18f090" host="ip-172-31-27-158" Apr 25 00:01:45.928874 containerd[1990]: 2026-04-25 00:01:45.675 [INFO][4726] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-27-158" Apr 25 00:01:45.928874 containerd[1990]: 2026-04-25 00:01:45.693 [INFO][4726] ipam/ipam.go 526: Trying affinity for 192.168.100.0/26 host="ip-172-31-27-158" Apr 25 00:01:45.928874 containerd[1990]: 2026-04-25 00:01:45.704 [INFO][4726] ipam/ipam.go 160: Attempting to load block cidr=192.168.100.0/26 host="ip-172-31-27-158" Apr 25 00:01:45.928874 containerd[1990]: 2026-04-25 00:01:45.709 [INFO][4726] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.100.0/26 host="ip-172-31-27-158" Apr 25 00:01:45.928874 containerd[1990]: 2026-04-25 00:01:45.710 [INFO][4726] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.100.0/26 handle="k8s-pod-network.c98103ac2c682c1ead40fd35b61796269df0da01745057e081645829bd18f090" host="ip-172-31-27-158" Apr 25 00:01:45.928874 containerd[1990]: 2026-04-25 00:01:45.715 [INFO][4726] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c98103ac2c682c1ead40fd35b61796269df0da01745057e081645829bd18f090 Apr 25 00:01:45.928874 containerd[1990]: 2026-04-25 00:01:45.739 [INFO][4726] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.100.0/26 handle="k8s-pod-network.c98103ac2c682c1ead40fd35b61796269df0da01745057e081645829bd18f090" host="ip-172-31-27-158" Apr 25 00:01:45.928874 containerd[1990]: 2026-04-25 00:01:45.760 [INFO][4726] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.100.1/26] block=192.168.100.0/26 
handle="k8s-pod-network.c98103ac2c682c1ead40fd35b61796269df0da01745057e081645829bd18f090" host="ip-172-31-27-158" Apr 25 00:01:45.928874 containerd[1990]: 2026-04-25 00:01:45.761 [INFO][4726] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.100.1/26] handle="k8s-pod-network.c98103ac2c682c1ead40fd35b61796269df0da01745057e081645829bd18f090" host="ip-172-31-27-158" Apr 25 00:01:45.928874 containerd[1990]: 2026-04-25 00:01:45.761 [INFO][4726] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:45.928874 containerd[1990]: 2026-04-25 00:01:45.762 [INFO][4726] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.100.1/26] IPv6=[] ContainerID="c98103ac2c682c1ead40fd35b61796269df0da01745057e081645829bd18f090" HandleID="k8s-pod-network.c98103ac2c682c1ead40fd35b61796269df0da01745057e081645829bd18f090" Workload="ip--172--31--27--158-k8s-csi--node--driver--8m4mt-eth0" Apr 25 00:01:45.931388 containerd[1990]: 2026-04-25 00:01:45.772 [INFO][4688] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c98103ac2c682c1ead40fd35b61796269df0da01745057e081645829bd18f090" Namespace="calico-system" Pod="csi-node-driver-8m4mt" WorkloadEndpoint="ip--172--31--27--158-k8s-csi--node--driver--8m4mt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--158-k8s-csi--node--driver--8m4mt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b2f1eba8-430b-4eb5-88b7-fcf647e52b8e", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 1, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-158", ContainerID:"", Pod:"csi-node-driver-8m4mt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.100.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4e0d6c572db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:45.931388 containerd[1990]: 2026-04-25 00:01:45.776 [INFO][4688] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.1/32] ContainerID="c98103ac2c682c1ead40fd35b61796269df0da01745057e081645829bd18f090" Namespace="calico-system" Pod="csi-node-driver-8m4mt" WorkloadEndpoint="ip--172--31--27--158-k8s-csi--node--driver--8m4mt-eth0" Apr 25 00:01:45.931388 containerd[1990]: 2026-04-25 00:01:45.776 [INFO][4688] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e0d6c572db ContainerID="c98103ac2c682c1ead40fd35b61796269df0da01745057e081645829bd18f090" Namespace="calico-system" Pod="csi-node-driver-8m4mt" WorkloadEndpoint="ip--172--31--27--158-k8s-csi--node--driver--8m4mt-eth0" Apr 25 00:01:45.931388 containerd[1990]: 2026-04-25 00:01:45.822 [INFO][4688] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c98103ac2c682c1ead40fd35b61796269df0da01745057e081645829bd18f090" Namespace="calico-system" Pod="csi-node-driver-8m4mt" WorkloadEndpoint="ip--172--31--27--158-k8s-csi--node--driver--8m4mt-eth0" Apr 25 00:01:45.931388 containerd[1990]: 2026-04-25 00:01:45.836 [INFO][4688] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c98103ac2c682c1ead40fd35b61796269df0da01745057e081645829bd18f090" Namespace="calico-system" Pod="csi-node-driver-8m4mt" WorkloadEndpoint="ip--172--31--27--158-k8s-csi--node--driver--8m4mt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--158-k8s-csi--node--driver--8m4mt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b2f1eba8-430b-4eb5-88b7-fcf647e52b8e", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 1, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-158", ContainerID:"c98103ac2c682c1ead40fd35b61796269df0da01745057e081645829bd18f090", Pod:"csi-node-driver-8m4mt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.100.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4e0d6c572db", MAC:"ae:0b:6c:f5:19:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:45.931388 containerd[1990]: 2026-04-25 00:01:45.885 [INFO][4688] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="c98103ac2c682c1ead40fd35b61796269df0da01745057e081645829bd18f090" Namespace="calico-system" Pod="csi-node-driver-8m4mt" WorkloadEndpoint="ip--172--31--27--158-k8s-csi--node--driver--8m4mt-eth0" Apr 25 00:01:45.963557 (udev-worker)[4810]: Network interface NamePolicy= disabled on kernel command line. Apr 25 00:01:45.972026 systemd-networkd[1895]: calic3c969b68b3: Link UP Apr 25 00:01:45.972365 systemd-networkd[1895]: calic3c969b68b3: Gained carrier Apr 25 00:01:46.040562 containerd[1990]: 2026-04-25 00:01:45.520 [ERROR][4709] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 25 00:01:46.040562 containerd[1990]: 2026-04-25 00:01:45.547 [INFO][4709] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--158-k8s-coredns--674b8bbfcf--fj8v9-eth0 coredns-674b8bbfcf- kube-system 91a98ef7-481a-4c28-830d-88f976ac72ee 934 0 2026-04-25 00:01:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-27-158 coredns-674b8bbfcf-fj8v9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic3c969b68b3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="234ad4ac6fd8f655a9fa68e02d67d5113799b319ae6de8b81e1e9e4bc9e7fc43" Namespace="kube-system" Pod="coredns-674b8bbfcf-fj8v9" WorkloadEndpoint="ip--172--31--27--158-k8s-coredns--674b8bbfcf--fj8v9-" Apr 25 00:01:46.040562 containerd[1990]: 2026-04-25 00:01:45.547 [INFO][4709] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="234ad4ac6fd8f655a9fa68e02d67d5113799b319ae6de8b81e1e9e4bc9e7fc43" Namespace="kube-system" Pod="coredns-674b8bbfcf-fj8v9" WorkloadEndpoint="ip--172--31--27--158-k8s-coredns--674b8bbfcf--fj8v9-eth0" 
Apr 25 00:01:46.040562 containerd[1990]: 2026-04-25 00:01:45.769 [INFO][4749] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="234ad4ac6fd8f655a9fa68e02d67d5113799b319ae6de8b81e1e9e4bc9e7fc43" HandleID="k8s-pod-network.234ad4ac6fd8f655a9fa68e02d67d5113799b319ae6de8b81e1e9e4bc9e7fc43" Workload="ip--172--31--27--158-k8s-coredns--674b8bbfcf--fj8v9-eth0" Apr 25 00:01:46.040562 containerd[1990]: 2026-04-25 00:01:45.790 [INFO][4749] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="234ad4ac6fd8f655a9fa68e02d67d5113799b319ae6de8b81e1e9e4bc9e7fc43" HandleID="k8s-pod-network.234ad4ac6fd8f655a9fa68e02d67d5113799b319ae6de8b81e1e9e4bc9e7fc43" Workload="ip--172--31--27--158-k8s-coredns--674b8bbfcf--fj8v9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00069e080), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-27-158", "pod":"coredns-674b8bbfcf-fj8v9", "timestamp":"2026-04-25 00:01:45.769734181 +0000 UTC"}, Hostname:"ip-172-31-27-158", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004ffb80)} Apr 25 00:01:46.040562 containerd[1990]: 2026-04-25 00:01:45.790 [INFO][4749] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:46.040562 containerd[1990]: 2026-04-25 00:01:45.790 [INFO][4749] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 25 00:01:46.040562 containerd[1990]: 2026-04-25 00:01:45.790 [INFO][4749] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-158' Apr 25 00:01:46.040562 containerd[1990]: 2026-04-25 00:01:45.798 [INFO][4749] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.234ad4ac6fd8f655a9fa68e02d67d5113799b319ae6de8b81e1e9e4bc9e7fc43" host="ip-172-31-27-158" Apr 25 00:01:46.040562 containerd[1990]: 2026-04-25 00:01:45.815 [INFO][4749] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-27-158" Apr 25 00:01:46.040562 containerd[1990]: 2026-04-25 00:01:45.847 [INFO][4749] ipam/ipam.go 526: Trying affinity for 192.168.100.0/26 host="ip-172-31-27-158" Apr 25 00:01:46.040562 containerd[1990]: 2026-04-25 00:01:45.888 [INFO][4749] ipam/ipam.go 160: Attempting to load block cidr=192.168.100.0/26 host="ip-172-31-27-158" Apr 25 00:01:46.040562 containerd[1990]: 2026-04-25 00:01:45.897 [INFO][4749] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.100.0/26 host="ip-172-31-27-158" Apr 25 00:01:46.040562 containerd[1990]: 2026-04-25 00:01:45.897 [INFO][4749] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.100.0/26 handle="k8s-pod-network.234ad4ac6fd8f655a9fa68e02d67d5113799b319ae6de8b81e1e9e4bc9e7fc43" host="ip-172-31-27-158" Apr 25 00:01:46.040562 containerd[1990]: 2026-04-25 00:01:45.909 [INFO][4749] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.234ad4ac6fd8f655a9fa68e02d67d5113799b319ae6de8b81e1e9e4bc9e7fc43 Apr 25 00:01:46.040562 containerd[1990]: 2026-04-25 00:01:45.930 [INFO][4749] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.100.0/26 handle="k8s-pod-network.234ad4ac6fd8f655a9fa68e02d67d5113799b319ae6de8b81e1e9e4bc9e7fc43" host="ip-172-31-27-158" Apr 25 00:01:46.040562 containerd[1990]: 2026-04-25 00:01:45.945 [INFO][4749] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.100.2/26] block=192.168.100.0/26 
handle="k8s-pod-network.234ad4ac6fd8f655a9fa68e02d67d5113799b319ae6de8b81e1e9e4bc9e7fc43" host="ip-172-31-27-158" Apr 25 00:01:46.040562 containerd[1990]: 2026-04-25 00:01:45.945 [INFO][4749] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.100.2/26] handle="k8s-pod-network.234ad4ac6fd8f655a9fa68e02d67d5113799b319ae6de8b81e1e9e4bc9e7fc43" host="ip-172-31-27-158" Apr 25 00:01:46.040562 containerd[1990]: 2026-04-25 00:01:45.945 [INFO][4749] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:46.040562 containerd[1990]: 2026-04-25 00:01:45.947 [INFO][4749] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.100.2/26] IPv6=[] ContainerID="234ad4ac6fd8f655a9fa68e02d67d5113799b319ae6de8b81e1e9e4bc9e7fc43" HandleID="k8s-pod-network.234ad4ac6fd8f655a9fa68e02d67d5113799b319ae6de8b81e1e9e4bc9e7fc43" Workload="ip--172--31--27--158-k8s-coredns--674b8bbfcf--fj8v9-eth0" Apr 25 00:01:46.042511 containerd[1990]: 2026-04-25 00:01:45.954 [INFO][4709] cni-plugin/k8s.go 418: Populated endpoint ContainerID="234ad4ac6fd8f655a9fa68e02d67d5113799b319ae6de8b81e1e9e4bc9e7fc43" Namespace="kube-system" Pod="coredns-674b8bbfcf-fj8v9" WorkloadEndpoint="ip--172--31--27--158-k8s-coredns--674b8bbfcf--fj8v9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--158-k8s-coredns--674b8bbfcf--fj8v9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"91a98ef7-481a-4c28-830d-88f976ac72ee", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 1, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-158", ContainerID:"", Pod:"coredns-674b8bbfcf-fj8v9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic3c969b68b3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:46.042511 containerd[1990]: 2026-04-25 00:01:45.954 [INFO][4709] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.2/32] ContainerID="234ad4ac6fd8f655a9fa68e02d67d5113799b319ae6de8b81e1e9e4bc9e7fc43" Namespace="kube-system" Pod="coredns-674b8bbfcf-fj8v9" WorkloadEndpoint="ip--172--31--27--158-k8s-coredns--674b8bbfcf--fj8v9-eth0" Apr 25 00:01:46.042511 containerd[1990]: 2026-04-25 00:01:45.955 [INFO][4709] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic3c969b68b3 ContainerID="234ad4ac6fd8f655a9fa68e02d67d5113799b319ae6de8b81e1e9e4bc9e7fc43" Namespace="kube-system" Pod="coredns-674b8bbfcf-fj8v9" WorkloadEndpoint="ip--172--31--27--158-k8s-coredns--674b8bbfcf--fj8v9-eth0" Apr 25 00:01:46.042511 containerd[1990]: 2026-04-25 00:01:45.975 [INFO][4709] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="234ad4ac6fd8f655a9fa68e02d67d5113799b319ae6de8b81e1e9e4bc9e7fc43" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-fj8v9" WorkloadEndpoint="ip--172--31--27--158-k8s-coredns--674b8bbfcf--fj8v9-eth0" Apr 25 00:01:46.042511 containerd[1990]: 2026-04-25 00:01:45.984 [INFO][4709] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="234ad4ac6fd8f655a9fa68e02d67d5113799b319ae6de8b81e1e9e4bc9e7fc43" Namespace="kube-system" Pod="coredns-674b8bbfcf-fj8v9" WorkloadEndpoint="ip--172--31--27--158-k8s-coredns--674b8bbfcf--fj8v9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--158-k8s-coredns--674b8bbfcf--fj8v9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"91a98ef7-481a-4c28-830d-88f976ac72ee", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 1, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-158", ContainerID:"234ad4ac6fd8f655a9fa68e02d67d5113799b319ae6de8b81e1e9e4bc9e7fc43", Pod:"coredns-674b8bbfcf-fj8v9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic3c969b68b3", MAC:"3e:84:b2:24:e2:ec", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:46.042511 containerd[1990]: 2026-04-25 00:01:46.019 [INFO][4709] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="234ad4ac6fd8f655a9fa68e02d67d5113799b319ae6de8b81e1e9e4bc9e7fc43" Namespace="kube-system" Pod="coredns-674b8bbfcf-fj8v9" WorkloadEndpoint="ip--172--31--27--158-k8s-coredns--674b8bbfcf--fj8v9-eth0" Apr 25 00:01:46.079782 systemd-networkd[1895]: cali1304506ee53: Link UP Apr 25 00:01:46.081713 systemd-networkd[1895]: cali1304506ee53: Gained carrier Apr 25 00:01:46.118161 containerd[1990]: 2026-04-25 00:01:45.555 [ERROR][4702] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 25 00:01:46.118161 containerd[1990]: 2026-04-25 00:01:45.579 [INFO][4702] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--158-k8s-calico--kube--controllers--577c9d7cc5--qb9xm-eth0 calico-kube-controllers-577c9d7cc5- calico-system fd5cb78e-5eeb-47d8-bac1-f83b8ac68c9f 933 0 2026-04-25 00:01:18 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:577c9d7cc5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-27-158 calico-kube-controllers-577c9d7cc5-qb9xm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1304506ee53 [] [] }} 
ContainerID="d953fbfe5d474b69eaba4d40d449ed7d036769fb5b7da4fe0510442bd48012ed" Namespace="calico-system" Pod="calico-kube-controllers-577c9d7cc5-qb9xm" WorkloadEndpoint="ip--172--31--27--158-k8s-calico--kube--controllers--577c9d7cc5--qb9xm-" Apr 25 00:01:46.118161 containerd[1990]: 2026-04-25 00:01:45.579 [INFO][4702] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d953fbfe5d474b69eaba4d40d449ed7d036769fb5b7da4fe0510442bd48012ed" Namespace="calico-system" Pod="calico-kube-controllers-577c9d7cc5-qb9xm" WorkloadEndpoint="ip--172--31--27--158-k8s-calico--kube--controllers--577c9d7cc5--qb9xm-eth0" Apr 25 00:01:46.118161 containerd[1990]: 2026-04-25 00:01:45.856 [INFO][4756] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d953fbfe5d474b69eaba4d40d449ed7d036769fb5b7da4fe0510442bd48012ed" HandleID="k8s-pod-network.d953fbfe5d474b69eaba4d40d449ed7d036769fb5b7da4fe0510442bd48012ed" Workload="ip--172--31--27--158-k8s-calico--kube--controllers--577c9d7cc5--qb9xm-eth0" Apr 25 00:01:46.118161 containerd[1990]: 2026-04-25 00:01:45.916 [INFO][4756] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d953fbfe5d474b69eaba4d40d449ed7d036769fb5b7da4fe0510442bd48012ed" HandleID="k8s-pod-network.d953fbfe5d474b69eaba4d40d449ed7d036769fb5b7da4fe0510442bd48012ed" Workload="ip--172--31--27--158-k8s-calico--kube--controllers--577c9d7cc5--qb9xm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003dc730), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-27-158", "pod":"calico-kube-controllers-577c9d7cc5-qb9xm", "timestamp":"2026-04-25 00:01:45.85693308 +0000 UTC"}, Hostname:"ip-172-31-27-158", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000680000)} Apr 25 00:01:46.118161 containerd[1990]: 2026-04-25 
00:01:45.916 [INFO][4756] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:46.118161 containerd[1990]: 2026-04-25 00:01:45.948 [INFO][4756] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:01:46.118161 containerd[1990]: 2026-04-25 00:01:45.948 [INFO][4756] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-158' Apr 25 00:01:46.118161 containerd[1990]: 2026-04-25 00:01:45.960 [INFO][4756] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d953fbfe5d474b69eaba4d40d449ed7d036769fb5b7da4fe0510442bd48012ed" host="ip-172-31-27-158" Apr 25 00:01:46.118161 containerd[1990]: 2026-04-25 00:01:45.989 [INFO][4756] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-27-158" Apr 25 00:01:46.118161 containerd[1990]: 2026-04-25 00:01:46.006 [INFO][4756] ipam/ipam.go 526: Trying affinity for 192.168.100.0/26 host="ip-172-31-27-158" Apr 25 00:01:46.118161 containerd[1990]: 2026-04-25 00:01:46.010 [INFO][4756] ipam/ipam.go 160: Attempting to load block cidr=192.168.100.0/26 host="ip-172-31-27-158" Apr 25 00:01:46.118161 containerd[1990]: 2026-04-25 00:01:46.021 [INFO][4756] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.100.0/26 host="ip-172-31-27-158" Apr 25 00:01:46.118161 containerd[1990]: 2026-04-25 00:01:46.021 [INFO][4756] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.100.0/26 handle="k8s-pod-network.d953fbfe5d474b69eaba4d40d449ed7d036769fb5b7da4fe0510442bd48012ed" host="ip-172-31-27-158" Apr 25 00:01:46.118161 containerd[1990]: 2026-04-25 00:01:46.029 [INFO][4756] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d953fbfe5d474b69eaba4d40d449ed7d036769fb5b7da4fe0510442bd48012ed Apr 25 00:01:46.118161 containerd[1990]: 2026-04-25 00:01:46.041 [INFO][4756] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.100.0/26 
handle="k8s-pod-network.d953fbfe5d474b69eaba4d40d449ed7d036769fb5b7da4fe0510442bd48012ed" host="ip-172-31-27-158" Apr 25 00:01:46.118161 containerd[1990]: 2026-04-25 00:01:46.061 [INFO][4756] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.100.3/26] block=192.168.100.0/26 handle="k8s-pod-network.d953fbfe5d474b69eaba4d40d449ed7d036769fb5b7da4fe0510442bd48012ed" host="ip-172-31-27-158" Apr 25 00:01:46.118161 containerd[1990]: 2026-04-25 00:01:46.061 [INFO][4756] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.100.3/26] handle="k8s-pod-network.d953fbfe5d474b69eaba4d40d449ed7d036769fb5b7da4fe0510442bd48012ed" host="ip-172-31-27-158" Apr 25 00:01:46.118161 containerd[1990]: 2026-04-25 00:01:46.061 [INFO][4756] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:46.118161 containerd[1990]: 2026-04-25 00:01:46.061 [INFO][4756] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.100.3/26] IPv6=[] ContainerID="d953fbfe5d474b69eaba4d40d449ed7d036769fb5b7da4fe0510442bd48012ed" HandleID="k8s-pod-network.d953fbfe5d474b69eaba4d40d449ed7d036769fb5b7da4fe0510442bd48012ed" Workload="ip--172--31--27--158-k8s-calico--kube--controllers--577c9d7cc5--qb9xm-eth0" Apr 25 00:01:46.119474 containerd[1990]: 2026-04-25 00:01:46.070 [INFO][4702] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d953fbfe5d474b69eaba4d40d449ed7d036769fb5b7da4fe0510442bd48012ed" Namespace="calico-system" Pod="calico-kube-controllers-577c9d7cc5-qb9xm" WorkloadEndpoint="ip--172--31--27--158-k8s-calico--kube--controllers--577c9d7cc5--qb9xm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--158-k8s-calico--kube--controllers--577c9d7cc5--qb9xm-eth0", GenerateName:"calico-kube-controllers-577c9d7cc5-", Namespace:"calico-system", SelfLink:"", UID:"fd5cb78e-5eeb-47d8-bac1-f83b8ac68c9f", ResourceVersion:"933", Generation:0, 
CreationTimestamp:time.Date(2026, time.April, 25, 0, 1, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"577c9d7cc5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-158", ContainerID:"", Pod:"calico-kube-controllers-577c9d7cc5-qb9xm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.100.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1304506ee53", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:46.119474 containerd[1990]: 2026-04-25 00:01:46.073 [INFO][4702] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.3/32] ContainerID="d953fbfe5d474b69eaba4d40d449ed7d036769fb5b7da4fe0510442bd48012ed" Namespace="calico-system" Pod="calico-kube-controllers-577c9d7cc5-qb9xm" WorkloadEndpoint="ip--172--31--27--158-k8s-calico--kube--controllers--577c9d7cc5--qb9xm-eth0" Apr 25 00:01:46.119474 containerd[1990]: 2026-04-25 00:01:46.073 [INFO][4702] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1304506ee53 ContainerID="d953fbfe5d474b69eaba4d40d449ed7d036769fb5b7da4fe0510442bd48012ed" Namespace="calico-system" Pod="calico-kube-controllers-577c9d7cc5-qb9xm" WorkloadEndpoint="ip--172--31--27--158-k8s-calico--kube--controllers--577c9d7cc5--qb9xm-eth0" Apr 25 00:01:46.119474 containerd[1990]: 2026-04-25 
00:01:46.083 [INFO][4702] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d953fbfe5d474b69eaba4d40d449ed7d036769fb5b7da4fe0510442bd48012ed" Namespace="calico-system" Pod="calico-kube-controllers-577c9d7cc5-qb9xm" WorkloadEndpoint="ip--172--31--27--158-k8s-calico--kube--controllers--577c9d7cc5--qb9xm-eth0" Apr 25 00:01:46.119474 containerd[1990]: 2026-04-25 00:01:46.084 [INFO][4702] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d953fbfe5d474b69eaba4d40d449ed7d036769fb5b7da4fe0510442bd48012ed" Namespace="calico-system" Pod="calico-kube-controllers-577c9d7cc5-qb9xm" WorkloadEndpoint="ip--172--31--27--158-k8s-calico--kube--controllers--577c9d7cc5--qb9xm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--158-k8s-calico--kube--controllers--577c9d7cc5--qb9xm-eth0", GenerateName:"calico-kube-controllers-577c9d7cc5-", Namespace:"calico-system", SelfLink:"", UID:"fd5cb78e-5eeb-47d8-bac1-f83b8ac68c9f", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 1, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"577c9d7cc5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-158", ContainerID:"d953fbfe5d474b69eaba4d40d449ed7d036769fb5b7da4fe0510442bd48012ed", Pod:"calico-kube-controllers-577c9d7cc5-qb9xm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.100.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1304506ee53", MAC:"d6:c9:4e:15:02:8c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:46.119474 containerd[1990]: 2026-04-25 00:01:46.109 [INFO][4702] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d953fbfe5d474b69eaba4d40d449ed7d036769fb5b7da4fe0510442bd48012ed" Namespace="calico-system" Pod="calico-kube-controllers-577c9d7cc5-qb9xm" WorkloadEndpoint="ip--172--31--27--158-k8s-calico--kube--controllers--577c9d7cc5--qb9xm-eth0" Apr 25 00:01:46.148997 containerd[1990]: time="2026-04-25T00:01:46.148848641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:01:46.148997 containerd[1990]: time="2026-04-25T00:01:46.148930768Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:01:46.148997 containerd[1990]: time="2026-04-25T00:01:46.148965253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:01:46.149497 containerd[1990]: time="2026-04-25T00:01:46.149384430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:01:46.177022 containerd[1990]: time="2026-04-25T00:01:46.176902578Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:01:46.177606 containerd[1990]: time="2026-04-25T00:01:46.177239638Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:01:46.177606 containerd[1990]: time="2026-04-25T00:01:46.177462041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:01:46.178599 containerd[1990]: time="2026-04-25T00:01:46.178428128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:01:46.226523 systemd-networkd[1895]: calidd0fd434e65: Link UP Apr 25 00:01:46.226745 systemd-networkd[1895]: calidd0fd434e65: Gained carrier Apr 25 00:01:46.276384 containerd[1990]: 2026-04-25 00:01:45.869 [INFO][4773] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8" Apr 25 00:01:46.276384 containerd[1990]: 2026-04-25 00:01:45.878 [INFO][4773] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8" iface="eth0" netns="/var/run/netns/cni-20f3c850-62c8-0e64-a679-098de8061a7a" Apr 25 00:01:46.276384 containerd[1990]: 2026-04-25 00:01:45.879 [INFO][4773] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8" iface="eth0" netns="/var/run/netns/cni-20f3c850-62c8-0e64-a679-098de8061a7a" Apr 25 00:01:46.276384 containerd[1990]: 2026-04-25 00:01:45.886 [INFO][4773] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8" iface="eth0" netns="/var/run/netns/cni-20f3c850-62c8-0e64-a679-098de8061a7a" Apr 25 00:01:46.276384 containerd[1990]: 2026-04-25 00:01:45.886 [INFO][4773] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8" Apr 25 00:01:46.276384 containerd[1990]: 2026-04-25 00:01:45.886 [INFO][4773] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8" Apr 25 00:01:46.276384 containerd[1990]: 2026-04-25 00:01:46.170 [INFO][4817] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8" HandleID="k8s-pod-network.bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8" Workload="ip--172--31--27--158-k8s-whisker--fb5969844--jxrhx-eth0" Apr 25 00:01:46.276384 containerd[1990]: 2026-04-25 00:01:46.171 [INFO][4817] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:46.276384 containerd[1990]: 2026-04-25 00:01:46.192 [INFO][4817] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:01:46.276384 containerd[1990]: 2026-04-25 00:01:46.214 [WARNING][4817] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8" HandleID="k8s-pod-network.bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8" Workload="ip--172--31--27--158-k8s-whisker--fb5969844--jxrhx-eth0" Apr 25 00:01:46.276384 containerd[1990]: 2026-04-25 00:01:46.214 [INFO][4817] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8" HandleID="k8s-pod-network.bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8" Workload="ip--172--31--27--158-k8s-whisker--fb5969844--jxrhx-eth0" Apr 25 00:01:46.276384 containerd[1990]: 2026-04-25 00:01:46.231 [INFO][4817] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:46.276384 containerd[1990]: 2026-04-25 00:01:46.254 [INFO][4773] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8" Apr 25 00:01:46.282787 containerd[1990]: time="2026-04-25T00:01:46.279108022Z" level=info msg="TearDown network for sandbox \"bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8\" successfully" Apr 25 00:01:46.282787 containerd[1990]: time="2026-04-25T00:01:46.279148420Z" level=info msg="StopPodSandbox for \"bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8\" returns successfully" Apr 25 00:01:46.287376 systemd[1]: run-netns-cni\x2d20f3c850\x2d62c8\x2d0e64\x2da679\x2d098de8061a7a.mount: Deactivated successfully. 
Apr 25 00:01:46.288068 containerd[1990]: 2026-04-25 00:01:45.638 [ERROR][4728] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 25 00:01:46.288068 containerd[1990]: 2026-04-25 00:01:45.665 [INFO][4728] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--158-k8s-coredns--674b8bbfcf--7qhnl-eth0 coredns-674b8bbfcf- kube-system e2dc8fd3-53ee-4b31-8d4f-cbcd7d64f683 931 0 2026-04-25 00:01:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-27-158 coredns-674b8bbfcf-7qhnl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidd0fd434e65 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5d28358b93cd623bca7baaef48f9437b6d3a894f0bac1671ba0a7e7c3d77c7dc" Namespace="kube-system" Pod="coredns-674b8bbfcf-7qhnl" WorkloadEndpoint="ip--172--31--27--158-k8s-coredns--674b8bbfcf--7qhnl-" Apr 25 00:01:46.288068 containerd[1990]: 2026-04-25 00:01:45.665 [INFO][4728] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5d28358b93cd623bca7baaef48f9437b6d3a894f0bac1671ba0a7e7c3d77c7dc" Namespace="kube-system" Pod="coredns-674b8bbfcf-7qhnl" WorkloadEndpoint="ip--172--31--27--158-k8s-coredns--674b8bbfcf--7qhnl-eth0" Apr 25 00:01:46.288068 containerd[1990]: 2026-04-25 00:01:45.919 [INFO][4786] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d28358b93cd623bca7baaef48f9437b6d3a894f0bac1671ba0a7e7c3d77c7dc" HandleID="k8s-pod-network.5d28358b93cd623bca7baaef48f9437b6d3a894f0bac1671ba0a7e7c3d77c7dc" Workload="ip--172--31--27--158-k8s-coredns--674b8bbfcf--7qhnl-eth0" Apr 25 00:01:46.288068 containerd[1990]: 2026-04-25 00:01:45.962 [INFO][4786] 
ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5d28358b93cd623bca7baaef48f9437b6d3a894f0bac1671ba0a7e7c3d77c7dc" HandleID="k8s-pod-network.5d28358b93cd623bca7baaef48f9437b6d3a894f0bac1671ba0a7e7c3d77c7dc" Workload="ip--172--31--27--158-k8s-coredns--674b8bbfcf--7qhnl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e3c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-27-158", "pod":"coredns-674b8bbfcf-7qhnl", "timestamp":"2026-04-25 00:01:45.919935872 +0000 UTC"}, Hostname:"ip-172-31-27-158", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000388000)} Apr 25 00:01:46.288068 containerd[1990]: 2026-04-25 00:01:45.963 [INFO][4786] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:46.288068 containerd[1990]: 2026-04-25 00:01:46.062 [INFO][4786] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 25 00:01:46.288068 containerd[1990]: 2026-04-25 00:01:46.062 [INFO][4786] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-158' Apr 25 00:01:46.288068 containerd[1990]: 2026-04-25 00:01:46.070 [INFO][4786] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5d28358b93cd623bca7baaef48f9437b6d3a894f0bac1671ba0a7e7c3d77c7dc" host="ip-172-31-27-158" Apr 25 00:01:46.288068 containerd[1990]: 2026-04-25 00:01:46.097 [INFO][4786] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-27-158" Apr 25 00:01:46.288068 containerd[1990]: 2026-04-25 00:01:46.144 [INFO][4786] ipam/ipam.go 526: Trying affinity for 192.168.100.0/26 host="ip-172-31-27-158" Apr 25 00:01:46.288068 containerd[1990]: 2026-04-25 00:01:46.155 [INFO][4786] ipam/ipam.go 160: Attempting to load block cidr=192.168.100.0/26 host="ip-172-31-27-158" Apr 25 00:01:46.288068 containerd[1990]: 2026-04-25 00:01:46.160 [INFO][4786] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.100.0/26 host="ip-172-31-27-158" Apr 25 00:01:46.288068 containerd[1990]: 2026-04-25 00:01:46.160 [INFO][4786] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.100.0/26 handle="k8s-pod-network.5d28358b93cd623bca7baaef48f9437b6d3a894f0bac1671ba0a7e7c3d77c7dc" host="ip-172-31-27-158" Apr 25 00:01:46.288068 containerd[1990]: 2026-04-25 00:01:46.163 [INFO][4786] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5d28358b93cd623bca7baaef48f9437b6d3a894f0bac1671ba0a7e7c3d77c7dc Apr 25 00:01:46.288068 containerd[1990]: 2026-04-25 00:01:46.173 [INFO][4786] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.100.0/26 handle="k8s-pod-network.5d28358b93cd623bca7baaef48f9437b6d3a894f0bac1671ba0a7e7c3d77c7dc" host="ip-172-31-27-158" Apr 25 00:01:46.288068 containerd[1990]: 2026-04-25 00:01:46.190 [INFO][4786] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.100.4/26] block=192.168.100.0/26 
handle="k8s-pod-network.5d28358b93cd623bca7baaef48f9437b6d3a894f0bac1671ba0a7e7c3d77c7dc" host="ip-172-31-27-158" Apr 25 00:01:46.288068 containerd[1990]: 2026-04-25 00:01:46.192 [INFO][4786] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.100.4/26] handle="k8s-pod-network.5d28358b93cd623bca7baaef48f9437b6d3a894f0bac1671ba0a7e7c3d77c7dc" host="ip-172-31-27-158" Apr 25 00:01:46.288068 containerd[1990]: 2026-04-25 00:01:46.192 [INFO][4786] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:46.288068 containerd[1990]: 2026-04-25 00:01:46.192 [INFO][4786] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.100.4/26] IPv6=[] ContainerID="5d28358b93cd623bca7baaef48f9437b6d3a894f0bac1671ba0a7e7c3d77c7dc" HandleID="k8s-pod-network.5d28358b93cd623bca7baaef48f9437b6d3a894f0bac1671ba0a7e7c3d77c7dc" Workload="ip--172--31--27--158-k8s-coredns--674b8bbfcf--7qhnl-eth0" Apr 25 00:01:46.290016 containerd[1990]: 2026-04-25 00:01:46.213 [INFO][4728] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5d28358b93cd623bca7baaef48f9437b6d3a894f0bac1671ba0a7e7c3d77c7dc" Namespace="kube-system" Pod="coredns-674b8bbfcf-7qhnl" WorkloadEndpoint="ip--172--31--27--158-k8s-coredns--674b8bbfcf--7qhnl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--158-k8s-coredns--674b8bbfcf--7qhnl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e2dc8fd3-53ee-4b31-8d4f-cbcd7d64f683", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 1, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-158", ContainerID:"", Pod:"coredns-674b8bbfcf-7qhnl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidd0fd434e65", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:46.290016 containerd[1990]: 2026-04-25 00:01:46.216 [INFO][4728] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.4/32] ContainerID="5d28358b93cd623bca7baaef48f9437b6d3a894f0bac1671ba0a7e7c3d77c7dc" Namespace="kube-system" Pod="coredns-674b8bbfcf-7qhnl" WorkloadEndpoint="ip--172--31--27--158-k8s-coredns--674b8bbfcf--7qhnl-eth0" Apr 25 00:01:46.290016 containerd[1990]: 2026-04-25 00:01:46.217 [INFO][4728] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidd0fd434e65 ContainerID="5d28358b93cd623bca7baaef48f9437b6d3a894f0bac1671ba0a7e7c3d77c7dc" Namespace="kube-system" Pod="coredns-674b8bbfcf-7qhnl" WorkloadEndpoint="ip--172--31--27--158-k8s-coredns--674b8bbfcf--7qhnl-eth0" Apr 25 00:01:46.290016 containerd[1990]: 2026-04-25 00:01:46.225 [INFO][4728] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5d28358b93cd623bca7baaef48f9437b6d3a894f0bac1671ba0a7e7c3d77c7dc" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-7qhnl" WorkloadEndpoint="ip--172--31--27--158-k8s-coredns--674b8bbfcf--7qhnl-eth0" Apr 25 00:01:46.290016 containerd[1990]: 2026-04-25 00:01:46.239 [INFO][4728] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5d28358b93cd623bca7baaef48f9437b6d3a894f0bac1671ba0a7e7c3d77c7dc" Namespace="kube-system" Pod="coredns-674b8bbfcf-7qhnl" WorkloadEndpoint="ip--172--31--27--158-k8s-coredns--674b8bbfcf--7qhnl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--158-k8s-coredns--674b8bbfcf--7qhnl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e2dc8fd3-53ee-4b31-8d4f-cbcd7d64f683", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 1, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-158", ContainerID:"5d28358b93cd623bca7baaef48f9437b6d3a894f0bac1671ba0a7e7c3d77c7dc", Pod:"coredns-674b8bbfcf-7qhnl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidd0fd434e65", MAC:"ca:bf:e3:6a:2b:f3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:46.290016 containerd[1990]: 2026-04-25 00:01:46.276 [INFO][4728] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5d28358b93cd623bca7baaef48f9437b6d3a894f0bac1671ba0a7e7c3d77c7dc" Namespace="kube-system" Pod="coredns-674b8bbfcf-7qhnl" WorkloadEndpoint="ip--172--31--27--158-k8s-coredns--674b8bbfcf--7qhnl-eth0" Apr 25 00:01:46.304476 systemd[1]: Started cri-containerd-c98103ac2c682c1ead40fd35b61796269df0da01745057e081645829bd18f090.scope - libcontainer container c98103ac2c682c1ead40fd35b61796269df0da01745057e081645829bd18f090. Apr 25 00:01:46.311976 containerd[1990]: time="2026-04-25T00:01:46.303526653Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:01:46.311976 containerd[1990]: time="2026-04-25T00:01:46.303623818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:01:46.311976 containerd[1990]: time="2026-04-25T00:01:46.303648795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:01:46.311976 containerd[1990]: time="2026-04-25T00:01:46.303762986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:01:46.330934 systemd[1]: Started cri-containerd-234ad4ac6fd8f655a9fa68e02d67d5113799b319ae6de8b81e1e9e4bc9e7fc43.scope - libcontainer container 234ad4ac6fd8f655a9fa68e02d67d5113799b319ae6de8b81e1e9e4bc9e7fc43. 
Apr 25 00:01:46.350021 kubelet[3201]: I0425 00:01:46.349973 3201 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e3e11ca-fc9a-44c6-aa39-3461e24bb47d-whisker-ca-bundle\") pod \"5e3e11ca-fc9a-44c6-aa39-3461e24bb47d\" (UID: \"5e3e11ca-fc9a-44c6-aa39-3461e24bb47d\") " Apr 25 00:01:46.355916 kubelet[3201]: I0425 00:01:46.350036 3201 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/5e3e11ca-fc9a-44c6-aa39-3461e24bb47d-nginx-config\") pod \"5e3e11ca-fc9a-44c6-aa39-3461e24bb47d\" (UID: \"5e3e11ca-fc9a-44c6-aa39-3461e24bb47d\") " Apr 25 00:01:46.355916 kubelet[3201]: I0425 00:01:46.350070 3201 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5e3e11ca-fc9a-44c6-aa39-3461e24bb47d-whisker-backend-key-pair\") pod \"5e3e11ca-fc9a-44c6-aa39-3461e24bb47d\" (UID: \"5e3e11ca-fc9a-44c6-aa39-3461e24bb47d\") " Apr 25 00:01:46.355916 kubelet[3201]: I0425 00:01:46.350129 3201 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfdls\" (UniqueName: \"kubernetes.io/projected/5e3e11ca-fc9a-44c6-aa39-3461e24bb47d-kube-api-access-pfdls\") pod \"5e3e11ca-fc9a-44c6-aa39-3461e24bb47d\" (UID: \"5e3e11ca-fc9a-44c6-aa39-3461e24bb47d\") " Apr 25 00:01:46.355290 systemd[1]: Started cri-containerd-d953fbfe5d474b69eaba4d40d449ed7d036769fb5b7da4fe0510442bd48012ed.scope - libcontainer container d953fbfe5d474b69eaba4d40d449ed7d036769fb5b7da4fe0510442bd48012ed. Apr 25 00:01:46.387136 containerd[1990]: time="2026-04-25T00:01:46.386650157Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:01:46.387136 containerd[1990]: time="2026-04-25T00:01:46.386718642Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:01:46.387136 containerd[1990]: time="2026-04-25T00:01:46.386752838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:01:46.387136 containerd[1990]: time="2026-04-25T00:01:46.386926555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:01:46.390072 kubelet[3201]: I0425 00:01:46.390005 3201 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e3e11ca-fc9a-44c6-aa39-3461e24bb47d-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "5e3e11ca-fc9a-44c6-aa39-3461e24bb47d" (UID: "5e3e11ca-fc9a-44c6-aa39-3461e24bb47d"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 25 00:01:46.390224 kubelet[3201]: I0425 00:01:46.375744 3201 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e3e11ca-fc9a-44c6-aa39-3461e24bb47d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "5e3e11ca-fc9a-44c6-aa39-3461e24bb47d" (UID: "5e3e11ca-fc9a-44c6-aa39-3461e24bb47d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 25 00:01:46.407088 kubelet[3201]: I0425 00:01:46.407035 3201 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e3e11ca-fc9a-44c6-aa39-3461e24bb47d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "5e3e11ca-fc9a-44c6-aa39-3461e24bb47d" (UID: "5e3e11ca-fc9a-44c6-aa39-3461e24bb47d"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 25 00:01:46.407665 kubelet[3201]: I0425 00:01:46.407599 3201 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e3e11ca-fc9a-44c6-aa39-3461e24bb47d-kube-api-access-pfdls" (OuterVolumeSpecName: "kube-api-access-pfdls") pod "5e3e11ca-fc9a-44c6-aa39-3461e24bb47d" (UID: "5e3e11ca-fc9a-44c6-aa39-3461e24bb47d"). InnerVolumeSpecName "kube-api-access-pfdls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 25 00:01:46.425668 systemd[1]: Started cri-containerd-5d28358b93cd623bca7baaef48f9437b6d3a894f0bac1671ba0a7e7c3d77c7dc.scope - libcontainer container 5d28358b93cd623bca7baaef48f9437b6d3a894f0bac1671ba0a7e7c3d77c7dc. Apr 25 00:01:46.451042 kubelet[3201]: I0425 00:01:46.450999 3201 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e3e11ca-fc9a-44c6-aa39-3461e24bb47d-whisker-ca-bundle\") on node \"ip-172-31-27-158\" DevicePath \"\"" Apr 25 00:01:46.451042 kubelet[3201]: I0425 00:01:46.451043 3201 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/5e3e11ca-fc9a-44c6-aa39-3461e24bb47d-nginx-config\") on node \"ip-172-31-27-158\" DevicePath \"\"" Apr 25 00:01:46.451042 kubelet[3201]: I0425 00:01:46.451055 3201 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5e3e11ca-fc9a-44c6-aa39-3461e24bb47d-whisker-backend-key-pair\") on node \"ip-172-31-27-158\" DevicePath \"\"" Apr 25 00:01:46.451469 kubelet[3201]: I0425 00:01:46.451068 3201 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pfdls\" (UniqueName: \"kubernetes.io/projected/5e3e11ca-fc9a-44c6-aa39-3461e24bb47d-kube-api-access-pfdls\") on node \"ip-172-31-27-158\" DevicePath \"\"" Apr 25 00:01:46.473527 containerd[1990]: time="2026-04-25T00:01:46.473452443Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-8m4mt,Uid:b2f1eba8-430b-4eb5-88b7-fcf647e52b8e,Namespace:calico-system,Attempt:1,} returns sandbox id \"c98103ac2c682c1ead40fd35b61796269df0da01745057e081645829bd18f090\"" Apr 25 00:01:46.479497 systemd[1]: var-lib-kubelet-pods-5e3e11ca\x2dfc9a\x2d44c6\x2daa39\x2d3461e24bb47d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpfdls.mount: Deactivated successfully. Apr 25 00:01:46.479609 systemd[1]: var-lib-kubelet-pods-5e3e11ca\x2dfc9a\x2d44c6\x2daa39\x2d3461e24bb47d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 25 00:01:46.494723 containerd[1990]: time="2026-04-25T00:01:46.493679631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 25 00:01:46.513358 containerd[1990]: time="2026-04-25T00:01:46.513303635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fj8v9,Uid:91a98ef7-481a-4c28-830d-88f976ac72ee,Namespace:kube-system,Attempt:1,} returns sandbox id \"234ad4ac6fd8f655a9fa68e02d67d5113799b319ae6de8b81e1e9e4bc9e7fc43\"" Apr 25 00:01:46.524837 containerd[1990]: time="2026-04-25T00:01:46.523577467Z" level=info msg="CreateContainer within sandbox \"234ad4ac6fd8f655a9fa68e02d67d5113799b319ae6de8b81e1e9e4bc9e7fc43\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 25 00:01:46.615392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1681981926.mount: Deactivated successfully. 
Apr 25 00:01:46.627711 containerd[1990]: time="2026-04-25T00:01:46.627579309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7qhnl,Uid:e2dc8fd3-53ee-4b31-8d4f-cbcd7d64f683,Namespace:kube-system,Attempt:1,} returns sandbox id \"5d28358b93cd623bca7baaef48f9437b6d3a894f0bac1671ba0a7e7c3d77c7dc\"" Apr 25 00:01:46.661465 containerd[1990]: time="2026-04-25T00:01:46.661295542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-577c9d7cc5-qb9xm,Uid:fd5cb78e-5eeb-47d8-bac1-f83b8ac68c9f,Namespace:calico-system,Attempt:1,} returns sandbox id \"d953fbfe5d474b69eaba4d40d449ed7d036769fb5b7da4fe0510442bd48012ed\"" Apr 25 00:01:46.667938 containerd[1990]: time="2026-04-25T00:01:46.667778310Z" level=info msg="CreateContainer within sandbox \"234ad4ac6fd8f655a9fa68e02d67d5113799b319ae6de8b81e1e9e4bc9e7fc43\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f6dcf57f405412a6e7a5fb22111366ca50916cc063309f3448801063ecad67c4\"" Apr 25 00:01:46.671214 containerd[1990]: time="2026-04-25T00:01:46.671159520Z" level=info msg="StartContainer for \"f6dcf57f405412a6e7a5fb22111366ca50916cc063309f3448801063ecad67c4\"" Apr 25 00:01:46.683325 containerd[1990]: time="2026-04-25T00:01:46.682983909Z" level=info msg="CreateContainer within sandbox \"5d28358b93cd623bca7baaef48f9437b6d3a894f0bac1671ba0a7e7c3d77c7dc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 25 00:01:46.734208 systemd[1]: Removed slice kubepods-besteffort-pod5e3e11ca_fc9a_44c6_aa39_3461e24bb47d.slice - libcontainer container kubepods-besteffort-pod5e3e11ca_fc9a_44c6_aa39_3461e24bb47d.slice. Apr 25 00:01:46.759143 systemd[1]: Started cri-containerd-f6dcf57f405412a6e7a5fb22111366ca50916cc063309f3448801063ecad67c4.scope - libcontainer container f6dcf57f405412a6e7a5fb22111366ca50916cc063309f3448801063ecad67c4. 
Apr 25 00:01:46.763596 containerd[1990]: time="2026-04-25T00:01:46.763459145Z" level=info msg="CreateContainer within sandbox \"5d28358b93cd623bca7baaef48f9437b6d3a894f0bac1671ba0a7e7c3d77c7dc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"178e4da05b92338551205ae7292ce6ef71628c0d1988a27e1b23e0b3c566852b\"" Apr 25 00:01:46.765736 containerd[1990]: time="2026-04-25T00:01:46.765276988Z" level=info msg="StartContainer for \"178e4da05b92338551205ae7292ce6ef71628c0d1988a27e1b23e0b3c566852b\"" Apr 25 00:01:46.872124 containerd[1990]: time="2026-04-25T00:01:46.871443441Z" level=info msg="StartContainer for \"f6dcf57f405412a6e7a5fb22111366ca50916cc063309f3448801063ecad67c4\" returns successfully" Apr 25 00:01:46.885073 systemd[1]: Started cri-containerd-178e4da05b92338551205ae7292ce6ef71628c0d1988a27e1b23e0b3c566852b.scope - libcontainer container 178e4da05b92338551205ae7292ce6ef71628c0d1988a27e1b23e0b3c566852b. Apr 25 00:01:46.932595 systemd[1]: Created slice kubepods-besteffort-pod1f38c29f_9a08_40d5_b03f_00d594455b05.slice - libcontainer container kubepods-besteffort-pod1f38c29f_9a08_40d5_b03f_00d594455b05.slice. 
Apr 25 00:01:46.961970 kubelet[3201]: I0425 00:01:46.961749 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f38c29f-9a08-40d5-b03f-00d594455b05-whisker-ca-bundle\") pod \"whisker-c48c75d7c-gqbmh\" (UID: \"1f38c29f-9a08-40d5-b03f-00d594455b05\") " pod="calico-system/whisker-c48c75d7c-gqbmh" Apr 25 00:01:46.962830 kubelet[3201]: I0425 00:01:46.962372 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj6ch\" (UniqueName: \"kubernetes.io/projected/1f38c29f-9a08-40d5-b03f-00d594455b05-kube-api-access-dj6ch\") pod \"whisker-c48c75d7c-gqbmh\" (UID: \"1f38c29f-9a08-40d5-b03f-00d594455b05\") " pod="calico-system/whisker-c48c75d7c-gqbmh" Apr 25 00:01:46.962830 kubelet[3201]: I0425 00:01:46.962432 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1f38c29f-9a08-40d5-b03f-00d594455b05-whisker-backend-key-pair\") pod \"whisker-c48c75d7c-gqbmh\" (UID: \"1f38c29f-9a08-40d5-b03f-00d594455b05\") " pod="calico-system/whisker-c48c75d7c-gqbmh" Apr 25 00:01:46.962830 kubelet[3201]: I0425 00:01:46.962488 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/1f38c29f-9a08-40d5-b03f-00d594455b05-nginx-config\") pod \"whisker-c48c75d7c-gqbmh\" (UID: \"1f38c29f-9a08-40d5-b03f-00d594455b05\") " pod="calico-system/whisker-c48c75d7c-gqbmh" Apr 25 00:01:46.966575 containerd[1990]: time="2026-04-25T00:01:46.966175016Z" level=info msg="StartContainer for \"178e4da05b92338551205ae7292ce6ef71628c0d1988a27e1b23e0b3c566852b\" returns successfully" Apr 25 00:01:47.125997 systemd-networkd[1895]: calic3c969b68b3: Gained IPv6LL Apr 25 00:01:47.250900 containerd[1990]: time="2026-04-25T00:01:47.250850574Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c48c75d7c-gqbmh,Uid:1f38c29f-9a08-40d5-b03f-00d594455b05,Namespace:calico-system,Attempt:0,}" Apr 25 00:01:47.440406 systemd-networkd[1895]: cali4e0d6c572db: Gained IPv6LL Apr 25 00:01:47.441108 systemd-networkd[1895]: calidd0fd434e65: Gained IPv6LL Apr 25 00:01:47.644578 systemd-networkd[1895]: cali565f5fe350c: Link UP Apr 25 00:01:47.649742 systemd-networkd[1895]: cali565f5fe350c: Gained carrier Apr 25 00:01:47.707474 containerd[1990]: 2026-04-25 00:01:47.419 [ERROR][5202] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 25 00:01:47.707474 containerd[1990]: 2026-04-25 00:01:47.484 [INFO][5202] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--158-k8s-whisker--c48c75d7c--gqbmh-eth0 whisker-c48c75d7c- calico-system 1f38c29f-9a08-40d5-b03f-00d594455b05 982 0 2026-04-25 00:01:46 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:c48c75d7c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-27-158 whisker-c48c75d7c-gqbmh eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali565f5fe350c [] [] }} ContainerID="563fd08ced390cc2818facb89f19c3e2549a9cf1fd3f62268d6ee6c830acc85c" Namespace="calico-system" Pod="whisker-c48c75d7c-gqbmh" WorkloadEndpoint="ip--172--31--27--158-k8s-whisker--c48c75d7c--gqbmh-" Apr 25 00:01:47.707474 containerd[1990]: 2026-04-25 00:01:47.484 [INFO][5202] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="563fd08ced390cc2818facb89f19c3e2549a9cf1fd3f62268d6ee6c830acc85c" Namespace="calico-system" Pod="whisker-c48c75d7c-gqbmh" WorkloadEndpoint="ip--172--31--27--158-k8s-whisker--c48c75d7c--gqbmh-eth0" Apr 25 
00:01:47.707474 containerd[1990]: 2026-04-25 00:01:47.552 [INFO][5216] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="563fd08ced390cc2818facb89f19c3e2549a9cf1fd3f62268d6ee6c830acc85c" HandleID="k8s-pod-network.563fd08ced390cc2818facb89f19c3e2549a9cf1fd3f62268d6ee6c830acc85c" Workload="ip--172--31--27--158-k8s-whisker--c48c75d7c--gqbmh-eth0" Apr 25 00:01:47.707474 containerd[1990]: 2026-04-25 00:01:47.573 [INFO][5216] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="563fd08ced390cc2818facb89f19c3e2549a9cf1fd3f62268d6ee6c830acc85c" HandleID="k8s-pod-network.563fd08ced390cc2818facb89f19c3e2549a9cf1fd3f62268d6ee6c830acc85c" Workload="ip--172--31--27--158-k8s-whisker--c48c75d7c--gqbmh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277af0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-27-158", "pod":"whisker-c48c75d7c-gqbmh", "timestamp":"2026-04-25 00:01:47.552625734 +0000 UTC"}, Hostname:"ip-172-31-27-158", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005662c0)} Apr 25 00:01:47.707474 containerd[1990]: 2026-04-25 00:01:47.573 [INFO][5216] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:47.707474 containerd[1990]: 2026-04-25 00:01:47.573 [INFO][5216] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 25 00:01:47.707474 containerd[1990]: 2026-04-25 00:01:47.573 [INFO][5216] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-158' Apr 25 00:01:47.707474 containerd[1990]: 2026-04-25 00:01:47.579 [INFO][5216] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.563fd08ced390cc2818facb89f19c3e2549a9cf1fd3f62268d6ee6c830acc85c" host="ip-172-31-27-158" Apr 25 00:01:47.707474 containerd[1990]: 2026-04-25 00:01:47.587 [INFO][5216] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-27-158" Apr 25 00:01:47.707474 containerd[1990]: 2026-04-25 00:01:47.596 [INFO][5216] ipam/ipam.go 526: Trying affinity for 192.168.100.0/26 host="ip-172-31-27-158" Apr 25 00:01:47.707474 containerd[1990]: 2026-04-25 00:01:47.601 [INFO][5216] ipam/ipam.go 160: Attempting to load block cidr=192.168.100.0/26 host="ip-172-31-27-158" Apr 25 00:01:47.707474 containerd[1990]: 2026-04-25 00:01:47.605 [INFO][5216] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.100.0/26 host="ip-172-31-27-158" Apr 25 00:01:47.707474 containerd[1990]: 2026-04-25 00:01:47.606 [INFO][5216] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.100.0/26 handle="k8s-pod-network.563fd08ced390cc2818facb89f19c3e2549a9cf1fd3f62268d6ee6c830acc85c" host="ip-172-31-27-158" Apr 25 00:01:47.707474 containerd[1990]: 2026-04-25 00:01:47.608 [INFO][5216] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.563fd08ced390cc2818facb89f19c3e2549a9cf1fd3f62268d6ee6c830acc85c Apr 25 00:01:47.707474 containerd[1990]: 2026-04-25 00:01:47.614 [INFO][5216] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.100.0/26 handle="k8s-pod-network.563fd08ced390cc2818facb89f19c3e2549a9cf1fd3f62268d6ee6c830acc85c" host="ip-172-31-27-158" Apr 25 00:01:47.707474 containerd[1990]: 2026-04-25 00:01:47.628 [INFO][5216] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.100.5/26] block=192.168.100.0/26 
handle="k8s-pod-network.563fd08ced390cc2818facb89f19c3e2549a9cf1fd3f62268d6ee6c830acc85c" host="ip-172-31-27-158" Apr 25 00:01:47.707474 containerd[1990]: 2026-04-25 00:01:47.628 [INFO][5216] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.100.5/26] handle="k8s-pod-network.563fd08ced390cc2818facb89f19c3e2549a9cf1fd3f62268d6ee6c830acc85c" host="ip-172-31-27-158" Apr 25 00:01:47.707474 containerd[1990]: 2026-04-25 00:01:47.628 [INFO][5216] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:47.707474 containerd[1990]: 2026-04-25 00:01:47.628 [INFO][5216] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.100.5/26] IPv6=[] ContainerID="563fd08ced390cc2818facb89f19c3e2549a9cf1fd3f62268d6ee6c830acc85c" HandleID="k8s-pod-network.563fd08ced390cc2818facb89f19c3e2549a9cf1fd3f62268d6ee6c830acc85c" Workload="ip--172--31--27--158-k8s-whisker--c48c75d7c--gqbmh-eth0" Apr 25 00:01:47.714522 containerd[1990]: 2026-04-25 00:01:47.632 [INFO][5202] cni-plugin/k8s.go 418: Populated endpoint ContainerID="563fd08ced390cc2818facb89f19c3e2549a9cf1fd3f62268d6ee6c830acc85c" Namespace="calico-system" Pod="whisker-c48c75d7c-gqbmh" WorkloadEndpoint="ip--172--31--27--158-k8s-whisker--c48c75d7c--gqbmh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--158-k8s-whisker--c48c75d7c--gqbmh-eth0", GenerateName:"whisker-c48c75d7c-", Namespace:"calico-system", SelfLink:"", UID:"1f38c29f-9a08-40d5-b03f-00d594455b05", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 1, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"c48c75d7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-158", ContainerID:"", Pod:"whisker-c48c75d7c-gqbmh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.100.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali565f5fe350c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:47.714522 containerd[1990]: 2026-04-25 00:01:47.632 [INFO][5202] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.5/32] ContainerID="563fd08ced390cc2818facb89f19c3e2549a9cf1fd3f62268d6ee6c830acc85c" Namespace="calico-system" Pod="whisker-c48c75d7c-gqbmh" WorkloadEndpoint="ip--172--31--27--158-k8s-whisker--c48c75d7c--gqbmh-eth0" Apr 25 00:01:47.714522 containerd[1990]: 2026-04-25 00:01:47.632 [INFO][5202] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali565f5fe350c ContainerID="563fd08ced390cc2818facb89f19c3e2549a9cf1fd3f62268d6ee6c830acc85c" Namespace="calico-system" Pod="whisker-c48c75d7c-gqbmh" WorkloadEndpoint="ip--172--31--27--158-k8s-whisker--c48c75d7c--gqbmh-eth0" Apr 25 00:01:47.714522 containerd[1990]: 2026-04-25 00:01:47.647 [INFO][5202] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="563fd08ced390cc2818facb89f19c3e2549a9cf1fd3f62268d6ee6c830acc85c" Namespace="calico-system" Pod="whisker-c48c75d7c-gqbmh" WorkloadEndpoint="ip--172--31--27--158-k8s-whisker--c48c75d7c--gqbmh-eth0" Apr 25 00:01:47.714522 containerd[1990]: 2026-04-25 00:01:47.651 [INFO][5202] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="563fd08ced390cc2818facb89f19c3e2549a9cf1fd3f62268d6ee6c830acc85c" Namespace="calico-system" 
Pod="whisker-c48c75d7c-gqbmh" WorkloadEndpoint="ip--172--31--27--158-k8s-whisker--c48c75d7c--gqbmh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--158-k8s-whisker--c48c75d7c--gqbmh-eth0", GenerateName:"whisker-c48c75d7c-", Namespace:"calico-system", SelfLink:"", UID:"1f38c29f-9a08-40d5-b03f-00d594455b05", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 1, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"c48c75d7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-158", ContainerID:"563fd08ced390cc2818facb89f19c3e2549a9cf1fd3f62268d6ee6c830acc85c", Pod:"whisker-c48c75d7c-gqbmh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.100.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali565f5fe350c", MAC:"2a:32:7f:ac:42:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 25 00:01:47.714522 containerd[1990]: 2026-04-25 00:01:47.683 [INFO][5202] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="563fd08ced390cc2818facb89f19c3e2549a9cf1fd3f62268d6ee6c830acc85c" Namespace="calico-system" Pod="whisker-c48c75d7c-gqbmh" WorkloadEndpoint="ip--172--31--27--158-k8s-whisker--c48c75d7c--gqbmh-eth0"
Apr 25 00:01:47.828669 kubelet[3201]: I0425 00:01:47.826716 3201 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-7qhnl" podStartSLOduration=45.826692677 podStartE2EDuration="45.826692677s" podCreationTimestamp="2026-04-25 00:01:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-25 00:01:47.811314326 +0000 UTC m=+52.098191831" watchObservedRunningTime="2026-04-25 00:01:47.826692677 +0000 UTC m=+52.113570189"
Apr 25 00:01:47.864305 kubelet[3201]: I0425 00:01:47.863775 3201 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-fj8v9" podStartSLOduration=45.863751163 podStartE2EDuration="45.863751163s" podCreationTimestamp="2026-04-25 00:01:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-25 00:01:47.862127713 +0000 UTC m=+52.149005218" watchObservedRunningTime="2026-04-25 00:01:47.863751163 +0000 UTC m=+52.150628672"
Apr 25 00:01:47.904653 containerd[1990]: time="2026-04-25T00:01:47.901380751Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 25 00:01:47.904653 containerd[1990]: time="2026-04-25T00:01:47.903532173Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 25 00:01:47.904653 containerd[1990]: time="2026-04-25T00:01:47.903559565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 25 00:01:47.904653 containerd[1990]: time="2026-04-25T00:01:47.903670089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 25 00:01:47.907859 kubelet[3201]: I0425 00:01:47.907318 3201 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e3e11ca-fc9a-44c6-aa39-3461e24bb47d" path="/var/lib/kubelet/pods/5e3e11ca-fc9a-44c6-aa39-3461e24bb47d/volumes"
Apr 25 00:01:47.967069 systemd[1]: Started cri-containerd-563fd08ced390cc2818facb89f19c3e2549a9cf1fd3f62268d6ee6c830acc85c.scope - libcontainer container 563fd08ced390cc2818facb89f19c3e2549a9cf1fd3f62268d6ee6c830acc85c.
Apr 25 00:01:48.017956 systemd-networkd[1895]: cali1304506ee53: Gained IPv6LL
Apr 25 00:01:48.146523 containerd[1990]: time="2026-04-25T00:01:48.146450751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c48c75d7c-gqbmh,Uid:1f38c29f-9a08-40d5-b03f-00d594455b05,Namespace:calico-system,Attempt:0,} returns sandbox id \"563fd08ced390cc2818facb89f19c3e2549a9cf1fd3f62268d6ee6c830acc85c\""
Apr 25 00:01:48.274192 kernel: calico-node[5154]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Apr 25 00:01:49.051042 systemd-networkd[1895]: cali565f5fe350c: Gained IPv6LL
Apr 25 00:01:49.068016 containerd[1990]: time="2026-04-25T00:01:48.950867054Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502"
Apr 25 00:01:49.178714 containerd[1990]: time="2026-04-25T00:01:49.178653788Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:01:49.187281 containerd[1990]: time="2026-04-25T00:01:49.185867100Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 2.678844636s"
Apr 25 00:01:49.187281 containerd[1990]: time="2026-04-25T00:01:49.185937373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\""
Apr 25 00:01:49.266542 containerd[1990]: time="2026-04-25T00:01:49.266465565Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:01:49.274223 containerd[1990]: time="2026-04-25T00:01:49.274156621Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:01:49.431941 containerd[1990]: time="2026-04-25T00:01:49.431113076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\""
Apr 25 00:01:49.580944 containerd[1990]: time="2026-04-25T00:01:49.580537424Z" level=info msg="CreateContainer within sandbox \"c98103ac2c682c1ead40fd35b61796269df0da01745057e081645829bd18f090\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Apr 25 00:01:49.710279 containerd[1990]: time="2026-04-25T00:01:49.709957182Z" level=info msg="CreateContainer within sandbox \"c98103ac2c682c1ead40fd35b61796269df0da01745057e081645829bd18f090\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b2db875d8b5f9f32b4dac0eef48a865fdc68032e289e022a8436c6ea56d14d6f\""
Apr 25 00:01:49.733290 containerd[1990]: time="2026-04-25T00:01:49.733243443Z" level=info msg="StartContainer for \"b2db875d8b5f9f32b4dac0eef48a865fdc68032e289e022a8436c6ea56d14d6f\""
Apr 25 00:01:49.839154 systemd-networkd[1895]: vxlan.calico: Link UP
Apr 25 00:01:49.840514 systemd-networkd[1895]: vxlan.calico: Gained carrier
Apr 25 00:01:49.956077 systemd[1]: Started cri-containerd-b2db875d8b5f9f32b4dac0eef48a865fdc68032e289e022a8436c6ea56d14d6f.scope - libcontainer container b2db875d8b5f9f32b4dac0eef48a865fdc68032e289e022a8436c6ea56d14d6f.
Apr 25 00:01:50.103390 containerd[1990]: time="2026-04-25T00:01:50.099870758Z" level=info msg="StartContainer for \"b2db875d8b5f9f32b4dac0eef48a865fdc68032e289e022a8436c6ea56d14d6f\" returns successfully"
Apr 25 00:01:51.472495 systemd-networkd[1895]: vxlan.calico: Gained IPv6LL
Apr 25 00:01:52.346965 containerd[1990]: time="2026-04-25T00:01:52.346906329Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:01:52.349154 containerd[1990]: time="2026-04-25T00:01:52.348945672Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348"
Apr 25 00:01:52.352121 containerd[1990]: time="2026-04-25T00:01:52.351612352Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:01:52.355902 containerd[1990]: time="2026-04-25T00:01:52.355859408Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:01:52.356845 containerd[1990]: time="2026-04-25T00:01:52.356775795Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 2.925596803s"
Apr 25 00:01:52.356845 containerd[1990]: time="2026-04-25T00:01:52.356841243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\""
Apr 25 00:01:52.361962 containerd[1990]: time="2026-04-25T00:01:52.361928372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\""
Apr 25 00:01:52.412271 containerd[1990]: time="2026-04-25T00:01:52.412225653Z" level=info msg="CreateContainer within sandbox \"d953fbfe5d474b69eaba4d40d449ed7d036769fb5b7da4fe0510442bd48012ed\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Apr 25 00:01:52.447529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1414306232.mount: Deactivated successfully.
Apr 25 00:01:52.492729 containerd[1990]: time="2026-04-25T00:01:52.492682156Z" level=info msg="CreateContainer within sandbox \"d953fbfe5d474b69eaba4d40d449ed7d036769fb5b7da4fe0510442bd48012ed\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8379b11038cc6d2c598eeaffd03d7ae641456d07418058b37bd8731574e70377\""
Apr 25 00:01:52.493978 containerd[1990]: time="2026-04-25T00:01:52.493940266Z" level=info msg="StartContainer for \"8379b11038cc6d2c598eeaffd03d7ae641456d07418058b37bd8731574e70377\""
Apr 25 00:01:52.548093 systemd[1]: Started cri-containerd-8379b11038cc6d2c598eeaffd03d7ae641456d07418058b37bd8731574e70377.scope - libcontainer container 8379b11038cc6d2c598eeaffd03d7ae641456d07418058b37bd8731574e70377.
Apr 25 00:01:52.607523 containerd[1990]: time="2026-04-25T00:01:52.607385575Z" level=info msg="StartContainer for \"8379b11038cc6d2c598eeaffd03d7ae641456d07418058b37bd8731574e70377\" returns successfully"
Apr 25 00:01:52.655468 kubelet[3201]: I0425 00:01:52.651439 3201 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-577c9d7cc5-qb9xm" podStartSLOduration=28.954920851 podStartE2EDuration="34.643048412s" podCreationTimestamp="2026-04-25 00:01:18 +0000 UTC" firstStartedPulling="2026-04-25 00:01:46.673266977 +0000 UTC m=+50.960144464" lastFinishedPulling="2026-04-25 00:01:52.361394526 +0000 UTC m=+56.648272025" observedRunningTime="2026-04-25 00:01:52.642206269 +0000 UTC m=+56.929083776" watchObservedRunningTime="2026-04-25 00:01:52.643048412 +0000 UTC m=+56.929925917"
Apr 25 00:01:53.486704 ntpd[1951]: Listen normally on 8 vxlan.calico 192.168.100.0:123
Apr 25 00:01:53.487049 ntpd[1951]: Listen normally on 9 cali4e0d6c572db [fe80::ecee:eeff:feee:eeee%4]:123
Apr 25 00:01:53.491922 ntpd[1951]: 25 Apr 00:01:53 ntpd[1951]: Listen normally on 8 vxlan.calico 192.168.100.0:123
Apr 25 00:01:53.491922 ntpd[1951]: 25 Apr 00:01:53 ntpd[1951]: Listen normally on 9 cali4e0d6c572db [fe80::ecee:eeff:feee:eeee%4]:123
Apr 25 00:01:53.491922 ntpd[1951]: 25 Apr 00:01:53 ntpd[1951]: Listen normally on 10 calic3c969b68b3 [fe80::ecee:eeff:feee:eeee%5]:123
Apr 25 00:01:53.491922 ntpd[1951]: 25 Apr 00:01:53 ntpd[1951]: Listen normally on 11 cali1304506ee53 [fe80::ecee:eeff:feee:eeee%6]:123
Apr 25 00:01:53.491922 ntpd[1951]: 25 Apr 00:01:53 ntpd[1951]: Listen normally on 12 calidd0fd434e65 [fe80::ecee:eeff:feee:eeee%7]:123
Apr 25 00:01:53.491922 ntpd[1951]: 25 Apr 00:01:53 ntpd[1951]: Listen normally on 13 cali565f5fe350c [fe80::ecee:eeff:feee:eeee%8]:123
Apr 25 00:01:53.491922 ntpd[1951]: 25 Apr 00:01:53 ntpd[1951]: Listen normally on 14 vxlan.calico [fe80::64ae:c1ff:fe54:28ce%9]:123
Apr 25 00:01:53.487116 ntpd[1951]: Listen normally on 10 calic3c969b68b3 [fe80::ecee:eeff:feee:eeee%5]:123
Apr 25 00:01:53.487156 ntpd[1951]: Listen normally on 11 cali1304506ee53 [fe80::ecee:eeff:feee:eeee%6]:123
Apr 25 00:01:53.487193 ntpd[1951]: Listen normally on 12 calidd0fd434e65 [fe80::ecee:eeff:feee:eeee%7]:123
Apr 25 00:01:53.487229 ntpd[1951]: Listen normally on 13 cali565f5fe350c [fe80::ecee:eeff:feee:eeee%8]:123
Apr 25 00:01:53.487274 ntpd[1951]: Listen normally on 14 vxlan.calico [fe80::64ae:c1ff:fe54:28ce%9]:123
Apr 25 00:01:53.798690 containerd[1990]: time="2026-04-25T00:01:53.798543301Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:01:53.800395 containerd[1990]: time="2026-04-25T00:01:53.800100800Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889"
Apr 25 00:01:53.802719 containerd[1990]: time="2026-04-25T00:01:53.802396003Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:01:53.806438 containerd[1990]: time="2026-04-25T00:01:53.806393037Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:01:53.807219 containerd[1990]: time="2026-04-25T00:01:53.807175813Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.44496299s"
Apr 25 00:01:53.807331 containerd[1990]: time="2026-04-25T00:01:53.807223980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\""
Apr 25 00:01:53.808841 containerd[1990]: time="2026-04-25T00:01:53.808628439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\""
Apr 25 00:01:53.814797 containerd[1990]: time="2026-04-25T00:01:53.814757950Z" level=info msg="CreateContainer within sandbox \"563fd08ced390cc2818facb89f19c3e2549a9cf1fd3f62268d6ee6c830acc85c\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Apr 25 00:01:53.845146 containerd[1990]: time="2026-04-25T00:01:53.845098221Z" level=info msg="CreateContainer within sandbox \"563fd08ced390cc2818facb89f19c3e2549a9cf1fd3f62268d6ee6c830acc85c\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"0e0db1389a8883ee4710afbe471a84b9f8e50b3b4da5d9e931ead833abe74f54\""
Apr 25 00:01:53.850160 containerd[1990]: time="2026-04-25T00:01:53.846999552Z" level=info msg="StartContainer for \"0e0db1389a8883ee4710afbe471a84b9f8e50b3b4da5d9e931ead833abe74f54\""
Apr 25 00:01:53.913065 systemd[1]: Started cri-containerd-0e0db1389a8883ee4710afbe471a84b9f8e50b3b4da5d9e931ead833abe74f54.scope - libcontainer container 0e0db1389a8883ee4710afbe471a84b9f8e50b3b4da5d9e931ead833abe74f54.
Apr 25 00:01:53.973145 containerd[1990]: time="2026-04-25T00:01:53.973078516Z" level=info msg="StartContainer for \"0e0db1389a8883ee4710afbe471a84b9f8e50b3b4da5d9e931ead833abe74f54\" returns successfully"
Apr 25 00:01:55.465501 containerd[1990]: time="2026-04-25T00:01:55.464838840Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:01:55.467799 containerd[1990]: time="2026-04-25T00:01:55.467536715Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317"
Apr 25 00:01:55.470655 containerd[1990]: time="2026-04-25T00:01:55.470141935Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:01:55.474467 containerd[1990]: time="2026-04-25T00:01:55.474423147Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:01:55.475437 containerd[1990]: time="2026-04-25T00:01:55.475390840Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.666707527s"
Apr 25 00:01:55.475556 containerd[1990]: time="2026-04-25T00:01:55.475438813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\""
Apr 25 00:01:55.477832 containerd[1990]: time="2026-04-25T00:01:55.477780507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\""
Apr 25 00:01:55.484412 containerd[1990]: time="2026-04-25T00:01:55.484367839Z" level=info msg="CreateContainer within sandbox \"c98103ac2c682c1ead40fd35b61796269df0da01745057e081645829bd18f090\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Apr 25 00:01:55.516202 containerd[1990]: time="2026-04-25T00:01:55.516147740Z" level=info msg="CreateContainer within sandbox \"c98103ac2c682c1ead40fd35b61796269df0da01745057e081645829bd18f090\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"60c4cb785b873341fbf5229094b4033bd587bb04f066c1f7a221bb8761f0faea\""
Apr 25 00:01:55.517551 containerd[1990]: time="2026-04-25T00:01:55.517463288Z" level=info msg="StartContainer for \"60c4cb785b873341fbf5229094b4033bd587bb04f066c1f7a221bb8761f0faea\""
Apr 25 00:01:55.663370 systemd[1]: Started cri-containerd-60c4cb785b873341fbf5229094b4033bd587bb04f066c1f7a221bb8761f0faea.scope - libcontainer container 60c4cb785b873341fbf5229094b4033bd587bb04f066c1f7a221bb8761f0faea.
Apr 25 00:01:55.810536 containerd[1990]: time="2026-04-25T00:01:55.810490851Z" level=info msg="StartContainer for \"60c4cb785b873341fbf5229094b4033bd587bb04f066c1f7a221bb8761f0faea\" returns successfully"
Apr 25 00:01:56.047318 systemd[1]: Started sshd@7-172.31.27.158:22-4.175.71.9:35414.service - OpenSSH per-connection server daemon (4.175.71.9:35414).
Apr 25 00:01:56.080877 containerd[1990]: time="2026-04-25T00:01:56.080826645Z" level=info msg="StopPodSandbox for \"ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb\""
Apr 25 00:01:56.179766 containerd[1990]: time="2026-04-25T00:01:56.179721883Z" level=info msg="StopPodSandbox for \"4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027\""
Apr 25 00:01:56.391427 kubelet[3201]: I0425 00:01:56.387782 3201 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Apr 25 00:01:56.391427 kubelet[3201]: I0425 00:01:56.391276 3201 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Apr 25 00:01:56.716752 kubelet[3201]: I0425 00:01:56.716280 3201 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-8m4mt" podStartSLOduration=29.728873543 podStartE2EDuration="38.716254312s" podCreationTimestamp="2026-04-25 00:01:18 +0000 UTC" firstStartedPulling="2026-04-25 00:01:46.489596101 +0000 UTC m=+50.776473596" lastFinishedPulling="2026-04-25 00:01:55.47697688 +0000 UTC m=+59.763854365" observedRunningTime="2026-04-25 00:01:56.715436261 +0000 UTC m=+61.002313746" watchObservedRunningTime="2026-04-25 00:01:56.716254312 +0000 UTC m=+61.003131881"
Apr 25 00:01:56.903207 containerd[1990]: time="2026-04-25T00:01:56.902976406Z" level=info msg="StopPodSandbox for \"8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3\""
Apr 25 00:01:57.167788 sshd[5639]: Accepted publickey for core from 4.175.71.9 port 35414 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg
Apr 25 00:01:57.174946 sshd[5639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 00:01:57.190122 systemd-logind[1965]: New session 8 of user core.
Apr 25 00:01:57.195302 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 25 00:01:57.247182 containerd[1990]: 2026-04-25 00:01:56.773 [WARNING][5658] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--158-k8s-coredns--674b8bbfcf--fj8v9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"91a98ef7-481a-4c28-830d-88f976ac72ee", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 1, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-158", ContainerID:"234ad4ac6fd8f655a9fa68e02d67d5113799b319ae6de8b81e1e9e4bc9e7fc43", Pod:"coredns-674b8bbfcf-fj8v9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic3c969b68b3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 25 00:01:57.247182 containerd[1990]: 2026-04-25 00:01:56.781 [INFO][5658] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027"
Apr 25 00:01:57.247182 containerd[1990]: 2026-04-25 00:01:56.781 [INFO][5658] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027" iface="eth0" netns=""
Apr 25 00:01:57.247182 containerd[1990]: 2026-04-25 00:01:56.781 [INFO][5658] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027"
Apr 25 00:01:57.247182 containerd[1990]: 2026-04-25 00:01:56.781 [INFO][5658] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027"
Apr 25 00:01:57.247182 containerd[1990]: 2026-04-25 00:01:57.187 [INFO][5675] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027" HandleID="k8s-pod-network.4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027" Workload="ip--172--31--27--158-k8s-coredns--674b8bbfcf--fj8v9-eth0"
Apr 25 00:01:57.247182 containerd[1990]: 2026-04-25 00:01:57.194 [INFO][5675] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 25 00:01:57.247182 containerd[1990]: 2026-04-25 00:01:57.194 [INFO][5675] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 25 00:01:57.247182 containerd[1990]: 2026-04-25 00:01:57.231 [WARNING][5675] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027" HandleID="k8s-pod-network.4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027" Workload="ip--172--31--27--158-k8s-coredns--674b8bbfcf--fj8v9-eth0"
Apr 25 00:01:57.247182 containerd[1990]: 2026-04-25 00:01:57.231 [INFO][5675] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027" HandleID="k8s-pod-network.4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027" Workload="ip--172--31--27--158-k8s-coredns--674b8bbfcf--fj8v9-eth0"
Apr 25 00:01:57.247182 containerd[1990]: 2026-04-25 00:01:57.235 [INFO][5675] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 25 00:01:57.247182 containerd[1990]: 2026-04-25 00:01:57.244 [INFO][5658] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027"
Apr 25 00:01:57.250575 containerd[1990]: time="2026-04-25T00:01:57.247280277Z" level=info msg="TearDown network for sandbox \"4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027\" successfully"
Apr 25 00:01:57.250575 containerd[1990]: time="2026-04-25T00:01:57.247318202Z" level=info msg="StopPodSandbox for \"4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027\" returns successfully"
Apr 25 00:01:57.300006 containerd[1990]: 2026-04-25 00:01:57.024 [INFO][5693] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3"
Apr 25 00:01:57.300006 containerd[1990]: 2026-04-25 00:01:57.027 [INFO][5693] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3" iface="eth0" netns="/var/run/netns/cni-a3f9b5c2-2cb4-7a07-407f-4002e4f684f7"
Apr 25 00:01:57.300006 containerd[1990]: 2026-04-25 00:01:57.028 [INFO][5693] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3" iface="eth0" netns="/var/run/netns/cni-a3f9b5c2-2cb4-7a07-407f-4002e4f684f7"
Apr 25 00:01:57.300006 containerd[1990]: 2026-04-25 00:01:57.028 [INFO][5693] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3" iface="eth0" netns="/var/run/netns/cni-a3f9b5c2-2cb4-7a07-407f-4002e4f684f7"
Apr 25 00:01:57.300006 containerd[1990]: 2026-04-25 00:01:57.028 [INFO][5693] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3"
Apr 25 00:01:57.300006 containerd[1990]: 2026-04-25 00:01:57.028 [INFO][5693] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3"
Apr 25 00:01:57.300006 containerd[1990]: 2026-04-25 00:01:57.192 [INFO][5700] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3" HandleID="k8s-pod-network.8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3" Workload="ip--172--31--27--158-k8s-goldmane--5b85766d88--rcpfl-eth0"
Apr 25 00:01:57.300006 containerd[1990]: 2026-04-25 00:01:57.193 [INFO][5700] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 25 00:01:57.300006 containerd[1990]: 2026-04-25 00:01:57.235 [INFO][5700] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 25 00:01:57.300006 containerd[1990]: 2026-04-25 00:01:57.249 [WARNING][5700] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3" HandleID="k8s-pod-network.8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3" Workload="ip--172--31--27--158-k8s-goldmane--5b85766d88--rcpfl-eth0"
Apr 25 00:01:57.300006 containerd[1990]: 2026-04-25 00:01:57.249 [INFO][5700] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3" HandleID="k8s-pod-network.8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3" Workload="ip--172--31--27--158-k8s-goldmane--5b85766d88--rcpfl-eth0"
Apr 25 00:01:57.300006 containerd[1990]: 2026-04-25 00:01:57.252 [INFO][5700] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 25 00:01:57.300006 containerd[1990]: 2026-04-25 00:01:57.294 [INFO][5693] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3"
Apr 25 00:01:57.302581 containerd[1990]: time="2026-04-25T00:01:57.300825536Z" level=info msg="TearDown network for sandbox \"8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3\" successfully"
Apr 25 00:01:57.302581 containerd[1990]: time="2026-04-25T00:01:57.300862279Z" level=info msg="StopPodSandbox for \"8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3\" returns successfully"
Apr 25 00:01:57.310430 systemd[1]: run-netns-cni\x2da3f9b5c2\x2d2cb4\x2d7a07\x2d407f\x2d4002e4f684f7.mount: Deactivated successfully.
Apr 25 00:01:57.361471 containerd[1990]: 2026-04-25 00:01:56.800 [INFO][5659] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb"
Apr 25 00:01:57.361471 containerd[1990]: 2026-04-25 00:01:56.801 [INFO][5659] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb" iface="eth0" netns="/var/run/netns/cni-3e4ae0fd-aae5-94ea-3431-fdd1de8bcbf7"
Apr 25 00:01:57.361471 containerd[1990]: 2026-04-25 00:01:56.801 [INFO][5659] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb" iface="eth0" netns="/var/run/netns/cni-3e4ae0fd-aae5-94ea-3431-fdd1de8bcbf7"
Apr 25 00:01:57.361471 containerd[1990]: 2026-04-25 00:01:56.805 [INFO][5659] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb" iface="eth0" netns="/var/run/netns/cni-3e4ae0fd-aae5-94ea-3431-fdd1de8bcbf7"
Apr 25 00:01:57.361471 containerd[1990]: 2026-04-25 00:01:56.805 [INFO][5659] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb"
Apr 25 00:01:57.361471 containerd[1990]: 2026-04-25 00:01:56.805 [INFO][5659] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb"
Apr 25 00:01:57.361471 containerd[1990]: 2026-04-25 00:01:57.187 [INFO][5677] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb" HandleID="k8s-pod-network.ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb" Workload="ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--jntps-eth0"
Apr 25 00:01:57.361471 containerd[1990]: 2026-04-25 00:01:57.201 [INFO][5677] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 25 00:01:57.361471 containerd[1990]: 2026-04-25 00:01:57.252 [INFO][5677] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 25 00:01:57.361471 containerd[1990]: 2026-04-25 00:01:57.318 [WARNING][5677] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb" HandleID="k8s-pod-network.ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb" Workload="ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--jntps-eth0"
Apr 25 00:01:57.361471 containerd[1990]: 2026-04-25 00:01:57.318 [INFO][5677] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb" HandleID="k8s-pod-network.ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb" Workload="ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--jntps-eth0"
Apr 25 00:01:57.361471 containerd[1990]: 2026-04-25 00:01:57.327 [INFO][5677] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 25 00:01:57.361471 containerd[1990]: 2026-04-25 00:01:57.334 [INFO][5659] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb"
Apr 25 00:01:57.369047 containerd[1990]: time="2026-04-25T00:01:57.369001119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-rcpfl,Uid:22e1b7c8-1a20-4649-bf8c-3b2a82e5872a,Namespace:calico-system,Attempt:1,}"
Apr 25 00:01:57.372393 containerd[1990]: time="2026-04-25T00:01:57.371106344Z" level=info msg="TearDown network for sandbox \"ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb\" successfully"
Apr 25 00:01:57.372393 containerd[1990]: time="2026-04-25T00:01:57.371152227Z" level=info msg="StopPodSandbox for \"ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb\" returns successfully"
Apr 25 00:01:57.377454 systemd[1]: run-netns-cni\x2d3e4ae0fd\x2daae5\x2d94ea\x2d3431\x2dfdd1de8bcbf7.mount: Deactivated successfully.
Apr 25 00:01:57.416918 containerd[1990]: time="2026-04-25T00:01:57.416853744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69c6c7bbcf-jntps,Uid:fbf459d9-c3ed-42cc-9f78-25b84022bdb0,Namespace:calico-system,Attempt:1,}"
Apr 25 00:01:57.427709 containerd[1990]: time="2026-04-25T00:01:57.427475340Z" level=info msg="RemovePodSandbox for \"4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027\""
Apr 25 00:01:57.427709 containerd[1990]: time="2026-04-25T00:01:57.427529962Z" level=info msg="Forcibly stopping sandbox \"4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027\""
Apr 25 00:01:57.845144 containerd[1990]: 2026-04-25 00:01:57.633 [WARNING][5720] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--158-k8s-coredns--674b8bbfcf--fj8v9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"91a98ef7-481a-4c28-830d-88f976ac72ee", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 1, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-158", ContainerID:"234ad4ac6fd8f655a9fa68e02d67d5113799b319ae6de8b81e1e9e4bc9e7fc43", Pod:"coredns-674b8bbfcf-fj8v9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic3c969b68b3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 25 00:01:57.845144 containerd[1990]: 2026-04-25 00:01:57.634 [INFO][5720] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027"
Apr 25 00:01:57.845144 containerd[1990]: 2026-04-25 00:01:57.634 [INFO][5720] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027" iface="eth0" netns=""
Apr 25 00:01:57.845144 containerd[1990]: 2026-04-25 00:01:57.634 [INFO][5720] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027"
Apr 25 00:01:57.845144 containerd[1990]: 2026-04-25 00:01:57.634 [INFO][5720] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027"
Apr 25 00:01:57.845144 containerd[1990]: 2026-04-25 00:01:57.772 [INFO][5749] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027" HandleID="k8s-pod-network.4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027" Workload="ip--172--31--27--158-k8s-coredns--674b8bbfcf--fj8v9-eth0"
Apr 25 00:01:57.845144 containerd[1990]: 2026-04-25 00:01:57.775 [INFO][5749] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 25 00:01:57.845144 containerd[1990]: 2026-04-25 00:01:57.775 [INFO][5749] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 25 00:01:57.845144 containerd[1990]: 2026-04-25 00:01:57.803 [WARNING][5749] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist.
Ignoring ContainerID="4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027" HandleID="k8s-pod-network.4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027" Workload="ip--172--31--27--158-k8s-coredns--674b8bbfcf--fj8v9-eth0" Apr 25 00:01:57.845144 containerd[1990]: 2026-04-25 00:01:57.803 [INFO][5749] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027" HandleID="k8s-pod-network.4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027" Workload="ip--172--31--27--158-k8s-coredns--674b8bbfcf--fj8v9-eth0" Apr 25 00:01:57.845144 containerd[1990]: 2026-04-25 00:01:57.809 [INFO][5749] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:57.845144 containerd[1990]: 2026-04-25 00:01:57.831 [INFO][5720] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027" Apr 25 00:01:57.845144 containerd[1990]: time="2026-04-25T00:01:57.844300850Z" level=info msg="TearDown network for sandbox \"4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027\" successfully" Apr 25 00:01:57.874120 containerd[1990]: time="2026-04-25T00:01:57.874044194Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 25 00:01:57.939503 containerd[1990]: time="2026-04-25T00:01:57.937742821Z" level=info msg="RemovePodSandbox \"4f667756dce6ad3886f7002226d0d3b9226a1a4b4af4177a228393d4732d4027\" returns successfully" Apr 25 00:01:57.939503 containerd[1990]: time="2026-04-25T00:01:57.939092874Z" level=info msg="StopPodSandbox for \"6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842\"" Apr 25 00:01:57.954478 (udev-worker)[5778]: Network interface NamePolicy= disabled on kernel command line. Apr 25 00:01:57.969316 systemd-networkd[1895]: cali20080596c14: Link UP Apr 25 00:01:57.974449 systemd-networkd[1895]: cali20080596c14: Gained carrier Apr 25 00:01:58.054192 containerd[1990]: 2026-04-25 00:01:57.662 [INFO][5725] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--jntps-eth0 calico-apiserver-69c6c7bbcf- calico-system fbf459d9-c3ed-42cc-9f78-25b84022bdb0 1079 0 2026-04-25 00:01:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69c6c7bbcf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-27-158 calico-apiserver-69c6c7bbcf-jntps eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali20080596c14 [] [] }} ContainerID="0f249ea0d959610296b32bc44ae27a2ad39468fb3f27593840922b62aa9c0dc2" Namespace="calico-system" Pod="calico-apiserver-69c6c7bbcf-jntps" WorkloadEndpoint="ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--jntps-" Apr 25 00:01:58.054192 containerd[1990]: 2026-04-25 00:01:57.663 [INFO][5725] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0f249ea0d959610296b32bc44ae27a2ad39468fb3f27593840922b62aa9c0dc2" Namespace="calico-system" Pod="calico-apiserver-69c6c7bbcf-jntps" 
WorkloadEndpoint="ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--jntps-eth0" Apr 25 00:01:58.054192 containerd[1990]: 2026-04-25 00:01:57.781 [INFO][5755] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0f249ea0d959610296b32bc44ae27a2ad39468fb3f27593840922b62aa9c0dc2" HandleID="k8s-pod-network.0f249ea0d959610296b32bc44ae27a2ad39468fb3f27593840922b62aa9c0dc2" Workload="ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--jntps-eth0" Apr 25 00:01:58.054192 containerd[1990]: 2026-04-25 00:01:57.820 [INFO][5755] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0f249ea0d959610296b32bc44ae27a2ad39468fb3f27593840922b62aa9c0dc2" HandleID="k8s-pod-network.0f249ea0d959610296b32bc44ae27a2ad39468fb3f27593840922b62aa9c0dc2" Workload="ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--jntps-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000122a80), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-27-158", "pod":"calico-apiserver-69c6c7bbcf-jntps", "timestamp":"2026-04-25 00:01:57.78134546 +0000 UTC"}, Hostname:"ip-172-31-27-158", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000188580)} Apr 25 00:01:58.054192 containerd[1990]: 2026-04-25 00:01:57.820 [INFO][5755] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:58.054192 containerd[1990]: 2026-04-25 00:01:57.820 [INFO][5755] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 25 00:01:58.054192 containerd[1990]: 2026-04-25 00:01:57.820 [INFO][5755] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-158' Apr 25 00:01:58.054192 containerd[1990]: 2026-04-25 00:01:57.854 [INFO][5755] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0f249ea0d959610296b32bc44ae27a2ad39468fb3f27593840922b62aa9c0dc2" host="ip-172-31-27-158" Apr 25 00:01:58.054192 containerd[1990]: 2026-04-25 00:01:57.873 [INFO][5755] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-27-158" Apr 25 00:01:58.054192 containerd[1990]: 2026-04-25 00:01:57.882 [INFO][5755] ipam/ipam.go 526: Trying affinity for 192.168.100.0/26 host="ip-172-31-27-158" Apr 25 00:01:58.054192 containerd[1990]: 2026-04-25 00:01:57.885 [INFO][5755] ipam/ipam.go 160: Attempting to load block cidr=192.168.100.0/26 host="ip-172-31-27-158" Apr 25 00:01:58.054192 containerd[1990]: 2026-04-25 00:01:57.889 [INFO][5755] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.100.0/26 host="ip-172-31-27-158" Apr 25 00:01:58.054192 containerd[1990]: 2026-04-25 00:01:57.889 [INFO][5755] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.100.0/26 handle="k8s-pod-network.0f249ea0d959610296b32bc44ae27a2ad39468fb3f27593840922b62aa9c0dc2" host="ip-172-31-27-158" Apr 25 00:01:58.054192 containerd[1990]: 2026-04-25 00:01:57.893 [INFO][5755] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0f249ea0d959610296b32bc44ae27a2ad39468fb3f27593840922b62aa9c0dc2 Apr 25 00:01:58.054192 containerd[1990]: 2026-04-25 00:01:57.906 [INFO][5755] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.100.0/26 handle="k8s-pod-network.0f249ea0d959610296b32bc44ae27a2ad39468fb3f27593840922b62aa9c0dc2" host="ip-172-31-27-158" Apr 25 00:01:58.054192 containerd[1990]: 2026-04-25 00:01:57.926 [INFO][5755] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.100.6/26] block=192.168.100.0/26 
handle="k8s-pod-network.0f249ea0d959610296b32bc44ae27a2ad39468fb3f27593840922b62aa9c0dc2" host="ip-172-31-27-158" Apr 25 00:01:58.054192 containerd[1990]: 2026-04-25 00:01:57.926 [INFO][5755] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.100.6/26] handle="k8s-pod-network.0f249ea0d959610296b32bc44ae27a2ad39468fb3f27593840922b62aa9c0dc2" host="ip-172-31-27-158" Apr 25 00:01:58.054192 containerd[1990]: 2026-04-25 00:01:57.927 [INFO][5755] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:58.054192 containerd[1990]: 2026-04-25 00:01:57.927 [INFO][5755] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.100.6/26] IPv6=[] ContainerID="0f249ea0d959610296b32bc44ae27a2ad39468fb3f27593840922b62aa9c0dc2" HandleID="k8s-pod-network.0f249ea0d959610296b32bc44ae27a2ad39468fb3f27593840922b62aa9c0dc2" Workload="ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--jntps-eth0" Apr 25 00:01:58.059565 containerd[1990]: 2026-04-25 00:01:57.936 [INFO][5725] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0f249ea0d959610296b32bc44ae27a2ad39468fb3f27593840922b62aa9c0dc2" Namespace="calico-system" Pod="calico-apiserver-69c6c7bbcf-jntps" WorkloadEndpoint="ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--jntps-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--jntps-eth0", GenerateName:"calico-apiserver-69c6c7bbcf-", Namespace:"calico-system", SelfLink:"", UID:"fbf459d9-c3ed-42cc-9f78-25b84022bdb0", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 1, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69c6c7bbcf", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-158", ContainerID:"", Pod:"calico-apiserver-69c6c7bbcf-jntps", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali20080596c14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:58.059565 containerd[1990]: 2026-04-25 00:01:57.936 [INFO][5725] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.6/32] ContainerID="0f249ea0d959610296b32bc44ae27a2ad39468fb3f27593840922b62aa9c0dc2" Namespace="calico-system" Pod="calico-apiserver-69c6c7bbcf-jntps" WorkloadEndpoint="ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--jntps-eth0" Apr 25 00:01:58.059565 containerd[1990]: 2026-04-25 00:01:57.936 [INFO][5725] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali20080596c14 ContainerID="0f249ea0d959610296b32bc44ae27a2ad39468fb3f27593840922b62aa9c0dc2" Namespace="calico-system" Pod="calico-apiserver-69c6c7bbcf-jntps" WorkloadEndpoint="ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--jntps-eth0" Apr 25 00:01:58.059565 containerd[1990]: 2026-04-25 00:01:57.985 [INFO][5725] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0f249ea0d959610296b32bc44ae27a2ad39468fb3f27593840922b62aa9c0dc2" Namespace="calico-system" Pod="calico-apiserver-69c6c7bbcf-jntps" WorkloadEndpoint="ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--jntps-eth0" Apr 25 00:01:58.059565 containerd[1990]: 2026-04-25 00:01:57.997 [INFO][5725] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0f249ea0d959610296b32bc44ae27a2ad39468fb3f27593840922b62aa9c0dc2" Namespace="calico-system" Pod="calico-apiserver-69c6c7bbcf-jntps" WorkloadEndpoint="ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--jntps-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--jntps-eth0", GenerateName:"calico-apiserver-69c6c7bbcf-", Namespace:"calico-system", SelfLink:"", UID:"fbf459d9-c3ed-42cc-9f78-25b84022bdb0", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 1, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69c6c7bbcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-158", ContainerID:"0f249ea0d959610296b32bc44ae27a2ad39468fb3f27593840922b62aa9c0dc2", Pod:"calico-apiserver-69c6c7bbcf-jntps", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali20080596c14", MAC:"36:9d:cb:2f:c2:7f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:58.059565 containerd[1990]: 2026-04-25 00:01:58.045 [INFO][5725] cni-plugin/k8s.go 532: 
Wrote updated endpoint to datastore ContainerID="0f249ea0d959610296b32bc44ae27a2ad39468fb3f27593840922b62aa9c0dc2" Namespace="calico-system" Pod="calico-apiserver-69c6c7bbcf-jntps" WorkloadEndpoint="ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--jntps-eth0" Apr 25 00:01:58.164116 systemd-networkd[1895]: cali07984bcb261: Link UP Apr 25 00:01:58.172399 systemd-networkd[1895]: cali07984bcb261: Gained carrier Apr 25 00:01:58.243405 containerd[1990]: 2026-04-25 00:01:57.669 [INFO][5728] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--158-k8s-goldmane--5b85766d88--rcpfl-eth0 goldmane-5b85766d88- calico-system 22e1b7c8-1a20-4649-bf8c-3b2a82e5872a 1082 0 2026-04-25 00:01:17 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-27-158 goldmane-5b85766d88-rcpfl eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali07984bcb261 [] [] }} ContainerID="b9848d6bc331214d9e2538af2e86f833beb527c2ac2bc1db765e00c779106e9a" Namespace="calico-system" Pod="goldmane-5b85766d88-rcpfl" WorkloadEndpoint="ip--172--31--27--158-k8s-goldmane--5b85766d88--rcpfl-" Apr 25 00:01:58.243405 containerd[1990]: 2026-04-25 00:01:57.670 [INFO][5728] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b9848d6bc331214d9e2538af2e86f833beb527c2ac2bc1db765e00c779106e9a" Namespace="calico-system" Pod="goldmane-5b85766d88-rcpfl" WorkloadEndpoint="ip--172--31--27--158-k8s-goldmane--5b85766d88--rcpfl-eth0" Apr 25 00:01:58.243405 containerd[1990]: 2026-04-25 00:01:57.814 [INFO][5758] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b9848d6bc331214d9e2538af2e86f833beb527c2ac2bc1db765e00c779106e9a" 
HandleID="k8s-pod-network.b9848d6bc331214d9e2538af2e86f833beb527c2ac2bc1db765e00c779106e9a" Workload="ip--172--31--27--158-k8s-goldmane--5b85766d88--rcpfl-eth0" Apr 25 00:01:58.243405 containerd[1990]: 2026-04-25 00:01:57.834 [INFO][5758] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b9848d6bc331214d9e2538af2e86f833beb527c2ac2bc1db765e00c779106e9a" HandleID="k8s-pod-network.b9848d6bc331214d9e2538af2e86f833beb527c2ac2bc1db765e00c779106e9a" Workload="ip--172--31--27--158-k8s-goldmane--5b85766d88--rcpfl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001039a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-27-158", "pod":"goldmane-5b85766d88-rcpfl", "timestamp":"2026-04-25 00:01:57.814577091 +0000 UTC"}, Hostname:"ip-172-31-27-158", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00035b600)} Apr 25 00:01:58.243405 containerd[1990]: 2026-04-25 00:01:57.837 [INFO][5758] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:58.243405 containerd[1990]: 2026-04-25 00:01:57.927 [INFO][5758] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 25 00:01:58.243405 containerd[1990]: 2026-04-25 00:01:57.927 [INFO][5758] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-158' Apr 25 00:01:58.243405 containerd[1990]: 2026-04-25 00:01:57.950 [INFO][5758] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b9848d6bc331214d9e2538af2e86f833beb527c2ac2bc1db765e00c779106e9a" host="ip-172-31-27-158" Apr 25 00:01:58.243405 containerd[1990]: 2026-04-25 00:01:57.985 [INFO][5758] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-27-158" Apr 25 00:01:58.243405 containerd[1990]: 2026-04-25 00:01:58.033 [INFO][5758] ipam/ipam.go 526: Trying affinity for 192.168.100.0/26 host="ip-172-31-27-158" Apr 25 00:01:58.243405 containerd[1990]: 2026-04-25 00:01:58.036 [INFO][5758] ipam/ipam.go 160: Attempting to load block cidr=192.168.100.0/26 host="ip-172-31-27-158" Apr 25 00:01:58.243405 containerd[1990]: 2026-04-25 00:01:58.066 [INFO][5758] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.100.0/26 host="ip-172-31-27-158" Apr 25 00:01:58.243405 containerd[1990]: 2026-04-25 00:01:58.068 [INFO][5758] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.100.0/26 handle="k8s-pod-network.b9848d6bc331214d9e2538af2e86f833beb527c2ac2bc1db765e00c779106e9a" host="ip-172-31-27-158" Apr 25 00:01:58.243405 containerd[1990]: 2026-04-25 00:01:58.072 [INFO][5758] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b9848d6bc331214d9e2538af2e86f833beb527c2ac2bc1db765e00c779106e9a Apr 25 00:01:58.243405 containerd[1990]: 2026-04-25 00:01:58.085 [INFO][5758] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.100.0/26 handle="k8s-pod-network.b9848d6bc331214d9e2538af2e86f833beb527c2ac2bc1db765e00c779106e9a" host="ip-172-31-27-158" Apr 25 00:01:58.243405 containerd[1990]: 2026-04-25 00:01:58.102 [INFO][5758] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.100.7/26] block=192.168.100.0/26 
handle="k8s-pod-network.b9848d6bc331214d9e2538af2e86f833beb527c2ac2bc1db765e00c779106e9a" host="ip-172-31-27-158" Apr 25 00:01:58.243405 containerd[1990]: 2026-04-25 00:01:58.102 [INFO][5758] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.100.7/26] handle="k8s-pod-network.b9848d6bc331214d9e2538af2e86f833beb527c2ac2bc1db765e00c779106e9a" host="ip-172-31-27-158" Apr 25 00:01:58.243405 containerd[1990]: 2026-04-25 00:01:58.102 [INFO][5758] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:58.243405 containerd[1990]: 2026-04-25 00:01:58.103 [INFO][5758] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.100.7/26] IPv6=[] ContainerID="b9848d6bc331214d9e2538af2e86f833beb527c2ac2bc1db765e00c779106e9a" HandleID="k8s-pod-network.b9848d6bc331214d9e2538af2e86f833beb527c2ac2bc1db765e00c779106e9a" Workload="ip--172--31--27--158-k8s-goldmane--5b85766d88--rcpfl-eth0" Apr 25 00:01:58.245049 containerd[1990]: 2026-04-25 00:01:58.121 [INFO][5728] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b9848d6bc331214d9e2538af2e86f833beb527c2ac2bc1db765e00c779106e9a" Namespace="calico-system" Pod="goldmane-5b85766d88-rcpfl" WorkloadEndpoint="ip--172--31--27--158-k8s-goldmane--5b85766d88--rcpfl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--158-k8s-goldmane--5b85766d88--rcpfl-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"22e1b7c8-1a20-4649-bf8c-3b2a82e5872a", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 1, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-158", ContainerID:"", Pod:"goldmane-5b85766d88-rcpfl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.100.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali07984bcb261", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:58.245049 containerd[1990]: 2026-04-25 00:01:58.121 [INFO][5728] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.7/32] ContainerID="b9848d6bc331214d9e2538af2e86f833beb527c2ac2bc1db765e00c779106e9a" Namespace="calico-system" Pod="goldmane-5b85766d88-rcpfl" WorkloadEndpoint="ip--172--31--27--158-k8s-goldmane--5b85766d88--rcpfl-eth0" Apr 25 00:01:58.245049 containerd[1990]: 2026-04-25 00:01:58.126 [INFO][5728] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali07984bcb261 ContainerID="b9848d6bc331214d9e2538af2e86f833beb527c2ac2bc1db765e00c779106e9a" Namespace="calico-system" Pod="goldmane-5b85766d88-rcpfl" WorkloadEndpoint="ip--172--31--27--158-k8s-goldmane--5b85766d88--rcpfl-eth0" Apr 25 00:01:58.245049 containerd[1990]: 2026-04-25 00:01:58.176 [INFO][5728] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b9848d6bc331214d9e2538af2e86f833beb527c2ac2bc1db765e00c779106e9a" Namespace="calico-system" Pod="goldmane-5b85766d88-rcpfl" WorkloadEndpoint="ip--172--31--27--158-k8s-goldmane--5b85766d88--rcpfl-eth0" Apr 25 00:01:58.245049 containerd[1990]: 2026-04-25 00:01:58.184 [INFO][5728] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b9848d6bc331214d9e2538af2e86f833beb527c2ac2bc1db765e00c779106e9a" Namespace="calico-system" Pod="goldmane-5b85766d88-rcpfl" WorkloadEndpoint="ip--172--31--27--158-k8s-goldmane--5b85766d88--rcpfl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--158-k8s-goldmane--5b85766d88--rcpfl-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"22e1b7c8-1a20-4649-bf8c-3b2a82e5872a", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 1, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-158", ContainerID:"b9848d6bc331214d9e2538af2e86f833beb527c2ac2bc1db765e00c779106e9a", Pod:"goldmane-5b85766d88-rcpfl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.100.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali07984bcb261", MAC:"aa:ef:0e:b3:cc:b3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:58.245049 containerd[1990]: 2026-04-25 00:01:58.237 [INFO][5728] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b9848d6bc331214d9e2538af2e86f833beb527c2ac2bc1db765e00c779106e9a" Namespace="calico-system" Pod="goldmane-5b85766d88-rcpfl" 
WorkloadEndpoint="ip--172--31--27--158-k8s-goldmane--5b85766d88--rcpfl-eth0" Apr 25 00:01:58.354040 containerd[1990]: time="2026-04-25T00:01:58.345225383Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:01:58.354040 containerd[1990]: time="2026-04-25T00:01:58.352014727Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:01:58.354040 containerd[1990]: time="2026-04-25T00:01:58.352036393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:01:58.354040 containerd[1990]: time="2026-04-25T00:01:58.352213309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:01:58.442898 containerd[1990]: time="2026-04-25T00:01:58.438783141Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:01:58.442898 containerd[1990]: time="2026-04-25T00:01:58.439338703Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:01:58.442898 containerd[1990]: time="2026-04-25T00:01:58.439490005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:01:58.450526 containerd[1990]: time="2026-04-25T00:01:58.446385136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:01:58.535723 systemd[1]: Started cri-containerd-0f249ea0d959610296b32bc44ae27a2ad39468fb3f27593840922b62aa9c0dc2.scope - libcontainer container 0f249ea0d959610296b32bc44ae27a2ad39468fb3f27593840922b62aa9c0dc2. 
Apr 25 00:01:58.570075 systemd[1]: Started cri-containerd-b9848d6bc331214d9e2538af2e86f833beb527c2ac2bc1db765e00c779106e9a.scope - libcontainer container b9848d6bc331214d9e2538af2e86f833beb527c2ac2bc1db765e00c779106e9a. Apr 25 00:01:58.601798 containerd[1990]: 2026-04-25 00:01:58.271 [WARNING][5789] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--158-k8s-csi--node--driver--8m4mt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b2f1eba8-430b-4eb5-88b7-fcf647e52b8e", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 1, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-158", ContainerID:"c98103ac2c682c1ead40fd35b61796269df0da01745057e081645829bd18f090", Pod:"csi-node-driver-8m4mt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.100.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4e0d6c572db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:58.601798 containerd[1990]: 2026-04-25 00:01:58.271 [INFO][5789] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842" Apr 25 00:01:58.601798 containerd[1990]: 2026-04-25 00:01:58.271 [INFO][5789] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842" iface="eth0" netns="" Apr 25 00:01:58.601798 containerd[1990]: 2026-04-25 00:01:58.271 [INFO][5789] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842" Apr 25 00:01:58.601798 containerd[1990]: 2026-04-25 00:01:58.271 [INFO][5789] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842" Apr 25 00:01:58.601798 containerd[1990]: 2026-04-25 00:01:58.499 [INFO][5816] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842" HandleID="k8s-pod-network.6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842" Workload="ip--172--31--27--158-k8s-csi--node--driver--8m4mt-eth0" Apr 25 00:01:58.601798 containerd[1990]: 2026-04-25 00:01:58.506 [INFO][5816] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:58.601798 containerd[1990]: 2026-04-25 00:01:58.507 [INFO][5816] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:01:58.601798 containerd[1990]: 2026-04-25 00:01:58.532 [WARNING][5816] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842" HandleID="k8s-pod-network.6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842" Workload="ip--172--31--27--158-k8s-csi--node--driver--8m4mt-eth0" Apr 25 00:01:58.601798 containerd[1990]: 2026-04-25 00:01:58.532 [INFO][5816] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842" HandleID="k8s-pod-network.6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842" Workload="ip--172--31--27--158-k8s-csi--node--driver--8m4mt-eth0" Apr 25 00:01:58.601798 containerd[1990]: 2026-04-25 00:01:58.539 [INFO][5816] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:58.601798 containerd[1990]: 2026-04-25 00:01:58.574 [INFO][5789] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842" Apr 25 00:01:58.603552 containerd[1990]: time="2026-04-25T00:01:58.603440828Z" level=info msg="TearDown network for sandbox \"6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842\" successfully" Apr 25 00:01:58.606688 containerd[1990]: time="2026-04-25T00:01:58.605912277Z" level=info msg="StopPodSandbox for \"6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842\" returns successfully" Apr 25 00:01:58.614483 containerd[1990]: time="2026-04-25T00:01:58.614308314Z" level=info msg="RemovePodSandbox for \"6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842\"" Apr 25 00:01:58.614483 containerd[1990]: time="2026-04-25T00:01:58.614361056Z" level=info msg="Forcibly stopping sandbox \"6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842\"" Apr 25 00:01:58.754632 containerd[1990]: time="2026-04-25T00:01:58.754362683Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-69c6c7bbcf-jntps,Uid:fbf459d9-c3ed-42cc-9f78-25b84022bdb0,Namespace:calico-system,Attempt:1,} returns sandbox id \"0f249ea0d959610296b32bc44ae27a2ad39468fb3f27593840922b62aa9c0dc2\"" Apr 25 00:01:58.864827 containerd[1990]: time="2026-04-25T00:01:58.864708628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-rcpfl,Uid:22e1b7c8-1a20-4649-bf8c-3b2a82e5872a,Namespace:calico-system,Attempt:1,} returns sandbox id \"b9848d6bc331214d9e2538af2e86f833beb527c2ac2bc1db765e00c779106e9a\"" Apr 25 00:01:59.041251 containerd[1990]: 2026-04-25 00:01:58.877 [WARNING][5908] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--158-k8s-csi--node--driver--8m4mt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b2f1eba8-430b-4eb5-88b7-fcf647e52b8e", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 1, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-158", ContainerID:"c98103ac2c682c1ead40fd35b61796269df0da01745057e081645829bd18f090", 
Pod:"csi-node-driver-8m4mt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.100.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4e0d6c572db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:59.041251 containerd[1990]: 2026-04-25 00:01:58.880 [INFO][5908] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842" Apr 25 00:01:59.041251 containerd[1990]: 2026-04-25 00:01:58.880 [INFO][5908] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842" iface="eth0" netns="" Apr 25 00:01:59.041251 containerd[1990]: 2026-04-25 00:01:58.880 [INFO][5908] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842" Apr 25 00:01:59.041251 containerd[1990]: 2026-04-25 00:01:58.880 [INFO][5908] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842" Apr 25 00:01:59.041251 containerd[1990]: 2026-04-25 00:01:59.000 [INFO][5932] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842" HandleID="k8s-pod-network.6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842" Workload="ip--172--31--27--158-k8s-csi--node--driver--8m4mt-eth0" Apr 25 00:01:59.041251 containerd[1990]: 2026-04-25 00:01:59.000 [INFO][5932] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:59.041251 containerd[1990]: 2026-04-25 00:01:59.000 [INFO][5932] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 25 00:01:59.041251 containerd[1990]: 2026-04-25 00:01:59.014 [WARNING][5932] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842" HandleID="k8s-pod-network.6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842" Workload="ip--172--31--27--158-k8s-csi--node--driver--8m4mt-eth0" Apr 25 00:01:59.041251 containerd[1990]: 2026-04-25 00:01:59.014 [INFO][5932] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842" HandleID="k8s-pod-network.6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842" Workload="ip--172--31--27--158-k8s-csi--node--driver--8m4mt-eth0" Apr 25 00:01:59.041251 containerd[1990]: 2026-04-25 00:01:59.018 [INFO][5932] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:59.041251 containerd[1990]: 2026-04-25 00:01:59.028 [INFO][5908] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842" Apr 25 00:01:59.041251 containerd[1990]: time="2026-04-25T00:01:59.040902040Z" level=info msg="TearDown network for sandbox \"6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842\" successfully" Apr 25 00:01:59.052784 containerd[1990]: time="2026-04-25T00:01:59.052618948Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 25 00:01:59.053586 containerd[1990]: time="2026-04-25T00:01:59.052821304Z" level=info msg="RemovePodSandbox \"6dee393454143d00d65e01ab944f7727e4467ed317391bde624af236ea87f842\" returns successfully" Apr 25 00:01:59.055062 containerd[1990]: time="2026-04-25T00:01:59.054399380Z" level=info msg="StopPodSandbox for \"bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8\"" Apr 25 00:01:59.255432 sshd[5639]: pam_unix(sshd:session): session closed for user core Apr 25 00:01:59.266338 systemd[1]: sshd@7-172.31.27.158:22-4.175.71.9:35414.service: Deactivated successfully. Apr 25 00:01:59.269113 systemd[1]: session-8.scope: Deactivated successfully. Apr 25 00:01:59.271368 systemd-logind[1965]: Session 8 logged out. Waiting for processes to exit. Apr 25 00:01:59.274839 systemd-logind[1965]: Removed session 8. Apr 25 00:01:59.280272 containerd[1990]: 2026-04-25 00:01:59.180 [WARNING][5949] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8" WorkloadEndpoint="ip--172--31--27--158-k8s-whisker--fb5969844--jxrhx-eth0" Apr 25 00:01:59.280272 containerd[1990]: 2026-04-25 00:01:59.181 [INFO][5949] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8" Apr 25 00:01:59.280272 containerd[1990]: 2026-04-25 00:01:59.181 [INFO][5949] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8" iface="eth0" netns="" Apr 25 00:01:59.280272 containerd[1990]: 2026-04-25 00:01:59.181 [INFO][5949] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8" Apr 25 00:01:59.280272 containerd[1990]: 2026-04-25 00:01:59.181 [INFO][5949] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8" Apr 25 00:01:59.280272 containerd[1990]: 2026-04-25 00:01:59.240 [INFO][5956] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8" HandleID="k8s-pod-network.bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8" Workload="ip--172--31--27--158-k8s-whisker--fb5969844--jxrhx-eth0" Apr 25 00:01:59.280272 containerd[1990]: 2026-04-25 00:01:59.241 [INFO][5956] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:59.280272 containerd[1990]: 2026-04-25 00:01:59.241 [INFO][5956] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:01:59.280272 containerd[1990]: 2026-04-25 00:01:59.251 [WARNING][5956] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8" HandleID="k8s-pod-network.bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8" Workload="ip--172--31--27--158-k8s-whisker--fb5969844--jxrhx-eth0" Apr 25 00:01:59.280272 containerd[1990]: 2026-04-25 00:01:59.251 [INFO][5956] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8" HandleID="k8s-pod-network.bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8" Workload="ip--172--31--27--158-k8s-whisker--fb5969844--jxrhx-eth0" Apr 25 00:01:59.280272 containerd[1990]: 2026-04-25 00:01:59.254 [INFO][5956] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:59.280272 containerd[1990]: 2026-04-25 00:01:59.275 [INFO][5949] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8" Apr 25 00:01:59.281083 containerd[1990]: time="2026-04-25T00:01:59.280324804Z" level=info msg="TearDown network for sandbox \"bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8\" successfully" Apr 25 00:01:59.281083 containerd[1990]: time="2026-04-25T00:01:59.280358535Z" level=info msg="StopPodSandbox for \"bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8\" returns successfully" Apr 25 00:01:59.281083 containerd[1990]: time="2026-04-25T00:01:59.280948255Z" level=info msg="RemovePodSandbox for \"bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8\"" Apr 25 00:01:59.281083 containerd[1990]: time="2026-04-25T00:01:59.280981942Z" level=info msg="Forcibly stopping sandbox \"bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8\"" Apr 25 00:01:59.408092 systemd-networkd[1895]: cali20080596c14: Gained IPv6LL Apr 25 00:01:59.494577 containerd[1990]: 2026-04-25 00:01:59.379 [WARNING][5972] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving 
forward with the clean up ContainerID="bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8" WorkloadEndpoint="ip--172--31--27--158-k8s-whisker--fb5969844--jxrhx-eth0" Apr 25 00:01:59.494577 containerd[1990]: 2026-04-25 00:01:59.379 [INFO][5972] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8" Apr 25 00:01:59.494577 containerd[1990]: 2026-04-25 00:01:59.379 [INFO][5972] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8" iface="eth0" netns="" Apr 25 00:01:59.494577 containerd[1990]: 2026-04-25 00:01:59.379 [INFO][5972] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8" Apr 25 00:01:59.494577 containerd[1990]: 2026-04-25 00:01:59.379 [INFO][5972] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8" Apr 25 00:01:59.494577 containerd[1990]: 2026-04-25 00:01:59.460 [INFO][5979] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8" HandleID="k8s-pod-network.bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8" Workload="ip--172--31--27--158-k8s-whisker--fb5969844--jxrhx-eth0" Apr 25 00:01:59.494577 containerd[1990]: 2026-04-25 00:01:59.461 [INFO][5979] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:59.494577 containerd[1990]: 2026-04-25 00:01:59.461 [INFO][5979] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:01:59.494577 containerd[1990]: 2026-04-25 00:01:59.481 [WARNING][5979] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8" HandleID="k8s-pod-network.bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8" Workload="ip--172--31--27--158-k8s-whisker--fb5969844--jxrhx-eth0" Apr 25 00:01:59.494577 containerd[1990]: 2026-04-25 00:01:59.482 [INFO][5979] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8" HandleID="k8s-pod-network.bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8" Workload="ip--172--31--27--158-k8s-whisker--fb5969844--jxrhx-eth0" Apr 25 00:01:59.494577 containerd[1990]: 2026-04-25 00:01:59.485 [INFO][5979] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:59.494577 containerd[1990]: 2026-04-25 00:01:59.488 [INFO][5972] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8" Apr 25 00:01:59.495770 containerd[1990]: time="2026-04-25T00:01:59.494623611Z" level=info msg="TearDown network for sandbox \"bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8\" successfully" Apr 25 00:01:59.503745 containerd[1990]: time="2026-04-25T00:01:59.503682987Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 25 00:01:59.504053 containerd[1990]: time="2026-04-25T00:01:59.504012223Z" level=info msg="RemovePodSandbox \"bd84c4a9dec8687203f755a2fb6f8b4e5ef4f96069334227b725b16147e04ce8\" returns successfully" Apr 25 00:01:59.504837 containerd[1990]: time="2026-04-25T00:01:59.504770938Z" level=info msg="StopPodSandbox for \"f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0\"" Apr 25 00:01:59.664675 systemd-networkd[1895]: cali07984bcb261: Gained IPv6LL Apr 25 00:01:59.713932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1664841331.mount: Deactivated successfully. Apr 25 00:01:59.749893 containerd[1990]: 2026-04-25 00:01:59.589 [WARNING][5998] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--158-k8s-calico--kube--controllers--577c9d7cc5--qb9xm-eth0", GenerateName:"calico-kube-controllers-577c9d7cc5-", Namespace:"calico-system", SelfLink:"", UID:"fd5cb78e-5eeb-47d8-bac1-f83b8ac68c9f", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 1, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"577c9d7cc5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-158", 
ContainerID:"d953fbfe5d474b69eaba4d40d449ed7d036769fb5b7da4fe0510442bd48012ed", Pod:"calico-kube-controllers-577c9d7cc5-qb9xm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.100.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1304506ee53", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:59.749893 containerd[1990]: 2026-04-25 00:01:59.591 [INFO][5998] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0" Apr 25 00:01:59.749893 containerd[1990]: 2026-04-25 00:01:59.591 [INFO][5998] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0" iface="eth0" netns="" Apr 25 00:01:59.749893 containerd[1990]: 2026-04-25 00:01:59.591 [INFO][5998] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0" Apr 25 00:01:59.749893 containerd[1990]: 2026-04-25 00:01:59.591 [INFO][5998] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0" Apr 25 00:01:59.749893 containerd[1990]: 2026-04-25 00:01:59.726 [INFO][6005] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0" HandleID="k8s-pod-network.f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0" Workload="ip--172--31--27--158-k8s-calico--kube--controllers--577c9d7cc5--qb9xm-eth0" Apr 25 00:01:59.749893 containerd[1990]: 2026-04-25 00:01:59.726 [INFO][6005] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 25 00:01:59.749893 containerd[1990]: 2026-04-25 00:01:59.727 [INFO][6005] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:01:59.749893 containerd[1990]: 2026-04-25 00:01:59.735 [WARNING][6005] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0" HandleID="k8s-pod-network.f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0" Workload="ip--172--31--27--158-k8s-calico--kube--controllers--577c9d7cc5--qb9xm-eth0" Apr 25 00:01:59.749893 containerd[1990]: 2026-04-25 00:01:59.735 [INFO][6005] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0" HandleID="k8s-pod-network.f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0" Workload="ip--172--31--27--158-k8s-calico--kube--controllers--577c9d7cc5--qb9xm-eth0" Apr 25 00:01:59.749893 containerd[1990]: 2026-04-25 00:01:59.739 [INFO][6005] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:59.749893 containerd[1990]: 2026-04-25 00:01:59.745 [INFO][5998] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0" Apr 25 00:01:59.751928 containerd[1990]: time="2026-04-25T00:01:59.751885269Z" level=info msg="TearDown network for sandbox \"f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0\" successfully" Apr 25 00:01:59.751928 containerd[1990]: time="2026-04-25T00:01:59.751925681Z" level=info msg="StopPodSandbox for \"f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0\" returns successfully" Apr 25 00:01:59.752995 containerd[1990]: time="2026-04-25T00:01:59.752518993Z" level=info msg="RemovePodSandbox for \"f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0\"" Apr 25 00:01:59.752995 containerd[1990]: time="2026-04-25T00:01:59.752572458Z" level=info msg="Forcibly stopping sandbox \"f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0\"" Apr 25 00:01:59.768217 containerd[1990]: time="2026-04-25T00:01:59.768158984Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:01:59.783834 containerd[1990]: time="2026-04-25T00:01:59.783253879Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 25 00:01:59.841596 containerd[1990]: time="2026-04-25T00:01:59.840783729Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:01:59.862696 containerd[1990]: time="2026-04-25T00:01:59.862454461Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:01:59.865580 containerd[1990]: time="2026-04-25T00:01:59.864795858Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 4.38618068s" Apr 25 00:01:59.865580 containerd[1990]: time="2026-04-25T00:01:59.864866183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 25 00:01:59.890000 containerd[1990]: time="2026-04-25T00:01:59.888511072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 25 00:01:59.964675 containerd[1990]: 2026-04-25 00:01:59.856 [WARNING][6027] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--158-k8s-calico--kube--controllers--577c9d7cc5--qb9xm-eth0", GenerateName:"calico-kube-controllers-577c9d7cc5-", Namespace:"calico-system", SelfLink:"", UID:"fd5cb78e-5eeb-47d8-bac1-f83b8ac68c9f", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 1, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"577c9d7cc5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-158", ContainerID:"d953fbfe5d474b69eaba4d40d449ed7d036769fb5b7da4fe0510442bd48012ed", Pod:"calico-kube-controllers-577c9d7cc5-qb9xm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.100.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1304506ee53", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:59.964675 containerd[1990]: 2026-04-25 00:01:59.856 [INFO][6027] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0" Apr 25 00:01:59.964675 containerd[1990]: 2026-04-25 00:01:59.856 [INFO][6027] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0" iface="eth0" netns="" Apr 25 00:01:59.964675 containerd[1990]: 2026-04-25 00:01:59.858 [INFO][6027] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0" Apr 25 00:01:59.964675 containerd[1990]: 2026-04-25 00:01:59.858 [INFO][6027] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0" Apr 25 00:01:59.964675 containerd[1990]: 2026-04-25 00:01:59.940 [INFO][6035] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0" HandleID="k8s-pod-network.f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0" Workload="ip--172--31--27--158-k8s-calico--kube--controllers--577c9d7cc5--qb9xm-eth0" Apr 25 00:01:59.964675 containerd[1990]: 2026-04-25 00:01:59.940 [INFO][6035] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:59.964675 containerd[1990]: 2026-04-25 00:01:59.940 [INFO][6035] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:01:59.964675 containerd[1990]: 2026-04-25 00:01:59.954 [WARNING][6035] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0" HandleID="k8s-pod-network.f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0" Workload="ip--172--31--27--158-k8s-calico--kube--controllers--577c9d7cc5--qb9xm-eth0" Apr 25 00:01:59.964675 containerd[1990]: 2026-04-25 00:01:59.954 [INFO][6035] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0" HandleID="k8s-pod-network.f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0" Workload="ip--172--31--27--158-k8s-calico--kube--controllers--577c9d7cc5--qb9xm-eth0" Apr 25 00:01:59.964675 containerd[1990]: 2026-04-25 00:01:59.957 [INFO][6035] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:59.964675 containerd[1990]: 2026-04-25 00:01:59.959 [INFO][6027] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0" Apr 25 00:01:59.964675 containerd[1990]: time="2026-04-25T00:01:59.963906109Z" level=info msg="TearDown network for sandbox \"f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0\" successfully" Apr 25 00:01:59.975138 containerd[1990]: time="2026-04-25T00:01:59.974912419Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 25 00:01:59.975138 containerd[1990]: time="2026-04-25T00:01:59.975011362Z" level=info msg="RemovePodSandbox \"f7641c696a5f158ed68ecae52cccd73b06ec1bd8ea9b4482f8cf9a1f731a5ee0\" returns successfully" Apr 25 00:02:00.061674 containerd[1990]: time="2026-04-25T00:02:00.056294022Z" level=info msg="StopPodSandbox for \"2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b\"" Apr 25 00:02:00.232922 containerd[1990]: time="2026-04-25T00:02:00.232196340Z" level=info msg="CreateContainer within sandbox \"563fd08ced390cc2818facb89f19c3e2549a9cf1fd3f62268d6ee6c830acc85c\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 25 00:02:00.235145 containerd[1990]: time="2026-04-25T00:02:00.235101239Z" level=info msg="StopPodSandbox for \"4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b\"" Apr 25 00:02:00.305237 containerd[1990]: 2026-04-25 00:02:00.204 [WARNING][6049] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--158-k8s-coredns--674b8bbfcf--7qhnl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e2dc8fd3-53ee-4b31-8d4f-cbcd7d64f683", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 1, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-158", ContainerID:"5d28358b93cd623bca7baaef48f9437b6d3a894f0bac1671ba0a7e7c3d77c7dc", Pod:"coredns-674b8bbfcf-7qhnl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidd0fd434e65", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:02:00.305237 containerd[1990]: 2026-04-25 00:02:00.205 
[INFO][6049] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b" Apr 25 00:02:00.305237 containerd[1990]: 2026-04-25 00:02:00.205 [INFO][6049] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b" iface="eth0" netns="" Apr 25 00:02:00.305237 containerd[1990]: 2026-04-25 00:02:00.205 [INFO][6049] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b" Apr 25 00:02:00.305237 containerd[1990]: 2026-04-25 00:02:00.205 [INFO][6049] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b" Apr 25 00:02:00.305237 containerd[1990]: 2026-04-25 00:02:00.273 [INFO][6056] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b" HandleID="k8s-pod-network.2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b" Workload="ip--172--31--27--158-k8s-coredns--674b8bbfcf--7qhnl-eth0" Apr 25 00:02:00.305237 containerd[1990]: 2026-04-25 00:02:00.273 [INFO][6056] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:02:00.305237 containerd[1990]: 2026-04-25 00:02:00.273 [INFO][6056] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:02:00.305237 containerd[1990]: 2026-04-25 00:02:00.291 [WARNING][6056] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b" HandleID="k8s-pod-network.2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b" Workload="ip--172--31--27--158-k8s-coredns--674b8bbfcf--7qhnl-eth0" Apr 25 00:02:00.305237 containerd[1990]: 2026-04-25 00:02:00.291 [INFO][6056] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b" HandleID="k8s-pod-network.2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b" Workload="ip--172--31--27--158-k8s-coredns--674b8bbfcf--7qhnl-eth0" Apr 25 00:02:00.305237 containerd[1990]: 2026-04-25 00:02:00.294 [INFO][6056] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:02:00.305237 containerd[1990]: 2026-04-25 00:02:00.300 [INFO][6049] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b" Apr 25 00:02:00.306038 containerd[1990]: time="2026-04-25T00:02:00.305245926Z" level=info msg="TearDown network for sandbox \"2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b\" successfully" Apr 25 00:02:00.306038 containerd[1990]: time="2026-04-25T00:02:00.305277788Z" level=info msg="StopPodSandbox for \"2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b\" returns successfully" Apr 25 00:02:00.306131 containerd[1990]: time="2026-04-25T00:02:00.306035284Z" level=info msg="RemovePodSandbox for \"2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b\"" Apr 25 00:02:00.307637 containerd[1990]: time="2026-04-25T00:02:00.306940388Z" level=info msg="Forcibly stopping sandbox \"2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b\"" Apr 25 00:02:00.353383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount781672097.mount: Deactivated successfully. 
Apr 25 00:02:00.361915 containerd[1990]: time="2026-04-25T00:02:00.361862900Z" level=info msg="CreateContainer within sandbox \"563fd08ced390cc2818facb89f19c3e2549a9cf1fd3f62268d6ee6c830acc85c\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"ef488c44e29ff62c235cce276c3691f78798587b7c9a0e59ff553c582e18c623\"" Apr 25 00:02:00.421068 containerd[1990]: time="2026-04-25T00:02:00.405757279Z" level=info msg="StartContainer for \"ef488c44e29ff62c235cce276c3691f78798587b7c9a0e59ff553c582e18c623\"" Apr 25 00:02:00.652336 systemd[1]: Started cri-containerd-ef488c44e29ff62c235cce276c3691f78798587b7c9a0e59ff553c582e18c623.scope - libcontainer container ef488c44e29ff62c235cce276c3691f78798587b7c9a0e59ff553c582e18c623. Apr 25 00:02:00.660909 containerd[1990]: 2026-04-25 00:02:00.501 [WARNING][6086] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--158-k8s-coredns--674b8bbfcf--7qhnl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e2dc8fd3-53ee-4b31-8d4f-cbcd7d64f683", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 1, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-158", 
ContainerID:"5d28358b93cd623bca7baaef48f9437b6d3a894f0bac1671ba0a7e7c3d77c7dc", Pod:"coredns-674b8bbfcf-7qhnl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidd0fd434e65", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:02:00.660909 containerd[1990]: 2026-04-25 00:02:00.506 [INFO][6086] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b" Apr 25 00:02:00.660909 containerd[1990]: 2026-04-25 00:02:00.506 [INFO][6086] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b" iface="eth0" netns="" Apr 25 00:02:00.660909 containerd[1990]: 2026-04-25 00:02:00.506 [INFO][6086] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b" Apr 25 00:02:00.660909 containerd[1990]: 2026-04-25 00:02:00.506 [INFO][6086] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b" Apr 25 00:02:00.660909 containerd[1990]: 2026-04-25 00:02:00.592 [INFO][6098] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b" HandleID="k8s-pod-network.2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b" Workload="ip--172--31--27--158-k8s-coredns--674b8bbfcf--7qhnl-eth0" Apr 25 00:02:00.660909 containerd[1990]: 2026-04-25 00:02:00.592 [INFO][6098] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:02:00.660909 containerd[1990]: 2026-04-25 00:02:00.592 [INFO][6098] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:02:00.660909 containerd[1990]: 2026-04-25 00:02:00.608 [WARNING][6098] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b" HandleID="k8s-pod-network.2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b" Workload="ip--172--31--27--158-k8s-coredns--674b8bbfcf--7qhnl-eth0" Apr 25 00:02:00.660909 containerd[1990]: 2026-04-25 00:02:00.608 [INFO][6098] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b" HandleID="k8s-pod-network.2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b" Workload="ip--172--31--27--158-k8s-coredns--674b8bbfcf--7qhnl-eth0" Apr 25 00:02:00.660909 containerd[1990]: 2026-04-25 00:02:00.612 [INFO][6098] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:02:00.660909 containerd[1990]: 2026-04-25 00:02:00.646 [INFO][6086] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b" Apr 25 00:02:00.663597 containerd[1990]: time="2026-04-25T00:02:00.662475795Z" level=info msg="TearDown network for sandbox \"2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b\" successfully" Apr 25 00:02:00.683283 containerd[1990]: time="2026-04-25T00:02:00.683131142Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 25 00:02:00.685596 containerd[1990]: time="2026-04-25T00:02:00.685301799Z" level=info msg="RemovePodSandbox \"2f38bbb2480ea12f3469cef594a8289b35cc1ff875b665231f9722dbe73ee97b\" returns successfully" Apr 25 00:02:00.689034 containerd[1990]: 2026-04-25 00:02:00.469 [INFO][6071] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b" Apr 25 00:02:00.689034 containerd[1990]: 2026-04-25 00:02:00.479 [INFO][6071] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b" iface="eth0" netns="/var/run/netns/cni-a40a8740-f70e-d7af-d877-3a5bca5f98a5" Apr 25 00:02:00.689034 containerd[1990]: 2026-04-25 00:02:00.487 [INFO][6071] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b" iface="eth0" netns="/var/run/netns/cni-a40a8740-f70e-d7af-d877-3a5bca5f98a5" Apr 25 00:02:00.689034 containerd[1990]: 2026-04-25 00:02:00.491 [INFO][6071] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b" iface="eth0" netns="/var/run/netns/cni-a40a8740-f70e-d7af-d877-3a5bca5f98a5" Apr 25 00:02:00.689034 containerd[1990]: 2026-04-25 00:02:00.491 [INFO][6071] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b" Apr 25 00:02:00.689034 containerd[1990]: 2026-04-25 00:02:00.492 [INFO][6071] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b" Apr 25 00:02:00.689034 containerd[1990]: 2026-04-25 00:02:00.599 [INFO][6095] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b" HandleID="k8s-pod-network.4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b" Workload="ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--8gdvg-eth0" Apr 25 00:02:00.689034 containerd[1990]: 2026-04-25 00:02:00.601 [INFO][6095] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:02:00.689034 containerd[1990]: 2026-04-25 00:02:00.613 [INFO][6095] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:02:00.689034 containerd[1990]: 2026-04-25 00:02:00.665 [WARNING][6095] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b" HandleID="k8s-pod-network.4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b" Workload="ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--8gdvg-eth0" Apr 25 00:02:00.689034 containerd[1990]: 2026-04-25 00:02:00.665 [INFO][6095] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b" HandleID="k8s-pod-network.4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b" Workload="ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--8gdvg-eth0" Apr 25 00:02:00.689034 containerd[1990]: 2026-04-25 00:02:00.674 [INFO][6095] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:02:00.689034 containerd[1990]: 2026-04-25 00:02:00.682 [INFO][6071] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b" Apr 25 00:02:00.693876 containerd[1990]: time="2026-04-25T00:02:00.693256956Z" level=info msg="TearDown network for sandbox \"4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b\" successfully" Apr 25 00:02:00.693876 containerd[1990]: time="2026-04-25T00:02:00.693301027Z" level=info msg="StopPodSandbox for \"4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b\" returns successfully" Apr 25 00:02:00.699013 systemd[1]: run-netns-cni\x2da40a8740\x2df70e\x2dd7af\x2dd877\x2d3a5bca5f98a5.mount: Deactivated successfully. 
Apr 25 00:02:00.719249 containerd[1990]: time="2026-04-25T00:02:00.716477168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69c6c7bbcf-8gdvg,Uid:04413392-8f1c-4eff-8af3-8c2e64b92e0c,Namespace:calico-system,Attempt:1,}" Apr 25 00:02:00.822230 containerd[1990]: time="2026-04-25T00:02:00.822021957Z" level=info msg="StartContainer for \"ef488c44e29ff62c235cce276c3691f78798587b7c9a0e59ff553c582e18c623\" returns successfully" Apr 25 00:02:01.025726 systemd-networkd[1895]: cali65405d83ee5: Link UP Apr 25 00:02:01.028131 systemd-networkd[1895]: cali65405d83ee5: Gained carrier Apr 25 00:02:01.087362 containerd[1990]: 2026-04-25 00:02:00.855 [INFO][6144] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--8gdvg-eth0 calico-apiserver-69c6c7bbcf- calico-system 04413392-8f1c-4eff-8af3-8c2e64b92e0c 1110 0 2026-04-25 00:01:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69c6c7bbcf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-27-158 calico-apiserver-69c6c7bbcf-8gdvg eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali65405d83ee5 [] [] }} ContainerID="eb4fc1b51bd70b20a399a751a1bafcab873e4453befa31db3c0cd0548f65ed08" Namespace="calico-system" Pod="calico-apiserver-69c6c7bbcf-8gdvg" WorkloadEndpoint="ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--8gdvg-" Apr 25 00:02:01.087362 containerd[1990]: 2026-04-25 00:02:00.855 [INFO][6144] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eb4fc1b51bd70b20a399a751a1bafcab873e4453befa31db3c0cd0548f65ed08" Namespace="calico-system" Pod="calico-apiserver-69c6c7bbcf-8gdvg" WorkloadEndpoint="ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--8gdvg-eth0" Apr 25 
00:02:01.087362 containerd[1990]: 2026-04-25 00:02:00.929 [INFO][6167] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb4fc1b51bd70b20a399a751a1bafcab873e4453befa31db3c0cd0548f65ed08" HandleID="k8s-pod-network.eb4fc1b51bd70b20a399a751a1bafcab873e4453befa31db3c0cd0548f65ed08" Workload="ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--8gdvg-eth0" Apr 25 00:02:01.087362 containerd[1990]: 2026-04-25 00:02:00.945 [INFO][6167] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="eb4fc1b51bd70b20a399a751a1bafcab873e4453befa31db3c0cd0548f65ed08" HandleID="k8s-pod-network.eb4fc1b51bd70b20a399a751a1bafcab873e4453befa31db3c0cd0548f65ed08" Workload="ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--8gdvg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000309eb0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-27-158", "pod":"calico-apiserver-69c6c7bbcf-8gdvg", "timestamp":"2026-04-25 00:02:00.929930673 +0000 UTC"}, Hostname:"ip-172-31-27-158", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000113b80)} Apr 25 00:02:01.087362 containerd[1990]: 2026-04-25 00:02:00.945 [INFO][6167] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:02:01.087362 containerd[1990]: 2026-04-25 00:02:00.945 [INFO][6167] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 25 00:02:01.087362 containerd[1990]: 2026-04-25 00:02:00.945 [INFO][6167] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-158' Apr 25 00:02:01.087362 containerd[1990]: 2026-04-25 00:02:00.949 [INFO][6167] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.eb4fc1b51bd70b20a399a751a1bafcab873e4453befa31db3c0cd0548f65ed08" host="ip-172-31-27-158" Apr 25 00:02:01.087362 containerd[1990]: 2026-04-25 00:02:00.959 [INFO][6167] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-27-158" Apr 25 00:02:01.087362 containerd[1990]: 2026-04-25 00:02:00.970 [INFO][6167] ipam/ipam.go 526: Trying affinity for 192.168.100.0/26 host="ip-172-31-27-158" Apr 25 00:02:01.087362 containerd[1990]: 2026-04-25 00:02:00.977 [INFO][6167] ipam/ipam.go 160: Attempting to load block cidr=192.168.100.0/26 host="ip-172-31-27-158" Apr 25 00:02:01.087362 containerd[1990]: 2026-04-25 00:02:00.985 [INFO][6167] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.100.0/26 host="ip-172-31-27-158" Apr 25 00:02:01.087362 containerd[1990]: 2026-04-25 00:02:00.986 [INFO][6167] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.100.0/26 handle="k8s-pod-network.eb4fc1b51bd70b20a399a751a1bafcab873e4453befa31db3c0cd0548f65ed08" host="ip-172-31-27-158" Apr 25 00:02:01.087362 containerd[1990]: 2026-04-25 00:02:00.988 [INFO][6167] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.eb4fc1b51bd70b20a399a751a1bafcab873e4453befa31db3c0cd0548f65ed08 Apr 25 00:02:01.087362 containerd[1990]: 2026-04-25 00:02:00.997 [INFO][6167] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.100.0/26 handle="k8s-pod-network.eb4fc1b51bd70b20a399a751a1bafcab873e4453befa31db3c0cd0548f65ed08" host="ip-172-31-27-158" Apr 25 00:02:01.087362 containerd[1990]: 2026-04-25 00:02:01.012 [INFO][6167] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.100.8/26] block=192.168.100.0/26 
handle="k8s-pod-network.eb4fc1b51bd70b20a399a751a1bafcab873e4453befa31db3c0cd0548f65ed08" host="ip-172-31-27-158" Apr 25 00:02:01.087362 containerd[1990]: 2026-04-25 00:02:01.012 [INFO][6167] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.100.8/26] handle="k8s-pod-network.eb4fc1b51bd70b20a399a751a1bafcab873e4453befa31db3c0cd0548f65ed08" host="ip-172-31-27-158" Apr 25 00:02:01.087362 containerd[1990]: 2026-04-25 00:02:01.012 [INFO][6167] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:02:01.087362 containerd[1990]: 2026-04-25 00:02:01.012 [INFO][6167] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.100.8/26] IPv6=[] ContainerID="eb4fc1b51bd70b20a399a751a1bafcab873e4453befa31db3c0cd0548f65ed08" HandleID="k8s-pod-network.eb4fc1b51bd70b20a399a751a1bafcab873e4453befa31db3c0cd0548f65ed08" Workload="ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--8gdvg-eth0" Apr 25 00:02:01.090202 containerd[1990]: 2026-04-25 00:02:01.019 [INFO][6144] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eb4fc1b51bd70b20a399a751a1bafcab873e4453befa31db3c0cd0548f65ed08" Namespace="calico-system" Pod="calico-apiserver-69c6c7bbcf-8gdvg" WorkloadEndpoint="ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--8gdvg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--8gdvg-eth0", GenerateName:"calico-apiserver-69c6c7bbcf-", Namespace:"calico-system", SelfLink:"", UID:"04413392-8f1c-4eff-8af3-8c2e64b92e0c", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 1, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69c6c7bbcf", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-158", ContainerID:"", Pod:"calico-apiserver-69c6c7bbcf-8gdvg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali65405d83ee5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:02:01.090202 containerd[1990]: 2026-04-25 00:02:01.020 [INFO][6144] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.8/32] ContainerID="eb4fc1b51bd70b20a399a751a1bafcab873e4453befa31db3c0cd0548f65ed08" Namespace="calico-system" Pod="calico-apiserver-69c6c7bbcf-8gdvg" WorkloadEndpoint="ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--8gdvg-eth0" Apr 25 00:02:01.090202 containerd[1990]: 2026-04-25 00:02:01.020 [INFO][6144] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali65405d83ee5 ContainerID="eb4fc1b51bd70b20a399a751a1bafcab873e4453befa31db3c0cd0548f65ed08" Namespace="calico-system" Pod="calico-apiserver-69c6c7bbcf-8gdvg" WorkloadEndpoint="ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--8gdvg-eth0" Apr 25 00:02:01.090202 containerd[1990]: 2026-04-25 00:02:01.027 [INFO][6144] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb4fc1b51bd70b20a399a751a1bafcab873e4453befa31db3c0cd0548f65ed08" Namespace="calico-system" Pod="calico-apiserver-69c6c7bbcf-8gdvg" WorkloadEndpoint="ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--8gdvg-eth0" Apr 25 00:02:01.090202 containerd[1990]: 2026-04-25 00:02:01.031 [INFO][6144] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eb4fc1b51bd70b20a399a751a1bafcab873e4453befa31db3c0cd0548f65ed08" Namespace="calico-system" Pod="calico-apiserver-69c6c7bbcf-8gdvg" WorkloadEndpoint="ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--8gdvg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--8gdvg-eth0", GenerateName:"calico-apiserver-69c6c7bbcf-", Namespace:"calico-system", SelfLink:"", UID:"04413392-8f1c-4eff-8af3-8c2e64b92e0c", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 1, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69c6c7bbcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-158", ContainerID:"eb4fc1b51bd70b20a399a751a1bafcab873e4453befa31db3c0cd0548f65ed08", Pod:"calico-apiserver-69c6c7bbcf-8gdvg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali65405d83ee5", MAC:"de:77:e7:83:3d:31", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:02:01.090202 containerd[1990]: 2026-04-25 00:02:01.062 [INFO][6144] cni-plugin/k8s.go 532: 
Wrote updated endpoint to datastore ContainerID="eb4fc1b51bd70b20a399a751a1bafcab873e4453befa31db3c0cd0548f65ed08" Namespace="calico-system" Pod="calico-apiserver-69c6c7bbcf-8gdvg" WorkloadEndpoint="ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--8gdvg-eth0" Apr 25 00:02:01.116063 kubelet[3201]: I0425 00:02:01.094377 3201 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-c48c75d7c-gqbmh" podStartSLOduration=3.334437725 podStartE2EDuration="15.060870479s" podCreationTimestamp="2026-04-25 00:01:46 +0000 UTC" firstStartedPulling="2026-04-25 00:01:48.157845793 +0000 UTC m=+52.444723292" lastFinishedPulling="2026-04-25 00:01:59.884278544 +0000 UTC m=+64.171156046" observedRunningTime="2026-04-25 00:02:00.945116974 +0000 UTC m=+65.231994480" watchObservedRunningTime="2026-04-25 00:02:01.060870479 +0000 UTC m=+65.347747984" Apr 25 00:02:01.160665 containerd[1990]: time="2026-04-25T00:02:01.160486953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:02:01.161635 containerd[1990]: time="2026-04-25T00:02:01.160630592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:02:01.161635 containerd[1990]: time="2026-04-25T00:02:01.160659441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:02:01.161635 containerd[1990]: time="2026-04-25T00:02:01.160890777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:02:01.200139 systemd[1]: Started cri-containerd-eb4fc1b51bd70b20a399a751a1bafcab873e4453befa31db3c0cd0548f65ed08.scope - libcontainer container eb4fc1b51bd70b20a399a751a1bafcab873e4453befa31db3c0cd0548f65ed08. 
Apr 25 00:02:01.277829 containerd[1990]: time="2026-04-25T00:02:01.276510071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69c6c7bbcf-8gdvg,Uid:04413392-8f1c-4eff-8af3-8c2e64b92e0c,Namespace:calico-system,Attempt:1,} returns sandbox id \"eb4fc1b51bd70b20a399a751a1bafcab873e4453befa31db3c0cd0548f65ed08\"" Apr 25 00:02:03.068623 systemd-networkd[1895]: cali65405d83ee5: Gained IPv6LL Apr 25 00:02:04.444221 systemd[1]: Started sshd@8-172.31.27.158:22-4.175.71.9:35428.service - OpenSSH per-connection server daemon (4.175.71.9:35428). Apr 25 00:02:05.347315 containerd[1990]: time="2026-04-25T00:02:05.347252411Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:02:05.351093 containerd[1990]: time="2026-04-25T00:02:05.351019400Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 25 00:02:05.375878 containerd[1990]: time="2026-04-25T00:02:05.375790895Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:02:05.380851 containerd[1990]: time="2026-04-25T00:02:05.379836290Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:02:05.381209 containerd[1990]: time="2026-04-25T00:02:05.381170209Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 5.492612037s" Apr 25 
00:02:05.381328 containerd[1990]: time="2026-04-25T00:02:05.381310978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 25 00:02:05.406669 containerd[1990]: time="2026-04-25T00:02:05.406602208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 25 00:02:05.497291 ntpd[1951]: Listen normally on 15 cali20080596c14 [fe80::ecee:eeff:feee:eeee%12]:123 Apr 25 00:02:05.499039 ntpd[1951]: 25 Apr 00:02:05 ntpd[1951]: Listen normally on 15 cali20080596c14 [fe80::ecee:eeff:feee:eeee%12]:123 Apr 25 00:02:05.499039 ntpd[1951]: 25 Apr 00:02:05 ntpd[1951]: Listen normally on 16 cali07984bcb261 [fe80::ecee:eeff:feee:eeee%13]:123 Apr 25 00:02:05.499039 ntpd[1951]: 25 Apr 00:02:05 ntpd[1951]: Listen normally on 17 cali65405d83ee5 [fe80::ecee:eeff:feee:eeee%14]:123 Apr 25 00:02:05.497380 ntpd[1951]: Listen normally on 16 cali07984bcb261 [fe80::ecee:eeff:feee:eeee%13]:123 Apr 25 00:02:05.497425 ntpd[1951]: Listen normally on 17 cali65405d83ee5 [fe80::ecee:eeff:feee:eeee%14]:123 Apr 25 00:02:05.546687 sshd[6245]: Accepted publickey for core from 4.175.71.9 port 35428 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg Apr 25 00:02:05.556155 sshd[6245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:02:05.568382 systemd-logind[1965]: New session 9 of user core. Apr 25 00:02:05.573286 systemd[1]: Started session-9.scope - Session 9 of User core. 
Apr 25 00:02:05.636665 containerd[1990]: time="2026-04-25T00:02:05.636540067Z" level=info msg="CreateContainer within sandbox \"0f249ea0d959610296b32bc44ae27a2ad39468fb3f27593840922b62aa9c0dc2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 25 00:02:05.694671 containerd[1990]: time="2026-04-25T00:02:05.694602802Z" level=info msg="CreateContainer within sandbox \"0f249ea0d959610296b32bc44ae27a2ad39468fb3f27593840922b62aa9c0dc2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5442a5d94a7385f5f1976bca76f0473bb43de47cc56a519b9eebbf4f1ae6a94b\"" Apr 25 00:02:05.703899 containerd[1990]: time="2026-04-25T00:02:05.702541997Z" level=info msg="StartContainer for \"5442a5d94a7385f5f1976bca76f0473bb43de47cc56a519b9eebbf4f1ae6a94b\"" Apr 25 00:02:05.817066 systemd[1]: Started cri-containerd-5442a5d94a7385f5f1976bca76f0473bb43de47cc56a519b9eebbf4f1ae6a94b.scope - libcontainer container 5442a5d94a7385f5f1976bca76f0473bb43de47cc56a519b9eebbf4f1ae6a94b. 
Apr 25 00:02:05.892586 containerd[1990]: time="2026-04-25T00:02:05.892463043Z" level=info msg="StartContainer for \"5442a5d94a7385f5f1976bca76f0473bb43de47cc56a519b9eebbf4f1ae6a94b\" returns successfully" Apr 25 00:02:07.344553 kubelet[3201]: I0425 00:02:07.344471 3201 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-69c6c7bbcf-jntps" podStartSLOduration=44.777573129 podStartE2EDuration="51.344358529s" podCreationTimestamp="2026-04-25 00:01:16 +0000 UTC" firstStartedPulling="2026-04-25 00:01:58.840014789 +0000 UTC m=+63.126892274" lastFinishedPulling="2026-04-25 00:02:05.406800179 +0000 UTC m=+69.693677674" observedRunningTime="2026-04-25 00:02:07.337661132 +0000 UTC m=+71.624538628" watchObservedRunningTime="2026-04-25 00:02:07.344358529 +0000 UTC m=+71.631236035" Apr 25 00:02:07.368128 sshd[6245]: pam_unix(sshd:session): session closed for user core Apr 25 00:02:07.391779 systemd[1]: sshd@8-172.31.27.158:22-4.175.71.9:35428.service: Deactivated successfully. Apr 25 00:02:07.401386 systemd[1]: session-9.scope: Deactivated successfully. Apr 25 00:02:07.404639 systemd-logind[1965]: Session 9 logged out. Waiting for processes to exit. Apr 25 00:02:07.410522 systemd-logind[1965]: Removed session 9. Apr 25 00:02:08.681850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount901743180.mount: Deactivated successfully. 
Apr 25 00:02:09.416790 containerd[1990]: time="2026-04-25T00:02:09.416736586Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:02:09.419189 containerd[1990]: time="2026-04-25T00:02:09.419116640Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 25 00:02:09.421295 containerd[1990]: time="2026-04-25T00:02:09.419928903Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:02:09.422785 containerd[1990]: time="2026-04-25T00:02:09.422745915Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:02:09.423796 containerd[1990]: time="2026-04-25T00:02:09.423755426Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 4.016843845s" Apr 25 00:02:09.424016 containerd[1990]: time="2026-04-25T00:02:09.423990481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 25 00:02:09.425267 containerd[1990]: time="2026-04-25T00:02:09.425240170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 25 00:02:09.431251 containerd[1990]: time="2026-04-25T00:02:09.431204812Z" level=info msg="CreateContainer within sandbox 
\"b9848d6bc331214d9e2538af2e86f833beb527c2ac2bc1db765e00c779106e9a\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 25 00:02:09.465848 containerd[1990]: time="2026-04-25T00:02:09.463753551Z" level=info msg="CreateContainer within sandbox \"b9848d6bc331214d9e2538af2e86f833beb527c2ac2bc1db765e00c779106e9a\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"77a4b448799b527cbb103fa42356ff23175b2cf774a4761ec27120c8fc0bcf09\"" Apr 25 00:02:09.467121 containerd[1990]: time="2026-04-25T00:02:09.467031721Z" level=info msg="StartContainer for \"77a4b448799b527cbb103fa42356ff23175b2cf774a4761ec27120c8fc0bcf09\"" Apr 25 00:02:09.866183 systemd[1]: Started cri-containerd-77a4b448799b527cbb103fa42356ff23175b2cf774a4761ec27120c8fc0bcf09.scope - libcontainer container 77a4b448799b527cbb103fa42356ff23175b2cf774a4761ec27120c8fc0bcf09. Apr 25 00:02:09.893150 containerd[1990]: time="2026-04-25T00:02:09.893099284Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:02:09.898493 containerd[1990]: time="2026-04-25T00:02:09.896782237Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 25 00:02:09.902162 containerd[1990]: time="2026-04-25T00:02:09.902104014Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 476.82435ms" Apr 25 00:02:09.902344 containerd[1990]: time="2026-04-25T00:02:09.902326749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 25 
00:02:09.937080 containerd[1990]: time="2026-04-25T00:02:09.937035009Z" level=info msg="CreateContainer within sandbox \"eb4fc1b51bd70b20a399a751a1bafcab873e4453befa31db3c0cd0548f65ed08\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 25 00:02:09.960842 containerd[1990]: time="2026-04-25T00:02:09.959063042Z" level=info msg="StartContainer for \"77a4b448799b527cbb103fa42356ff23175b2cf774a4761ec27120c8fc0bcf09\" returns successfully" Apr 25 00:02:09.985243 containerd[1990]: time="2026-04-25T00:02:09.985190916Z" level=info msg="CreateContainer within sandbox \"eb4fc1b51bd70b20a399a751a1bafcab873e4453befa31db3c0cd0548f65ed08\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6716a9ec315d07043e9c091baacf3f3beaec5db3572db7c41c694d6a92c3dd44\"" Apr 25 00:02:09.986041 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2982492274.mount: Deactivated successfully. Apr 25 00:02:09.987915 containerd[1990]: time="2026-04-25T00:02:09.986855686Z" level=info msg="StartContainer for \"6716a9ec315d07043e9c091baacf3f3beaec5db3572db7c41c694d6a92c3dd44\"" Apr 25 00:02:10.042030 systemd[1]: Started cri-containerd-6716a9ec315d07043e9c091baacf3f3beaec5db3572db7c41c694d6a92c3dd44.scope - libcontainer container 6716a9ec315d07043e9c091baacf3f3beaec5db3572db7c41c694d6a92c3dd44. 
Apr 25 00:02:10.125348 containerd[1990]: time="2026-04-25T00:02:10.125232264Z" level=info msg="StartContainer for \"6716a9ec315d07043e9c091baacf3f3beaec5db3572db7c41c694d6a92c3dd44\" returns successfully" Apr 25 00:02:10.327655 kubelet[3201]: I0425 00:02:10.326486 3201 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-69c6c7bbcf-8gdvg" podStartSLOduration=45.684154371 podStartE2EDuration="54.326457324s" podCreationTimestamp="2026-04-25 00:01:16 +0000 UTC" firstStartedPulling="2026-04-25 00:02:01.281703952 +0000 UTC m=+65.568581435" lastFinishedPulling="2026-04-25 00:02:09.924006879 +0000 UTC m=+74.210884388" observedRunningTime="2026-04-25 00:02:10.28008222 +0000 UTC m=+74.566959728" watchObservedRunningTime="2026-04-25 00:02:10.326457324 +0000 UTC m=+74.613334830" Apr 25 00:02:12.284882 kubelet[3201]: I0425 00:02:12.282997 3201 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-rcpfl" podStartSLOduration=44.728027479 podStartE2EDuration="55.282975305s" podCreationTimestamp="2026-04-25 00:01:17 +0000 UTC" firstStartedPulling="2026-04-25 00:01:58.870038394 +0000 UTC m=+63.156915893" lastFinishedPulling="2026-04-25 00:02:09.424986223 +0000 UTC m=+73.711863719" observedRunningTime="2026-04-25 00:02:10.329660774 +0000 UTC m=+74.616538271" watchObservedRunningTime="2026-04-25 00:02:12.282975305 +0000 UTC m=+76.569852817" Apr 25 00:02:12.561222 systemd[1]: Started sshd@9-172.31.27.158:22-4.175.71.9:45666.service - OpenSSH per-connection server daemon (4.175.71.9:45666). Apr 25 00:02:13.643859 sshd[6489]: Accepted publickey for core from 4.175.71.9 port 45666 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg Apr 25 00:02:13.648765 sshd[6489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:02:13.658885 systemd-logind[1965]: New session 10 of user core. 
Apr 25 00:02:13.664208 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 25 00:02:15.396629 sshd[6489]: pam_unix(sshd:session): session closed for user core Apr 25 00:02:15.402395 systemd[1]: sshd@9-172.31.27.158:22-4.175.71.9:45666.service: Deactivated successfully. Apr 25 00:02:15.402747 systemd-logind[1965]: Session 10 logged out. Waiting for processes to exit. Apr 25 00:02:15.406409 systemd[1]: session-10.scope: Deactivated successfully. Apr 25 00:02:15.410429 systemd-logind[1965]: Removed session 10. Apr 25 00:02:15.584406 systemd[1]: Started sshd@10-172.31.27.158:22-4.175.71.9:48410.service - OpenSSH per-connection server daemon (4.175.71.9:48410). Apr 25 00:02:16.618798 sshd[6503]: Accepted publickey for core from 4.175.71.9 port 48410 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg Apr 25 00:02:16.622383 sshd[6503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:02:16.630475 systemd-logind[1965]: New session 11 of user core. Apr 25 00:02:16.641141 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 25 00:02:18.034994 sshd[6503]: pam_unix(sshd:session): session closed for user core Apr 25 00:02:18.058779 systemd[1]: sshd@10-172.31.27.158:22-4.175.71.9:48410.service: Deactivated successfully. Apr 25 00:02:18.065538 systemd[1]: session-11.scope: Deactivated successfully. Apr 25 00:02:18.071481 systemd-logind[1965]: Session 11 logged out. Waiting for processes to exit. Apr 25 00:02:18.076368 systemd-logind[1965]: Removed session 11. Apr 25 00:02:18.187261 systemd[1]: Started sshd@11-172.31.27.158:22-4.175.71.9:48412.service - OpenSSH per-connection server daemon (4.175.71.9:48412). 
Apr 25 00:02:19.259389 sshd[6552]: Accepted publickey for core from 4.175.71.9 port 48412 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg Apr 25 00:02:19.263276 sshd[6552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:02:19.272542 systemd-logind[1965]: New session 12 of user core. Apr 25 00:02:19.276071 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 25 00:02:20.208185 sshd[6552]: pam_unix(sshd:session): session closed for user core Apr 25 00:02:20.214427 systemd[1]: sshd@11-172.31.27.158:22-4.175.71.9:48412.service: Deactivated successfully. Apr 25 00:02:20.215889 systemd-logind[1965]: Session 12 logged out. Waiting for processes to exit. Apr 25 00:02:20.219098 systemd[1]: session-12.scope: Deactivated successfully. Apr 25 00:02:20.221781 systemd-logind[1965]: Removed session 12. Apr 25 00:02:25.386323 systemd[1]: Started sshd@12-172.31.27.158:22-4.175.71.9:48426.service - OpenSSH per-connection server daemon (4.175.71.9:48426). Apr 25 00:02:26.436405 sshd[6597]: Accepted publickey for core from 4.175.71.9 port 48426 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg Apr 25 00:02:26.440441 sshd[6597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:02:26.447260 systemd-logind[1965]: New session 13 of user core. Apr 25 00:02:26.452042 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 25 00:02:27.515327 sshd[6597]: pam_unix(sshd:session): session closed for user core Apr 25 00:02:27.519295 systemd[1]: sshd@12-172.31.27.158:22-4.175.71.9:48426.service: Deactivated successfully. Apr 25 00:02:27.521781 systemd[1]: session-13.scope: Deactivated successfully. Apr 25 00:02:27.524197 systemd-logind[1965]: Session 13 logged out. Waiting for processes to exit. Apr 25 00:02:27.525440 systemd-logind[1965]: Removed session 13. 
Apr 25 00:02:27.696274 systemd[1]: Started sshd@13-172.31.27.158:22-4.175.71.9:49134.service - OpenSSH per-connection server daemon (4.175.71.9:49134). Apr 25 00:02:28.734707 sshd[6610]: Accepted publickey for core from 4.175.71.9 port 49134 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg Apr 25 00:02:28.735456 sshd[6610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:02:28.740461 systemd-logind[1965]: New session 14 of user core. Apr 25 00:02:28.748161 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 25 00:02:29.995121 sshd[6610]: pam_unix(sshd:session): session closed for user core Apr 25 00:02:29.998936 systemd[1]: sshd@13-172.31.27.158:22-4.175.71.9:49134.service: Deactivated successfully. Apr 25 00:02:30.002180 systemd[1]: session-14.scope: Deactivated successfully. Apr 25 00:02:30.004141 systemd-logind[1965]: Session 14 logged out. Waiting for processes to exit. Apr 25 00:02:30.005621 systemd-logind[1965]: Removed session 14. Apr 25 00:02:30.182474 systemd[1]: Started sshd@14-172.31.27.158:22-4.175.71.9:49136.service - OpenSSH per-connection server daemon (4.175.71.9:49136). Apr 25 00:02:31.240951 sshd[6621]: Accepted publickey for core from 4.175.71.9 port 49136 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg Apr 25 00:02:31.243585 sshd[6621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:02:31.263036 systemd-logind[1965]: New session 15 of user core. Apr 25 00:02:31.265040 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 25 00:02:33.072888 sshd[6621]: pam_unix(sshd:session): session closed for user core Apr 25 00:02:33.088029 systemd-logind[1965]: Session 15 logged out. Waiting for processes to exit. Apr 25 00:02:33.089714 systemd[1]: sshd@14-172.31.27.158:22-4.175.71.9:49136.service: Deactivated successfully. Apr 25 00:02:33.093119 systemd[1]: session-15.scope: Deactivated successfully. 
Apr 25 00:02:33.095011 systemd-logind[1965]: Removed session 15. Apr 25 00:02:33.250326 systemd[1]: Started sshd@15-172.31.27.158:22-4.175.71.9:49152.service - OpenSSH per-connection server daemon (4.175.71.9:49152). Apr 25 00:02:34.359249 sshd[6657]: Accepted publickey for core from 4.175.71.9 port 49152 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg Apr 25 00:02:34.362225 sshd[6657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:02:34.369351 systemd-logind[1965]: New session 16 of user core. Apr 25 00:02:34.375104 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 25 00:02:36.372672 sshd[6657]: pam_unix(sshd:session): session closed for user core Apr 25 00:02:36.376999 systemd[1]: sshd@15-172.31.27.158:22-4.175.71.9:49152.service: Deactivated successfully. Apr 25 00:02:36.380424 systemd[1]: session-16.scope: Deactivated successfully. Apr 25 00:02:36.382172 systemd-logind[1965]: Session 16 logged out. Waiting for processes to exit. Apr 25 00:02:36.383978 systemd-logind[1965]: Removed session 16. Apr 25 00:02:36.541257 systemd[1]: Started sshd@16-172.31.27.158:22-4.175.71.9:50744.service - OpenSSH per-connection server daemon (4.175.71.9:50744). Apr 25 00:02:37.562208 sshd[6670]: Accepted publickey for core from 4.175.71.9 port 50744 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg Apr 25 00:02:37.564192 sshd[6670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:02:37.569319 systemd-logind[1965]: New session 17 of user core. Apr 25 00:02:37.574210 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 25 00:02:38.486091 sshd[6670]: pam_unix(sshd:session): session closed for user core Apr 25 00:02:38.491445 systemd-logind[1965]: Session 17 logged out. Waiting for processes to exit. Apr 25 00:02:38.492761 systemd[1]: sshd@16-172.31.27.158:22-4.175.71.9:50744.service: Deactivated successfully. 
Apr 25 00:02:38.495702 systemd[1]: session-17.scope: Deactivated successfully. Apr 25 00:02:38.497218 systemd-logind[1965]: Removed session 17. Apr 25 00:02:43.663608 systemd[1]: Started sshd@17-172.31.27.158:22-4.175.71.9:50750.service - OpenSSH per-connection server daemon (4.175.71.9:50750). Apr 25 00:02:44.744828 sshd[6756]: Accepted publickey for core from 4.175.71.9 port 50750 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg Apr 25 00:02:44.748366 sshd[6756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:02:44.755980 systemd-logind[1965]: New session 18 of user core. Apr 25 00:02:44.761141 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 25 00:02:45.899424 sshd[6756]: pam_unix(sshd:session): session closed for user core Apr 25 00:02:45.906374 systemd[1]: sshd@17-172.31.27.158:22-4.175.71.9:50750.service: Deactivated successfully. Apr 25 00:02:45.910692 systemd[1]: session-18.scope: Deactivated successfully. Apr 25 00:02:45.912494 systemd-logind[1965]: Session 18 logged out. Waiting for processes to exit. Apr 25 00:02:45.914364 systemd-logind[1965]: Removed session 18. Apr 25 00:02:51.076581 systemd[1]: Started sshd@18-172.31.27.158:22-4.175.71.9:45540.service - OpenSSH per-connection server daemon (4.175.71.9:45540). Apr 25 00:02:52.122292 sshd[6793]: Accepted publickey for core from 4.175.71.9 port 45540 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg Apr 25 00:02:52.124563 sshd[6793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:02:52.134149 systemd-logind[1965]: New session 19 of user core. Apr 25 00:02:52.141099 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 25 00:02:53.232608 sshd[6793]: pam_unix(sshd:session): session closed for user core Apr 25 00:02:53.237205 systemd[1]: sshd@18-172.31.27.158:22-4.175.71.9:45540.service: Deactivated successfully. 
Apr 25 00:02:53.240716 systemd[1]: session-19.scope: Deactivated successfully. Apr 25 00:02:53.241686 systemd-logind[1965]: Session 19 logged out. Waiting for processes to exit. Apr 25 00:02:53.243259 systemd-logind[1965]: Removed session 19. Apr 25 00:02:58.402905 systemd[1]: Started sshd@19-172.31.27.158:22-4.175.71.9:36074.service - OpenSSH per-connection server daemon (4.175.71.9:36074). Apr 25 00:02:59.493174 sshd[6826]: Accepted publickey for core from 4.175.71.9 port 36074 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg Apr 25 00:02:59.496602 sshd[6826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:02:59.503526 systemd-logind[1965]: New session 20 of user core. Apr 25 00:02:59.507055 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 25 00:03:00.894578 containerd[1990]: time="2026-04-25T00:03:00.864164948Z" level=info msg="StopPodSandbox for \"ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb\"" Apr 25 00:03:02.594698 sshd[6826]: pam_unix(sshd:session): session closed for user core Apr 25 00:03:02.666236 systemd[1]: sshd@19-172.31.27.158:22-4.175.71.9:36074.service: Deactivated successfully. Apr 25 00:03:02.684713 systemd[1]: session-20.scope: Deactivated successfully. Apr 25 00:03:02.699290 systemd-logind[1965]: Session 20 logged out. Waiting for processes to exit. Apr 25 00:03:02.703583 systemd-logind[1965]: Removed session 20. Apr 25 00:03:03.852013 containerd[1990]: 2026-04-25 00:03:03.152 [WARNING][6844] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--jntps-eth0", GenerateName:"calico-apiserver-69c6c7bbcf-", Namespace:"calico-system", SelfLink:"", UID:"fbf459d9-c3ed-42cc-9f78-25b84022bdb0", ResourceVersion:"1161", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 1, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69c6c7bbcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-158", ContainerID:"0f249ea0d959610296b32bc44ae27a2ad39468fb3f27593840922b62aa9c0dc2", Pod:"calico-apiserver-69c6c7bbcf-jntps", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali20080596c14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:03:03.852013 containerd[1990]: 2026-04-25 00:03:03.169 [INFO][6844] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb" Apr 25 00:03:03.852013 containerd[1990]: 2026-04-25 00:03:03.169 [INFO][6844] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb" iface="eth0" netns="" Apr 25 00:03:03.852013 containerd[1990]: 2026-04-25 00:03:03.170 [INFO][6844] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb" Apr 25 00:03:03.852013 containerd[1990]: 2026-04-25 00:03:03.170 [INFO][6844] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb" Apr 25 00:03:03.852013 containerd[1990]: 2026-04-25 00:03:03.818 [INFO][6857] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb" HandleID="k8s-pod-network.ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb" Workload="ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--jntps-eth0" Apr 25 00:03:03.852013 containerd[1990]: 2026-04-25 00:03:03.821 [INFO][6857] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:03:03.852013 containerd[1990]: 2026-04-25 00:03:03.822 [INFO][6857] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:03:03.852013 containerd[1990]: 2026-04-25 00:03:03.844 [WARNING][6857] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb" HandleID="k8s-pod-network.ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb" Workload="ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--jntps-eth0" Apr 25 00:03:03.852013 containerd[1990]: 2026-04-25 00:03:03.844 [INFO][6857] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb" HandleID="k8s-pod-network.ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb" Workload="ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--jntps-eth0" Apr 25 00:03:03.852013 containerd[1990]: 2026-04-25 00:03:03.846 [INFO][6857] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:03:03.852013 containerd[1990]: 2026-04-25 00:03:03.849 [INFO][6844] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb" Apr 25 00:03:03.860268 containerd[1990]: time="2026-04-25T00:03:03.860186749Z" level=info msg="TearDown network for sandbox \"ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb\" successfully" Apr 25 00:03:03.860390 containerd[1990]: time="2026-04-25T00:03:03.860273466Z" level=info msg="StopPodSandbox for \"ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb\" returns successfully" Apr 25 00:03:03.917130 containerd[1990]: time="2026-04-25T00:03:03.917081658Z" level=info msg="RemovePodSandbox for \"ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb\"" Apr 25 00:03:03.921834 containerd[1990]: time="2026-04-25T00:03:03.921623410Z" level=info msg="Forcibly stopping sandbox \"ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb\"" Apr 25 00:03:04.077819 containerd[1990]: 2026-04-25 00:03:04.027 [WARNING][6871] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--jntps-eth0", GenerateName:"calico-apiserver-69c6c7bbcf-", Namespace:"calico-system", SelfLink:"", UID:"fbf459d9-c3ed-42cc-9f78-25b84022bdb0", ResourceVersion:"1161", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 1, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69c6c7bbcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-158", ContainerID:"0f249ea0d959610296b32bc44ae27a2ad39468fb3f27593840922b62aa9c0dc2", Pod:"calico-apiserver-69c6c7bbcf-jntps", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali20080596c14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:03:04.077819 containerd[1990]: 2026-04-25 00:03:04.027 [INFO][6871] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb" Apr 25 00:03:04.077819 containerd[1990]: 2026-04-25 00:03:04.027 [INFO][6871] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb" iface="eth0" netns="" Apr 25 00:03:04.077819 containerd[1990]: 2026-04-25 00:03:04.027 [INFO][6871] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb" Apr 25 00:03:04.077819 containerd[1990]: 2026-04-25 00:03:04.027 [INFO][6871] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb" Apr 25 00:03:04.077819 containerd[1990]: 2026-04-25 00:03:04.060 [INFO][6878] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb" HandleID="k8s-pod-network.ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb" Workload="ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--jntps-eth0" Apr 25 00:03:04.077819 containerd[1990]: 2026-04-25 00:03:04.060 [INFO][6878] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:03:04.077819 containerd[1990]: 2026-04-25 00:03:04.060 [INFO][6878] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:03:04.077819 containerd[1990]: 2026-04-25 00:03:04.068 [WARNING][6878] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb" HandleID="k8s-pod-network.ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb" Workload="ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--jntps-eth0" Apr 25 00:03:04.077819 containerd[1990]: 2026-04-25 00:03:04.068 [INFO][6878] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb" HandleID="k8s-pod-network.ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb" Workload="ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--jntps-eth0" Apr 25 00:03:04.077819 containerd[1990]: 2026-04-25 00:03:04.070 [INFO][6878] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:03:04.077819 containerd[1990]: 2026-04-25 00:03:04.073 [INFO][6871] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb" Apr 25 00:03:04.084040 containerd[1990]: time="2026-04-25T00:03:04.077870234Z" level=info msg="TearDown network for sandbox \"ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb\" successfully" Apr 25 00:03:04.213551 containerd[1990]: time="2026-04-25T00:03:04.213393142Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 25 00:03:04.213551 containerd[1990]: time="2026-04-25T00:03:04.213524336Z" level=info msg="RemovePodSandbox \"ed3f0a01adfb897e02cbbabab677857bc6cfdd0a94a489aca9de207ca706f5fb\" returns successfully" Apr 25 00:03:04.215744 containerd[1990]: time="2026-04-25T00:03:04.215694092Z" level=info msg="StopPodSandbox for \"4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b\"" Apr 25 00:03:04.315049 containerd[1990]: 2026-04-25 00:03:04.266 [WARNING][6892] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--8gdvg-eth0", GenerateName:"calico-apiserver-69c6c7bbcf-", Namespace:"calico-system", SelfLink:"", UID:"04413392-8f1c-4eff-8af3-8c2e64b92e0c", ResourceVersion:"1197", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 1, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69c6c7bbcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-158", ContainerID:"eb4fc1b51bd70b20a399a751a1bafcab873e4453befa31db3c0cd0548f65ed08", Pod:"calico-apiserver-69c6c7bbcf-8gdvg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali65405d83ee5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:03:04.315049 containerd[1990]: 2026-04-25 00:03:04.266 [INFO][6892] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b" Apr 25 00:03:04.315049 containerd[1990]: 2026-04-25 00:03:04.266 [INFO][6892] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b" iface="eth0" netns="" Apr 25 00:03:04.315049 containerd[1990]: 2026-04-25 00:03:04.266 [INFO][6892] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b" Apr 25 00:03:04.315049 containerd[1990]: 2026-04-25 00:03:04.266 [INFO][6892] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b" Apr 25 00:03:04.315049 containerd[1990]: 2026-04-25 00:03:04.297 [INFO][6899] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b" HandleID="k8s-pod-network.4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b" Workload="ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--8gdvg-eth0" Apr 25 00:03:04.315049 containerd[1990]: 2026-04-25 00:03:04.298 [INFO][6899] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:03:04.315049 containerd[1990]: 2026-04-25 00:03:04.298 [INFO][6899] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:03:04.315049 containerd[1990]: 2026-04-25 00:03:04.307 [WARNING][6899] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b" HandleID="k8s-pod-network.4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b" Workload="ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--8gdvg-eth0" Apr 25 00:03:04.315049 containerd[1990]: 2026-04-25 00:03:04.307 [INFO][6899] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b" HandleID="k8s-pod-network.4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b" Workload="ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--8gdvg-eth0" Apr 25 00:03:04.315049 containerd[1990]: 2026-04-25 00:03:04.309 [INFO][6899] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:03:04.315049 containerd[1990]: 2026-04-25 00:03:04.312 [INFO][6892] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b" Apr 25 00:03:04.317239 containerd[1990]: time="2026-04-25T00:03:04.315101226Z" level=info msg="TearDown network for sandbox \"4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b\" successfully" Apr 25 00:03:04.317239 containerd[1990]: time="2026-04-25T00:03:04.315133705Z" level=info msg="StopPodSandbox for \"4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b\" returns successfully" Apr 25 00:03:04.317239 containerd[1990]: time="2026-04-25T00:03:04.316124781Z" level=info msg="RemovePodSandbox for \"4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b\"" Apr 25 00:03:04.317239 containerd[1990]: time="2026-04-25T00:03:04.316182919Z" level=info msg="Forcibly stopping sandbox \"4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b\"" Apr 25 00:03:04.427053 containerd[1990]: 2026-04-25 00:03:04.374 [WARNING][6914] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--8gdvg-eth0", GenerateName:"calico-apiserver-69c6c7bbcf-", Namespace:"calico-system", SelfLink:"", UID:"04413392-8f1c-4eff-8af3-8c2e64b92e0c", ResourceVersion:"1197", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 1, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69c6c7bbcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-158", ContainerID:"eb4fc1b51bd70b20a399a751a1bafcab873e4453befa31db3c0cd0548f65ed08", Pod:"calico-apiserver-69c6c7bbcf-8gdvg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali65405d83ee5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:03:04.427053 containerd[1990]: 2026-04-25 00:03:04.374 [INFO][6914] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b" Apr 25 00:03:04.427053 containerd[1990]: 2026-04-25 00:03:04.374 [INFO][6914] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b" iface="eth0" netns="" Apr 25 00:03:04.427053 containerd[1990]: 2026-04-25 00:03:04.374 [INFO][6914] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b" Apr 25 00:03:04.427053 containerd[1990]: 2026-04-25 00:03:04.376 [INFO][6914] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b" Apr 25 00:03:04.427053 containerd[1990]: 2026-04-25 00:03:04.409 [INFO][6921] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b" HandleID="k8s-pod-network.4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b" Workload="ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--8gdvg-eth0" Apr 25 00:03:04.427053 containerd[1990]: 2026-04-25 00:03:04.409 [INFO][6921] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:03:04.427053 containerd[1990]: 2026-04-25 00:03:04.409 [INFO][6921] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:03:04.427053 containerd[1990]: 2026-04-25 00:03:04.419 [WARNING][6921] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b" HandleID="k8s-pod-network.4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b" Workload="ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--8gdvg-eth0" Apr 25 00:03:04.427053 containerd[1990]: 2026-04-25 00:03:04.419 [INFO][6921] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b" HandleID="k8s-pod-network.4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b" Workload="ip--172--31--27--158-k8s-calico--apiserver--69c6c7bbcf--8gdvg-eth0" Apr 25 00:03:04.427053 containerd[1990]: 2026-04-25 00:03:04.422 [INFO][6921] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:03:04.427053 containerd[1990]: 2026-04-25 00:03:04.424 [INFO][6914] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b" Apr 25 00:03:04.429780 containerd[1990]: time="2026-04-25T00:03:04.427102006Z" level=info msg="TearDown network for sandbox \"4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b\" successfully" Apr 25 00:03:04.481874 containerd[1990]: time="2026-04-25T00:03:04.480272168Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 25 00:03:04.481874 containerd[1990]: time="2026-04-25T00:03:04.480364821Z" level=info msg="RemovePodSandbox \"4c664e38ed1ddebd79dac12eca70529df9d2d43bdc5d27d55a46e7bf7413c02b\" returns successfully" Apr 25 00:03:04.481874 containerd[1990]: time="2026-04-25T00:03:04.481067595Z" level=info msg="StopPodSandbox for \"8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3\"" Apr 25 00:03:04.623705 containerd[1990]: 2026-04-25 00:03:04.569 [WARNING][6935] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--158-k8s-goldmane--5b85766d88--rcpfl-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"22e1b7c8-1a20-4649-bf8c-3b2a82e5872a", ResourceVersion:"1381", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 1, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-158", ContainerID:"b9848d6bc331214d9e2538af2e86f833beb527c2ac2bc1db765e00c779106e9a", Pod:"goldmane-5b85766d88-rcpfl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.100.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali07984bcb261", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:03:04.623705 containerd[1990]: 2026-04-25 00:03:04.569 [INFO][6935] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3" Apr 25 00:03:04.623705 containerd[1990]: 2026-04-25 00:03:04.569 [INFO][6935] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3" iface="eth0" netns="" Apr 25 00:03:04.623705 containerd[1990]: 2026-04-25 00:03:04.569 [INFO][6935] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3" Apr 25 00:03:04.623705 containerd[1990]: 2026-04-25 00:03:04.569 [INFO][6935] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3" Apr 25 00:03:04.623705 containerd[1990]: 2026-04-25 00:03:04.604 [INFO][6942] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3" HandleID="k8s-pod-network.8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3" Workload="ip--172--31--27--158-k8s-goldmane--5b85766d88--rcpfl-eth0" Apr 25 00:03:04.623705 containerd[1990]: 2026-04-25 00:03:04.604 [INFO][6942] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:03:04.623705 containerd[1990]: 2026-04-25 00:03:04.604 [INFO][6942] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:03:04.623705 containerd[1990]: 2026-04-25 00:03:04.612 [WARNING][6942] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3" HandleID="k8s-pod-network.8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3" Workload="ip--172--31--27--158-k8s-goldmane--5b85766d88--rcpfl-eth0" Apr 25 00:03:04.623705 containerd[1990]: 2026-04-25 00:03:04.612 [INFO][6942] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3" HandleID="k8s-pod-network.8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3" Workload="ip--172--31--27--158-k8s-goldmane--5b85766d88--rcpfl-eth0" Apr 25 00:03:04.623705 containerd[1990]: 2026-04-25 00:03:04.615 [INFO][6942] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:03:04.623705 containerd[1990]: 2026-04-25 00:03:04.619 [INFO][6935] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3" Apr 25 00:03:04.626088 containerd[1990]: time="2026-04-25T00:03:04.623736247Z" level=info msg="TearDown network for sandbox \"8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3\" successfully" Apr 25 00:03:04.626088 containerd[1990]: time="2026-04-25T00:03:04.623769008Z" level=info msg="StopPodSandbox for \"8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3\" returns successfully" Apr 25 00:03:04.634827 containerd[1990]: time="2026-04-25T00:03:04.634740308Z" level=info msg="RemovePodSandbox for \"8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3\"" Apr 25 00:03:04.635137 containerd[1990]: time="2026-04-25T00:03:04.634845409Z" level=info msg="Forcibly stopping sandbox \"8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3\"" Apr 25 00:03:04.801922 containerd[1990]: 2026-04-25 00:03:04.700 [WARNING][6956] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--158-k8s-goldmane--5b85766d88--rcpfl-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"22e1b7c8-1a20-4649-bf8c-3b2a82e5872a", ResourceVersion:"1381", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 1, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-158", ContainerID:"b9848d6bc331214d9e2538af2e86f833beb527c2ac2bc1db765e00c779106e9a", Pod:"goldmane-5b85766d88-rcpfl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.100.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali07984bcb261", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:03:04.801922 containerd[1990]: 2026-04-25 00:03:04.706 [INFO][6956] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3" Apr 25 00:03:04.801922 containerd[1990]: 2026-04-25 00:03:04.706 [INFO][6956] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3" iface="eth0" netns="" Apr 25 00:03:04.801922 containerd[1990]: 2026-04-25 00:03:04.706 [INFO][6956] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3" Apr 25 00:03:04.801922 containerd[1990]: 2026-04-25 00:03:04.706 [INFO][6956] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3" Apr 25 00:03:04.801922 containerd[1990]: 2026-04-25 00:03:04.754 [INFO][6963] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3" HandleID="k8s-pod-network.8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3" Workload="ip--172--31--27--158-k8s-goldmane--5b85766d88--rcpfl-eth0" Apr 25 00:03:04.801922 containerd[1990]: 2026-04-25 00:03:04.754 [INFO][6963] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:03:04.801922 containerd[1990]: 2026-04-25 00:03:04.755 [INFO][6963] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:03:04.801922 containerd[1990]: 2026-04-25 00:03:04.763 [WARNING][6963] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3" HandleID="k8s-pod-network.8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3" Workload="ip--172--31--27--158-k8s-goldmane--5b85766d88--rcpfl-eth0" Apr 25 00:03:04.801922 containerd[1990]: 2026-04-25 00:03:04.763 [INFO][6963] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3" HandleID="k8s-pod-network.8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3" Workload="ip--172--31--27--158-k8s-goldmane--5b85766d88--rcpfl-eth0" Apr 25 00:03:04.801922 containerd[1990]: 2026-04-25 00:03:04.766 [INFO][6963] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:03:04.801922 containerd[1990]: 2026-04-25 00:03:04.781 [INFO][6956] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3" Apr 25 00:03:04.801922 containerd[1990]: time="2026-04-25T00:03:04.801455397Z" level=info msg="TearDown network for sandbox \"8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3\" successfully" Apr 25 00:03:04.840686 containerd[1990]: time="2026-04-25T00:03:04.840605325Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 25 00:03:04.840866 containerd[1990]: time="2026-04-25T00:03:04.840733067Z" level=info msg="RemovePodSandbox \"8480b8bfc3b88997b841ed2b5efba0aa4c3c3df5b6c0bbe2aee2e6390efb87d3\" returns successfully" Apr 25 00:03:12.483051 systemd[1]: run-containerd-runc-k8s.io-77a4b448799b527cbb103fa42356ff23175b2cf774a4761ec27120c8fc0bcf09-runc.haZJz7.mount: Deactivated successfully. 
Apr 25 00:03:16.358992 systemd[1]: cri-containerd-324d4836eaec78f3bf839001abb1decbe19afe1c5e51627a2befea465924a4cb.scope: Deactivated successfully. Apr 25 00:03:16.359288 systemd[1]: cri-containerd-324d4836eaec78f3bf839001abb1decbe19afe1c5e51627a2befea465924a4cb.scope: Consumed 9.770s CPU time. Apr 25 00:03:16.555258 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-324d4836eaec78f3bf839001abb1decbe19afe1c5e51627a2befea465924a4cb-rootfs.mount: Deactivated successfully. Apr 25 00:03:16.575483 containerd[1990]: time="2026-04-25T00:03:16.565899222Z" level=info msg="shim disconnected" id=324d4836eaec78f3bf839001abb1decbe19afe1c5e51627a2befea465924a4cb namespace=k8s.io Apr 25 00:03:16.576094 containerd[1990]: time="2026-04-25T00:03:16.575481982Z" level=warning msg="cleaning up after shim disconnected" id=324d4836eaec78f3bf839001abb1decbe19afe1c5e51627a2befea465924a4cb namespace=k8s.io Apr 25 00:03:16.576094 containerd[1990]: time="2026-04-25T00:03:16.575506991Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 25 00:03:16.683379 containerd[1990]: time="2026-04-25T00:03:16.683238312Z" level=warning msg="cleanup warnings time=\"2026-04-25T00:03:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 25 00:03:17.142986 systemd[1]: cri-containerd-859ddf4f3687150767fa3e7b0495b33caf18f0057d30c1f3411b1cc513f68451.scope: Deactivated successfully. Apr 25 00:03:17.143359 systemd[1]: cri-containerd-859ddf4f3687150767fa3e7b0495b33caf18f0057d30c1f3411b1cc513f68451.scope: Consumed 3.584s CPU time, 17.0M memory peak, 0B memory swap peak. 
Apr 25 00:03:17.174026 containerd[1990]: time="2026-04-25T00:03:17.173747338Z" level=info msg="shim disconnected" id=859ddf4f3687150767fa3e7b0495b33caf18f0057d30c1f3411b1cc513f68451 namespace=k8s.io Apr 25 00:03:17.174026 containerd[1990]: time="2026-04-25T00:03:17.173840763Z" level=warning msg="cleaning up after shim disconnected" id=859ddf4f3687150767fa3e7b0495b33caf18f0057d30c1f3411b1cc513f68451 namespace=k8s.io Apr 25 00:03:17.174026 containerd[1990]: time="2026-04-25T00:03:17.173855775Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 25 00:03:17.182451 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-859ddf4f3687150767fa3e7b0495b33caf18f0057d30c1f3411b1cc513f68451-rootfs.mount: Deactivated successfully. Apr 25 00:03:17.622490 kubelet[3201]: I0425 00:03:17.622426 3201 scope.go:117] "RemoveContainer" containerID="859ddf4f3687150767fa3e7b0495b33caf18f0057d30c1f3411b1cc513f68451" Apr 25 00:03:17.628385 kubelet[3201]: I0425 00:03:17.628188 3201 scope.go:117] "RemoveContainer" containerID="324d4836eaec78f3bf839001abb1decbe19afe1c5e51627a2befea465924a4cb" Apr 25 00:03:17.799323 containerd[1990]: time="2026-04-25T00:03:17.799259551Z" level=info msg="CreateContainer within sandbox \"65cec554f281c6c3a15771b3ccaeee95c5d560e419f4a9b201de3034fde1e1d0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 25 00:03:17.805521 containerd[1990]: time="2026-04-25T00:03:17.805468647Z" level=info msg="CreateContainer within sandbox \"512811563e1a4350cdb7a5904bef113a1a666c4f38d90ad0626adee6e203781c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Apr 25 00:03:17.875614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount167051441.mount: Deactivated successfully. Apr 25 00:03:17.891189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2178599608.mount: Deactivated successfully. 
Apr 25 00:03:17.893181 containerd[1990]: time="2026-04-25T00:03:17.893132786Z" level=info msg="CreateContainer within sandbox \"512811563e1a4350cdb7a5904bef113a1a666c4f38d90ad0626adee6e203781c\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"58f2a59c2b2a346c8c7e72a43431ff0df637a18a7b8f6c84f44acbe5eb26b898\"" Apr 25 00:03:17.901294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1428009968.mount: Deactivated successfully. Apr 25 00:03:17.902735 containerd[1990]: time="2026-04-25T00:03:17.901517372Z" level=info msg="CreateContainer within sandbox \"65cec554f281c6c3a15771b3ccaeee95c5d560e419f4a9b201de3034fde1e1d0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"dfc059dae48ed518af2562e6029e4c570f72ccf3e247808b66164d833f0a68cb\"" Apr 25 00:03:17.904430 containerd[1990]: time="2026-04-25T00:03:17.904392907Z" level=info msg="StartContainer for \"58f2a59c2b2a346c8c7e72a43431ff0df637a18a7b8f6c84f44acbe5eb26b898\"" Apr 25 00:03:17.904779 containerd[1990]: time="2026-04-25T00:03:17.904406473Z" level=info msg="StartContainer for \"dfc059dae48ed518af2562e6029e4c570f72ccf3e247808b66164d833f0a68cb\"" Apr 25 00:03:17.983227 systemd[1]: Started cri-containerd-dfc059dae48ed518af2562e6029e4c570f72ccf3e247808b66164d833f0a68cb.scope - libcontainer container dfc059dae48ed518af2562e6029e4c570f72ccf3e247808b66164d833f0a68cb. Apr 25 00:03:17.996220 systemd[1]: Started cri-containerd-58f2a59c2b2a346c8c7e72a43431ff0df637a18a7b8f6c84f44acbe5eb26b898.scope - libcontainer container 58f2a59c2b2a346c8c7e72a43431ff0df637a18a7b8f6c84f44acbe5eb26b898. 
Apr 25 00:03:18.079090 containerd[1990]: time="2026-04-25T00:03:18.078442943Z" level=info msg="StartContainer for \"58f2a59c2b2a346c8c7e72a43431ff0df637a18a7b8f6c84f44acbe5eb26b898\" returns successfully" Apr 25 00:03:18.099241 containerd[1990]: time="2026-04-25T00:03:18.099203181Z" level=info msg="StartContainer for \"dfc059dae48ed518af2562e6029e4c570f72ccf3e247808b66164d833f0a68cb\" returns successfully" Apr 25 00:03:18.272288 kubelet[3201]: E0425 00:03:18.271783 3201 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-158?timeout=10s\": context deadline exceeded" Apr 25 00:03:22.589058 systemd[1]: cri-containerd-b55ac14f753b18181235717747bfdbfbe4d4d09812a157b6f3a62d5d87dde3d2.scope: Deactivated successfully. Apr 25 00:03:22.589769 systemd[1]: cri-containerd-b55ac14f753b18181235717747bfdbfbe4d4d09812a157b6f3a62d5d87dde3d2.scope: Consumed 2.115s CPU time, 16.3M memory peak, 0B memory swap peak. Apr 25 00:03:22.641146 containerd[1990]: time="2026-04-25T00:03:22.640858828Z" level=info msg="shim disconnected" id=b55ac14f753b18181235717747bfdbfbe4d4d09812a157b6f3a62d5d87dde3d2 namespace=k8s.io Apr 25 00:03:22.641146 containerd[1990]: time="2026-04-25T00:03:22.640949664Z" level=warning msg="cleaning up after shim disconnected" id=b55ac14f753b18181235717747bfdbfbe4d4d09812a157b6f3a62d5d87dde3d2 namespace=k8s.io Apr 25 00:03:22.641146 containerd[1990]: time="2026-04-25T00:03:22.640964384Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 25 00:03:22.646716 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b55ac14f753b18181235717747bfdbfbe4d4d09812a157b6f3a62d5d87dde3d2-rootfs.mount: Deactivated successfully. 
Apr 25 00:03:23.656266 kubelet[3201]: I0425 00:03:23.656237 3201 scope.go:117] "RemoveContainer" containerID="b55ac14f753b18181235717747bfdbfbe4d4d09812a157b6f3a62d5d87dde3d2" Apr 25 00:03:23.658561 systemd[1]: run-containerd-runc-k8s.io-8379b11038cc6d2c598eeaffd03d7ae641456d07418058b37bd8731574e70377-runc.3EZVQa.mount: Deactivated successfully. Apr 25 00:03:23.661152 containerd[1990]: time="2026-04-25T00:03:23.661024112Z" level=info msg="CreateContainer within sandbox \"7c06d652a475da75c8fa72d55d8dfab7a1dd27443dac69e8b1a4205609cd9eb7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Apr 25 00:03:23.708288 containerd[1990]: time="2026-04-25T00:03:23.708172132Z" level=info msg="CreateContainer within sandbox \"7c06d652a475da75c8fa72d55d8dfab7a1dd27443dac69e8b1a4205609cd9eb7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"8d971b77ccf014092ec2a67138a82c51a659367cb2c066a8b57df7e5cd3b250f\"" Apr 25 00:03:23.709078 containerd[1990]: time="2026-04-25T00:03:23.709042065Z" level=info msg="StartContainer for \"8d971b77ccf014092ec2a67138a82c51a659367cb2c066a8b57df7e5cd3b250f\"" Apr 25 00:03:23.762102 systemd[1]: Started cri-containerd-8d971b77ccf014092ec2a67138a82c51a659367cb2c066a8b57df7e5cd3b250f.scope - libcontainer container 8d971b77ccf014092ec2a67138a82c51a659367cb2c066a8b57df7e5cd3b250f. 
Apr 25 00:03:23.819124 containerd[1990]: time="2026-04-25T00:03:23.819056786Z" level=info msg="StartContainer for \"8d971b77ccf014092ec2a67138a82c51a659367cb2c066a8b57df7e5cd3b250f\" returns successfully" Apr 25 00:03:28.278526 kubelet[3201]: E0425 00:03:28.278459 3201 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-158?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 25 00:03:29.920876 systemd[1]: cri-containerd-58f2a59c2b2a346c8c7e72a43431ff0df637a18a7b8f6c84f44acbe5eb26b898.scope: Deactivated successfully. Apr 25 00:03:29.951054 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58f2a59c2b2a346c8c7e72a43431ff0df637a18a7b8f6c84f44acbe5eb26b898-rootfs.mount: Deactivated successfully. Apr 25 00:03:29.966660 containerd[1990]: time="2026-04-25T00:03:29.966312769Z" level=info msg="shim disconnected" id=58f2a59c2b2a346c8c7e72a43431ff0df637a18a7b8f6c84f44acbe5eb26b898 namespace=k8s.io Apr 25 00:03:29.966660 containerd[1990]: time="2026-04-25T00:03:29.966378283Z" level=warning msg="cleaning up after shim disconnected" id=58f2a59c2b2a346c8c7e72a43431ff0df637a18a7b8f6c84f44acbe5eb26b898 namespace=k8s.io Apr 25 00:03:29.966660 containerd[1990]: time="2026-04-25T00:03:29.966391141Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 25 00:03:30.688648 kubelet[3201]: I0425 00:03:30.688605 3201 scope.go:117] "RemoveContainer" containerID="324d4836eaec78f3bf839001abb1decbe19afe1c5e51627a2befea465924a4cb" Apr 25 00:03:30.689176 kubelet[3201]: I0425 00:03:30.689101 3201 scope.go:117] "RemoveContainer" containerID="58f2a59c2b2a346c8c7e72a43431ff0df637a18a7b8f6c84f44acbe5eb26b898" Apr 25 00:03:30.696592 kubelet[3201]: E0425 00:03:30.695923 3201 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=tigera-operator pod=tigera-operator-6bf85f8dd-zsfts_tigera-operator(c70c6034-240c-48c4-a391-118af0f72156)\"" pod="tigera-operator/tigera-operator-6bf85f8dd-zsfts" podUID="c70c6034-240c-48c4-a391-118af0f72156" Apr 25 00:03:30.785859 containerd[1990]: time="2026-04-25T00:03:30.785764506Z" level=info msg="RemoveContainer for \"324d4836eaec78f3bf839001abb1decbe19afe1c5e51627a2befea465924a4cb\"" Apr 25 00:03:30.807891 containerd[1990]: time="2026-04-25T00:03:30.807818506Z" level=info msg="RemoveContainer for \"324d4836eaec78f3bf839001abb1decbe19afe1c5e51627a2befea465924a4cb\" returns successfully"