Apr 17 00:20:24.875022 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Apr 16 22:00:21 -00 2026 Apr 17 00:20:24.875047 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f73cf1d40ab12c6181d739932b2133dbe986804f7665fccb580a411e6eed38d9 Apr 17 00:20:24.875059 kernel: BIOS-provided physical RAM map: Apr 17 00:20:24.875066 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Apr 17 00:20:24.875073 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Apr 17 00:20:24.875079 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved Apr 17 00:20:24.875090 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Apr 17 00:20:24.875202 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Apr 17 00:20:24.875215 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Apr 17 00:20:24.875226 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Apr 17 00:20:24.875238 kernel: NX (Execute Disable) protection: active Apr 17 00:20:24.875252 kernel: APIC: Static calls initialized Apr 17 00:20:24.875263 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable Apr 17 00:20:24.875275 kernel: extended physical RAM map: Apr 17 00:20:24.875289 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Apr 17 00:20:24.875301 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable Apr 17 00:20:24.875316 kernel: reserve setup_data: [mem 
0x00000000768c0018-0x00000000768c8e57] usable Apr 17 00:20:24.875328 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable Apr 17 00:20:24.875340 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved Apr 17 00:20:24.875353 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Apr 17 00:20:24.875365 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Apr 17 00:20:24.875377 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable Apr 17 00:20:24.875389 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Apr 17 00:20:24.875401 kernel: efi: EFI v2.7 by EDK II Apr 17 00:20:24.875414 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x7701a018 Apr 17 00:20:24.875426 kernel: secureboot: Secure boot disabled Apr 17 00:20:24.875438 kernel: SMBIOS 2.7 present. Apr 17 00:20:24.875454 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Apr 17 00:20:24.875466 kernel: DMI: Memory slots populated: 1/1 Apr 17 00:20:24.875477 kernel: Hypervisor detected: KVM Apr 17 00:20:24.875490 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Apr 17 00:20:24.875502 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 17 00:20:24.875515 kernel: kvm-clock: using sched offset of 4923353053 cycles Apr 17 00:20:24.875528 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 17 00:20:24.875541 kernel: tsc: Detected 2499.996 MHz processor Apr 17 00:20:24.875554 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 17 00:20:24.875566 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 17 00:20:24.875579 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Apr 17 00:20:24.875595 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Apr 17 00:20:24.875609 kernel: x86/PAT: Configuration 
[0-7]: WB WC UC- UC WB WP UC- WT Apr 17 00:20:24.875627 kernel: Using GB pages for direct mapping Apr 17 00:20:24.875641 kernel: ACPI: Early table checksum verification disabled Apr 17 00:20:24.875654 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Apr 17 00:20:24.875668 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Apr 17 00:20:24.875685 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Apr 17 00:20:24.875699 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Apr 17 00:20:24.875713 kernel: ACPI: FACS 0x00000000789D0000 000040 Apr 17 00:20:24.875727 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Apr 17 00:20:24.875741 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Apr 17 00:20:24.875754 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Apr 17 00:20:24.875768 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Apr 17 00:20:24.875782 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Apr 17 00:20:24.875799 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Apr 17 00:20:24.875813 kernel: ACPI: SSDT 0x0000000078952000 0000D1 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Apr 17 00:20:24.875827 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Apr 17 00:20:24.875841 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Apr 17 00:20:24.875854 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Apr 17 00:20:24.875868 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Apr 17 00:20:24.875882 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Apr 17 00:20:24.875896 kernel: ACPI: Reserving SLIT table 
memory at [mem 0x7895a000-0x7895a06b] Apr 17 00:20:24.875909 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] Apr 17 00:20:24.875927 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Apr 17 00:20:24.875940 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Apr 17 00:20:24.875954 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Apr 17 00:20:24.875968 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x789520d0] Apr 17 00:20:24.875981 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] Apr 17 00:20:24.875995 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Apr 17 00:20:24.876009 kernel: NUMA: Initialized distance table, cnt=1 Apr 17 00:20:24.876023 kernel: NODE_DATA(0) allocated [mem 0x7a8eedc0-0x7a8f5fff] Apr 17 00:20:24.876037 kernel: Zone ranges: Apr 17 00:20:24.876054 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 17 00:20:24.876068 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Apr 17 00:20:24.876082 kernel: Normal empty Apr 17 00:20:24.876110 kernel: Device empty Apr 17 00:20:24.876124 kernel: Movable zone start for each node Apr 17 00:20:24.876138 kernel: Early memory node ranges Apr 17 00:20:24.876152 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Apr 17 00:20:24.876167 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Apr 17 00:20:24.876181 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Apr 17 00:20:24.876198 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Apr 17 00:20:24.876213 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 17 00:20:24.876227 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Apr 17 00:20:24.876240 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Apr 17 00:20:24.876253 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Apr 17 00:20:24.876266 kernel: ACPI: PM-Timer IO 
Port: 0xb008 Apr 17 00:20:24.876280 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 17 00:20:24.876293 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Apr 17 00:20:24.876307 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 17 00:20:24.876322 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 17 00:20:24.876340 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 17 00:20:24.876353 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 17 00:20:24.876366 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 17 00:20:24.876381 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 17 00:20:24.876394 kernel: TSC deadline timer available Apr 17 00:20:24.876410 kernel: CPU topo: Max. logical packages: 1 Apr 17 00:20:24.876423 kernel: CPU topo: Max. logical dies: 1 Apr 17 00:20:24.876436 kernel: CPU topo: Max. dies per package: 1 Apr 17 00:20:24.876450 kernel: CPU topo: Max. threads per core: 2 Apr 17 00:20:24.876466 kernel: CPU topo: Num. cores per package: 1 Apr 17 00:20:24.876480 kernel: CPU topo: Num. 
threads per package: 2 Apr 17 00:20:24.876493 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Apr 17 00:20:24.876507 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 17 00:20:24.876521 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Apr 17 00:20:24.876534 kernel: Booting paravirtualized kernel on KVM Apr 17 00:20:24.876548 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 17 00:20:24.876562 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Apr 17 00:20:24.876575 kernel: percpu: Embedded 60 pages/cpu s207448 r8192 d30120 u1048576 Apr 17 00:20:24.876593 kernel: pcpu-alloc: s207448 r8192 d30120 u1048576 alloc=1*2097152 Apr 17 00:20:24.876606 kernel: pcpu-alloc: [0] 0 1 Apr 17 00:20:24.876619 kernel: kvm-guest: PV spinlocks enabled Apr 17 00:20:24.876634 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 17 00:20:24.876651 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f73cf1d40ab12c6181d739932b2133dbe986804f7665fccb580a411e6eed38d9 Apr 17 00:20:24.876666 kernel: random: crng init done Apr 17 00:20:24.876680 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 17 00:20:24.876694 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Apr 17 00:20:24.876712 kernel: Fallback order for Node 0: 0 Apr 17 00:20:24.876726 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 509451 Apr 17 00:20:24.876741 kernel: Policy zone: DMA32 Apr 17 00:20:24.876766 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 17 00:20:24.876784 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 17 00:20:24.876800 kernel: Kernel/User page tables isolation: enabled Apr 17 00:20:24.876815 kernel: ftrace: allocating 40126 entries in 157 pages Apr 17 00:20:24.876829 kernel: ftrace: allocated 157 pages with 5 groups Apr 17 00:20:24.876844 kernel: Dynamic Preempt: voluntary Apr 17 00:20:24.876861 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 17 00:20:24.876879 kernel: rcu: RCU event tracing is enabled. Apr 17 00:20:24.876894 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Apr 17 00:20:24.876913 kernel: Trampoline variant of Tasks RCU enabled. Apr 17 00:20:24.876929 kernel: Rude variant of Tasks RCU enabled. Apr 17 00:20:24.876944 kernel: Tracing variant of Tasks RCU enabled. Apr 17 00:20:24.876960 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 17 00:20:24.876976 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 17 00:20:24.876996 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 17 00:20:24.877013 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 17 00:20:24.877030 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 17 00:20:24.877046 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Apr 17 00:20:24.877062 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Apr 17 00:20:24.877078 kernel: Console: colour dummy device 80x25 Apr 17 00:20:24.877109 kernel: printk: legacy console [tty0] enabled Apr 17 00:20:24.877123 kernel: printk: legacy console [ttyS0] enabled Apr 17 00:20:24.877136 kernel: ACPI: Core revision 20240827 Apr 17 00:20:24.877154 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Apr 17 00:20:24.877166 kernel: APIC: Switch to symmetric I/O mode setup Apr 17 00:20:24.877179 kernel: x2apic enabled Apr 17 00:20:24.877193 kernel: APIC: Switched APIC routing to: physical x2apic Apr 17 00:20:24.877208 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Apr 17 00:20:24.877222 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Apr 17 00:20:24.877239 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Apr 17 00:20:24.877253 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Apr 17 00:20:24.877266 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 17 00:20:24.877284 kernel: Spectre V2 : Mitigation: Retpolines Apr 17 00:20:24.877297 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 17 00:20:24.877311 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Apr 17 00:20:24.877325 kernel: RETBleed: Vulnerable Apr 17 00:20:24.877339 kernel: Speculative Store Bypass: Vulnerable Apr 17 00:20:24.877352 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Apr 17 00:20:24.877366 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 17 00:20:24.877379 kernel: GDS: Unknown: Dependent on hypervisor status Apr 17 00:20:24.877393 kernel: active return thunk: its_return_thunk Apr 17 00:20:24.877407 kernel: ITS: Mitigation: Aligned branch/return thunks Apr 17 00:20:24.877420 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 17 00:20:24.877437 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 17 00:20:24.877451 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 17 00:20:24.877464 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Apr 17 00:20:24.877477 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Apr 17 00:20:24.877491 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Apr 17 00:20:24.877504 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Apr 17 00:20:24.877519 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Apr 17 00:20:24.877532 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Apr 17 00:20:24.877547 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 17 00:20:24.877560 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Apr 17 00:20:24.877573 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Apr 17 00:20:24.877590 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Apr 17 00:20:24.877890 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Apr 17 00:20:24.877909 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Apr 17 00:20:24.877924 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Apr 17 00:20:24.877938 kernel: x86/fpu: Enabled 
xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. Apr 17 00:20:24.877953 kernel: Freeing SMP alternatives memory: 32K Apr 17 00:20:24.877968 kernel: pid_max: default: 32768 minimum: 301 Apr 17 00:20:24.877983 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Apr 17 00:20:24.877998 kernel: landlock: Up and running. Apr 17 00:20:24.878013 kernel: SELinux: Initializing. Apr 17 00:20:24.878028 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Apr 17 00:20:24.878047 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Apr 17 00:20:24.878062 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Apr 17 00:20:24.878077 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Apr 17 00:20:24.878111 kernel: signal: max sigframe size: 3632 Apr 17 00:20:24.878127 kernel: rcu: Hierarchical SRCU implementation. Apr 17 00:20:24.878143 kernel: rcu: Max phase no-delay instances is 400. Apr 17 00:20:24.878158 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Apr 17 00:20:24.878174 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 17 00:20:24.878189 kernel: smp: Bringing up secondary CPUs ... Apr 17 00:20:24.878205 kernel: smpboot: x86: Booting SMP configuration: Apr 17 00:20:24.878224 kernel: .... node #0, CPUs: #1 Apr 17 00:20:24.878238 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Apr 17 00:20:24.878253 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Apr 17 00:20:24.878267 kernel: smp: Brought up 1 node, 2 CPUs Apr 17 00:20:24.878280 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Apr 17 00:20:24.878295 kernel: Memory: 1862104K/2037804K available (14336K kernel code, 2453K rwdata, 26076K rodata, 46216K init, 2532K bss, 171128K reserved, 0K cma-reserved) Apr 17 00:20:24.878309 kernel: devtmpfs: initialized Apr 17 00:20:24.878324 kernel: x86/mm: Memory block size: 128MB Apr 17 00:20:24.878341 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Apr 17 00:20:24.878355 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 17 00:20:24.878370 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 17 00:20:24.878384 kernel: pinctrl core: initialized pinctrl subsystem Apr 17 00:20:24.878398 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 17 00:20:24.878412 kernel: audit: initializing netlink subsys (disabled) Apr 17 00:20:24.878426 kernel: audit: type=2000 audit(1776385223.058:1): state=initialized audit_enabled=0 res=1 Apr 17 00:20:24.878440 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 17 00:20:24.878454 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 17 00:20:24.878472 kernel: cpuidle: using governor menu Apr 17 00:20:24.878486 kernel: efi: Freeing EFI boot services memory: 37748K Apr 17 00:20:24.878500 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 17 00:20:24.878514 kernel: dca service started, version 1.12.1 Apr 17 00:20:24.878528 kernel: PCI: Using configuration type 1 for base access Apr 17 00:20:24.878552 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Apr 17 00:20:24.878564 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 17 00:20:24.878577 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 17 00:20:24.878590 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 17 00:20:24.878607 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 17 00:20:24.878621 kernel: ACPI: Added _OSI(Module Device) Apr 17 00:20:24.878635 kernel: ACPI: Added _OSI(Processor Device) Apr 17 00:20:24.878649 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 17 00:20:24.878663 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Apr 17 00:20:24.878676 kernel: ACPI: Interpreter enabled Apr 17 00:20:24.878690 kernel: ACPI: PM: (supports S0 S5) Apr 17 00:20:24.878704 kernel: ACPI: Using IOAPIC for interrupt routing Apr 17 00:20:24.878718 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 17 00:20:24.878734 kernel: PCI: Using E820 reservations for host bridge windows Apr 17 00:20:24.878748 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Apr 17 00:20:24.878762 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 17 00:20:24.878977 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Apr 17 00:20:24.879145 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Apr 17 00:20:24.879275 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Apr 17 00:20:24.879292 kernel: acpiphp: Slot [3] registered Apr 17 00:20:24.879311 kernel: acpiphp: Slot [4] registered Apr 17 00:20:24.879326 kernel: acpiphp: Slot [5] registered Apr 17 00:20:24.879341 kernel: acpiphp: Slot [6] registered Apr 17 00:20:24.879355 kernel: acpiphp: Slot [7] registered Apr 17 00:20:24.879369 kernel: acpiphp: Slot [8] registered Apr 17 00:20:24.879382 kernel: acpiphp: Slot [9] registered 
Apr 17 00:20:24.879396 kernel: acpiphp: Slot [10] registered Apr 17 00:20:24.879410 kernel: acpiphp: Slot [11] registered Apr 17 00:20:24.879424 kernel: acpiphp: Slot [12] registered Apr 17 00:20:24.879438 kernel: acpiphp: Slot [13] registered Apr 17 00:20:24.879454 kernel: acpiphp: Slot [14] registered Apr 17 00:20:24.879469 kernel: acpiphp: Slot [15] registered Apr 17 00:20:24.879483 kernel: acpiphp: Slot [16] registered Apr 17 00:20:24.879497 kernel: acpiphp: Slot [17] registered Apr 17 00:20:24.879510 kernel: acpiphp: Slot [18] registered Apr 17 00:20:24.879524 kernel: acpiphp: Slot [19] registered Apr 17 00:20:24.879538 kernel: acpiphp: Slot [20] registered Apr 17 00:20:24.879551 kernel: acpiphp: Slot [21] registered Apr 17 00:20:24.879565 kernel: acpiphp: Slot [22] registered Apr 17 00:20:24.879581 kernel: acpiphp: Slot [23] registered Apr 17 00:20:24.879595 kernel: acpiphp: Slot [24] registered Apr 17 00:20:24.879608 kernel: acpiphp: Slot [25] registered Apr 17 00:20:24.879622 kernel: acpiphp: Slot [26] registered Apr 17 00:20:24.879636 kernel: acpiphp: Slot [27] registered Apr 17 00:20:24.879650 kernel: acpiphp: Slot [28] registered Apr 17 00:20:24.879663 kernel: acpiphp: Slot [29] registered Apr 17 00:20:24.879676 kernel: acpiphp: Slot [30] registered Apr 17 00:20:24.879690 kernel: acpiphp: Slot [31] registered Apr 17 00:20:24.879704 kernel: PCI host bridge to bus 0000:00 Apr 17 00:20:24.879859 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 17 00:20:24.879976 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 17 00:20:24.880102 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 17 00:20:24.880733 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Apr 17 00:20:24.880885 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Apr 17 00:20:24.881201 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 17 00:20:24.881389 
kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Apr 17 00:20:24.881539 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Apr 17 00:20:24.881683 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint Apr 17 00:20:24.881819 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Apr 17 00:20:24.881954 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Apr 17 00:20:24.882106 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Apr 17 00:20:24.883262 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Apr 17 00:20:24.883414 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Apr 17 00:20:24.883552 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Apr 17 00:20:24.883688 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Apr 17 00:20:24.883838 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint Apr 17 00:20:24.883978 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref] Apr 17 00:20:24.885163 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref] Apr 17 00:20:24.885312 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 17 00:20:24.885463 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint Apr 17 00:20:24.885599 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff] Apr 17 00:20:24.885746 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint Apr 17 00:20:24.885930 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff] Apr 17 00:20:24.885952 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 17 00:20:24.885968 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 17 00:20:24.885989 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 17 00:20:24.886005 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 
11 Apr 17 00:20:24.886020 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Apr 17 00:20:24.886036 kernel: iommu: Default domain type: Translated Apr 17 00:20:24.886051 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 17 00:20:24.886066 kernel: efivars: Registered efivars operations Apr 17 00:20:24.886081 kernel: PCI: Using ACPI for IRQ routing Apr 17 00:20:24.886155 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 17 00:20:24.886171 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff] Apr 17 00:20:24.886186 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Apr 17 00:20:24.886205 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Apr 17 00:20:24.886355 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Apr 17 00:20:24.886489 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Apr 17 00:20:24.886627 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 17 00:20:24.886645 kernel: vgaarb: loaded Apr 17 00:20:24.886660 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Apr 17 00:20:24.886674 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Apr 17 00:20:24.886688 kernel: clocksource: Switched to clocksource kvm-clock Apr 17 00:20:24.886706 kernel: VFS: Disk quotas dquot_6.6.0 Apr 17 00:20:24.886720 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 17 00:20:24.886735 kernel: pnp: PnP ACPI init Apr 17 00:20:24.886748 kernel: pnp: PnP ACPI: found 5 devices Apr 17 00:20:24.886763 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 17 00:20:24.886777 kernel: NET: Registered PF_INET protocol family Apr 17 00:20:24.886791 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 17 00:20:24.886805 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Apr 17 00:20:24.886819 kernel: Table-perturb hash table entries: 
65536 (order: 6, 262144 bytes, linear) Apr 17 00:20:24.886836 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 17 00:20:24.886850 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Apr 17 00:20:24.886864 kernel: TCP: Hash tables configured (established 16384 bind 16384) Apr 17 00:20:24.886878 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Apr 17 00:20:24.886892 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Apr 17 00:20:24.886905 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 17 00:20:24.886919 kernel: NET: Registered PF_XDP protocol family Apr 17 00:20:24.887034 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 17 00:20:24.887720 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 17 00:20:24.887862 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 17 00:20:24.887983 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Apr 17 00:20:24.888164 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Apr 17 00:20:24.888309 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Apr 17 00:20:24.888330 kernel: PCI: CLS 0 bytes, default 64 Apr 17 00:20:24.888347 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Apr 17 00:20:24.888364 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Apr 17 00:20:24.888380 kernel: clocksource: Switched to clocksource tsc Apr 17 00:20:24.888400 kernel: Initialise system trusted keyrings Apr 17 00:20:24.888416 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Apr 17 00:20:24.888431 kernel: Key type asymmetric registered Apr 17 00:20:24.888446 kernel: Asymmetric key parser 'x509' registered Apr 17 00:20:24.888463 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Apr 17 00:20:24.888479 
kernel: io scheduler mq-deadline registered Apr 17 00:20:24.888495 kernel: io scheduler kyber registered Apr 17 00:20:24.888511 kernel: io scheduler bfq registered Apr 17 00:20:24.888526 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 17 00:20:24.888545 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 17 00:20:24.888562 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 17 00:20:24.888578 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 17 00:20:24.888594 kernel: i8042: Warning: Keylock active Apr 17 00:20:24.888609 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 17 00:20:24.888625 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 17 00:20:24.888789 kernel: rtc_cmos 00:00: RTC can wake from S4 Apr 17 00:20:24.888921 kernel: rtc_cmos 00:00: registered as rtc0 Apr 17 00:20:24.889053 kernel: rtc_cmos 00:00: setting system clock to 2026-04-17T00:20:24 UTC (1776385224) Apr 17 00:20:24.890252 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Apr 17 00:20:24.890280 kernel: intel_pstate: CPU model not supported Apr 17 00:20:24.890297 kernel: efifb: probing for efifb Apr 17 00:20:24.890312 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k Apr 17 00:20:24.890329 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Apr 17 00:20:24.890345 kernel: efifb: scrolling: redraw Apr 17 00:20:24.890360 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Apr 17 00:20:24.890374 kernel: Console: switching to colour frame buffer device 100x37 Apr 17 00:20:24.890394 kernel: fb0: EFI VGA frame buffer device Apr 17 00:20:24.890410 kernel: pstore: Using crash dump compression: deflate Apr 17 00:20:24.890425 kernel: pstore: Registered efi_pstore as persistent store backend Apr 17 00:20:24.890441 kernel: NET: Registered PF_INET6 protocol family Apr 17 00:20:24.890456 kernel: Segment Routing with IPv6 Apr 17 00:20:24.890470 kernel: In-situ OAM 
(IOAM) with IPv6 Apr 17 00:20:24.890486 kernel: NET: Registered PF_PACKET protocol family Apr 17 00:20:24.890503 kernel: Key type dns_resolver registered Apr 17 00:20:24.890519 kernel: IPI shorthand broadcast: enabled Apr 17 00:20:24.890549 kernel: sched_clock: Marking stable (2567002642, 146764713)->(2781628971, -67861616) Apr 17 00:20:24.890566 kernel: registered taskstats version 1 Apr 17 00:20:24.890581 kernel: Loading compiled-in X.509 certificates Apr 17 00:20:24.890599 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 92f69eed5a22c94634d5240e5e65306547d4ba83' Apr 17 00:20:24.890613 kernel: Demotion targets for Node 0: null Apr 17 00:20:24.890628 kernel: Key type .fscrypt registered Apr 17 00:20:24.890641 kernel: Key type fscrypt-provisioning registered Apr 17 00:20:24.890655 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 17 00:20:24.890670 kernel: ima: Allocated hash algorithm: sha1 Apr 17 00:20:24.890689 kernel: ima: No architecture policies found Apr 17 00:20:24.890704 kernel: clk: Disabling unused clocks Apr 17 00:20:24.890720 kernel: Warning: unable to open an initial console. Apr 17 00:20:24.890741 kernel: Freeing unused kernel image (initmem) memory: 46216K Apr 17 00:20:24.890755 kernel: Write protecting the kernel read-only data: 40960k Apr 17 00:20:24.890769 kernel: Freeing unused kernel image (rodata/data gap) memory: 548K Apr 17 00:20:24.890786 kernel: Run /init as init process Apr 17 00:20:24.890801 kernel: with arguments: Apr 17 00:20:24.890818 kernel: /init Apr 17 00:20:24.890834 kernel: with environment: Apr 17 00:20:24.890851 kernel: HOME=/ Apr 17 00:20:24.890868 kernel: TERM=linux Apr 17 00:20:24.890885 systemd[1]: Successfully made /usr/ read-only. 
Apr 17 00:20:24.890907 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 17 00:20:24.890929 systemd[1]: Detected virtualization amazon. Apr 17 00:20:24.890947 systemd[1]: Detected architecture x86-64. Apr 17 00:20:24.890964 systemd[1]: Running in initrd. Apr 17 00:20:24.890981 systemd[1]: No hostname configured, using default hostname. Apr 17 00:20:24.890999 systemd[1]: Hostname set to . Apr 17 00:20:24.891016 systemd[1]: Initializing machine ID from VM UUID. Apr 17 00:20:24.891034 systemd[1]: Queued start job for default target initrd.target. Apr 17 00:20:24.891054 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 17 00:20:24.891071 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 17 00:20:24.893125 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 17 00:20:24.893154 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 17 00:20:24.893170 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 17 00:20:24.893187 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 17 00:20:24.893205 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 17 00:20:24.893226 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 17 00:20:24.893242 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Apr 17 00:20:24.893257 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 17 00:20:24.893273 systemd[1]: Reached target paths.target - Path Units. Apr 17 00:20:24.893289 systemd[1]: Reached target slices.target - Slice Units. Apr 17 00:20:24.893306 systemd[1]: Reached target swap.target - Swaps. Apr 17 00:20:24.893323 systemd[1]: Reached target timers.target - Timer Units. Apr 17 00:20:24.893340 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 17 00:20:24.893356 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 17 00:20:24.893376 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 17 00:20:24.893393 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Apr 17 00:20:24.893411 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 17 00:20:24.893428 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 17 00:20:24.893445 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 17 00:20:24.893462 systemd[1]: Reached target sockets.target - Socket Units. Apr 17 00:20:24.893479 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 17 00:20:24.893495 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 17 00:20:24.893514 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 17 00:20:24.893532 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Apr 17 00:20:24.893549 systemd[1]: Starting systemd-fsck-usr.service... Apr 17 00:20:24.893566 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 17 00:20:24.893583 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Apr 17 00:20:24.893601 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 00:20:24.893655 systemd-journald[188]: Collecting audit messages is disabled. Apr 17 00:20:24.893697 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 17 00:20:24.893715 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 17 00:20:24.893735 systemd[1]: Finished systemd-fsck-usr.service. Apr 17 00:20:24.893753 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 17 00:20:24.893771 systemd-journald[188]: Journal started Apr 17 00:20:24.893806 systemd-journald[188]: Runtime Journal (/run/log/journal/ec2531c93d05e387c944df81f30ce3d0) is 4.7M, max 38.1M, 33.3M free. Apr 17 00:20:24.908369 systemd[1]: Started systemd-journald.service - Journal Service. Apr 17 00:20:24.912293 systemd-modules-load[190]: Inserted module 'overlay' Apr 17 00:20:24.921865 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 17 00:20:24.924506 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 00:20:24.927475 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 17 00:20:24.937300 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 00:20:24.942248 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 17 00:20:24.954379 systemd-tmpfiles[202]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Apr 17 00:20:24.966695 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 17 00:20:24.965870 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Apr 17 00:20:24.974211 kernel: Bridge firewalling registered Apr 17 00:20:24.974035 systemd-modules-load[190]: Inserted module 'br_netfilter' Apr 17 00:20:24.976148 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 17 00:20:24.978035 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 17 00:20:24.982397 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 17 00:20:24.991303 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 00:20:24.993178 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 17 00:20:25.002714 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 17 00:20:25.008775 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 17 00:20:25.009265 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 17 00:20:25.023580 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f73cf1d40ab12c6181d739932b2133dbe986804f7665fccb580a411e6eed38d9 Apr 17 00:20:25.069784 systemd-resolved[230]: Positive Trust Anchors: Apr 17 00:20:25.070200 systemd-resolved[230]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 17 00:20:25.070265 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 17 00:20:25.078476 systemd-resolved[230]: Defaulting to hostname 'linux'. Apr 17 00:20:25.080082 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 17 00:20:25.082221 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 17 00:20:25.128124 kernel: SCSI subsystem initialized Apr 17 00:20:25.138115 kernel: Loading iSCSI transport class v2.0-870. Apr 17 00:20:25.150118 kernel: iscsi: registered transport (tcp) Apr 17 00:20:25.172308 kernel: iscsi: registered transport (qla4xxx) Apr 17 00:20:25.172390 kernel: QLogic iSCSI HBA Driver Apr 17 00:20:25.191720 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 17 00:20:25.208737 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 17 00:20:25.209799 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 17 00:20:25.256703 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 17 00:20:25.258988 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Apr 17 00:20:25.313124 kernel: raid6: avx512x4 gen() 17545 MB/s Apr 17 00:20:25.331118 kernel: raid6: avx512x2 gen() 17460 MB/s Apr 17 00:20:25.349120 kernel: raid6: avx512x1 gen() 17668 MB/s Apr 17 00:20:25.367117 kernel: raid6: avx2x4 gen() 17194 MB/s Apr 17 00:20:25.385117 kernel: raid6: avx2x2 gen() 17550 MB/s Apr 17 00:20:25.403401 kernel: raid6: avx2x1 gen() 13211 MB/s Apr 17 00:20:25.403471 kernel: raid6: using algorithm avx512x1 gen() 17668 MB/s Apr 17 00:20:25.422392 kernel: raid6: .... xor() 21510 MB/s, rmw enabled Apr 17 00:20:25.422459 kernel: raid6: using avx512x2 recovery algorithm Apr 17 00:20:25.444134 kernel: xor: automatically using best checksumming function avx Apr 17 00:20:25.612142 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 17 00:20:25.619426 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 17 00:20:25.621667 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 00:20:25.648559 systemd-udevd[437]: Using default interface naming scheme 'v255'. Apr 17 00:20:25.655294 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 00:20:25.660347 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 17 00:20:25.683400 dracut-pre-trigger[444]: rd.md=0: removing MD RAID activation Apr 17 00:20:25.711332 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 17 00:20:25.713250 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 17 00:20:25.774925 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 00:20:25.778879 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Apr 17 00:20:25.863119 kernel: cryptd: max_cpu_qlen set to 1000 Apr 17 00:20:25.887228 kernel: nvme nvme0: pci function 0000:00:04.0 Apr 17 00:20:25.887487 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Apr 17 00:20:25.887511 kernel: ena 0000:00:05.0: ENA device version: 0.10 Apr 17 00:20:25.890739 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Apr 17 00:20:25.900161 kernel: nvme nvme0: 2/0/0 default/read/poll queues Apr 17 00:20:25.909497 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 17 00:20:25.909588 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Apr 17 00:20:25.909611 kernel: GPT:9289727 != 33554431 Apr 17 00:20:25.914489 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 17 00:20:25.916452 kernel: GPT:9289727 != 33554431 Apr 17 00:20:25.917933 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 17 00:20:25.917993 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Apr 17 00:20:25.919721 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 17 00:20:25.930931 kernel: AES CTR mode by8 optimization enabled Apr 17 00:20:25.931012 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:1a:7d:d9:15:f1 Apr 17 00:20:25.936164 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 00:20:25.936434 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 00:20:25.937529 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 00:20:25.940574 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 00:20:25.941875 (udev-worker)[486]: Network interface NamePolicy= disabled on kernel command line. Apr 17 00:20:25.942738 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Apr 17 00:20:25.986382 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 00:20:26.007118 kernel: nvme nvme0: using unchecked data buffer Apr 17 00:20:26.090873 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Apr 17 00:20:26.127881 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Apr 17 00:20:26.129836 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Apr 17 00:20:26.133266 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 17 00:20:26.158714 disk-uuid[670]: Primary Header is updated. Apr 17 00:20:26.158714 disk-uuid[670]: Secondary Entries is updated. Apr 17 00:20:26.158714 disk-uuid[670]: Secondary Header is updated. Apr 17 00:20:26.164209 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 17 00:20:26.169054 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 17 00:20:26.181225 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 17 00:20:26.183141 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 17 00:20:26.182157 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 17 00:20:26.184323 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 17 00:20:26.186743 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 17 00:20:26.217241 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 17 00:20:26.425973 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Apr 17 00:20:26.476380 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Apr 17 00:20:27.190119 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 17 00:20:27.191314 disk-uuid[671]: The operation has completed successfully. 
Apr 17 00:20:27.342435 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 17 00:20:27.342572 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 17 00:20:27.373625 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 17 00:20:27.388166 sh[939]: Success Apr 17 00:20:27.415238 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 17 00:20:27.415320 kernel: device-mapper: uevent: version 1.0.3 Apr 17 00:20:27.415344 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Apr 17 00:20:27.428118 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Apr 17 00:20:27.531961 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 17 00:20:27.537208 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 17 00:20:27.547830 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 17 00:20:27.571123 kernel: BTRFS: device fsid d1542dca-1171-4bcf-9aae-d85dd05fe503 devid 1 transid 32 /dev/mapper/usr (254:0) scanned by mount (962) Apr 17 00:20:27.576114 kernel: BTRFS info (device dm-0): first mount of filesystem d1542dca-1171-4bcf-9aae-d85dd05fe503 Apr 17 00:20:27.576191 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 17 00:20:27.604295 kernel: BTRFS info (device dm-0 state E): enabling ssd optimizations Apr 17 00:20:27.604375 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Apr 17 00:20:27.606826 kernel: BTRFS info (device dm-0 state E): enabling free space tree Apr 17 00:20:27.609040 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 17 00:20:27.610157 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. 
Apr 17 00:20:27.610925 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 17 00:20:27.611966 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 17 00:20:27.614722 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 17 00:20:27.659134 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (995) Apr 17 00:20:27.665368 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a Apr 17 00:20:27.665440 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Apr 17 00:20:27.683151 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 17 00:20:27.683229 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Apr 17 00:20:27.692210 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a Apr 17 00:20:27.691433 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 17 00:20:27.694726 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 17 00:20:27.731253 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 17 00:20:27.734163 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 17 00:20:27.769707 systemd-networkd[1131]: lo: Link UP Apr 17 00:20:27.769719 systemd-networkd[1131]: lo: Gained carrier Apr 17 00:20:27.771466 systemd-networkd[1131]: Enumeration completed Apr 17 00:20:27.771888 systemd-networkd[1131]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 00:20:27.771894 systemd-networkd[1131]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 17 00:20:27.773104 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 17 00:20:27.775365 systemd[1]: Reached target network.target - Network. Apr 17 00:20:27.776995 systemd-networkd[1131]: eth0: Link UP Apr 17 00:20:27.777001 systemd-networkd[1131]: eth0: Gained carrier Apr 17 00:20:27.777022 systemd-networkd[1131]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 00:20:27.787204 systemd-networkd[1131]: eth0: DHCPv4 address 172.31.17.163/20, gateway 172.31.16.1 acquired from 172.31.16.1 Apr 17 00:20:28.096719 ignition[1086]: Ignition 2.22.0 Apr 17 00:20:28.096735 ignition[1086]: Stage: fetch-offline Apr 17 00:20:28.096962 ignition[1086]: no configs at "/usr/lib/ignition/base.d" Apr 17 00:20:28.096975 ignition[1086]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 17 00:20:28.097445 ignition[1086]: Ignition finished successfully Apr 17 00:20:28.100071 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 17 00:20:28.101693 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Apr 17 00:20:28.132703 ignition[1141]: Ignition 2.22.0 Apr 17 00:20:28.132718 ignition[1141]: Stage: fetch Apr 17 00:20:28.133127 ignition[1141]: no configs at "/usr/lib/ignition/base.d" Apr 17 00:20:28.133140 ignition[1141]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 17 00:20:28.133249 ignition[1141]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 17 00:20:28.141252 ignition[1141]: PUT result: OK Apr 17 00:20:28.142904 ignition[1141]: parsed url from cmdline: "" Apr 17 00:20:28.142915 ignition[1141]: no config URL provided Apr 17 00:20:28.142924 ignition[1141]: reading system config file "/usr/lib/ignition/user.ign" Apr 17 00:20:28.142938 ignition[1141]: no config at "/usr/lib/ignition/user.ign" Apr 17 00:20:28.142966 ignition[1141]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 17 00:20:28.143514 ignition[1141]: PUT result: OK Apr 17 00:20:28.143567 ignition[1141]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Apr 17 00:20:28.144200 ignition[1141]: GET result: OK Apr 17 00:20:28.144356 ignition[1141]: parsing config with SHA512: e7e641f8972a6bb7bef8c8e396a418bdbbaba77a64390d722c20f23dc08a47b6b93e46f8cd582d5e390b496c4d5c3123c05a1293d1552658910ade9753e630aa Apr 17 00:20:28.153113 unknown[1141]: fetched base config from "system" Apr 17 00:20:28.153511 unknown[1141]: fetched base config from "system" Apr 17 00:20:28.153520 unknown[1141]: fetched user config from "aws" Apr 17 00:20:28.154035 ignition[1141]: fetch: fetch complete Apr 17 00:20:28.154042 ignition[1141]: fetch: fetch passed Apr 17 00:20:28.154124 ignition[1141]: Ignition finished successfully Apr 17 00:20:28.157208 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 17 00:20:28.158707 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Apr 17 00:20:28.193752 ignition[1147]: Ignition 2.22.0 Apr 17 00:20:28.193769 ignition[1147]: Stage: kargs Apr 17 00:20:28.194179 ignition[1147]: no configs at "/usr/lib/ignition/base.d" Apr 17 00:20:28.194192 ignition[1147]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 17 00:20:28.194301 ignition[1147]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 17 00:20:28.195217 ignition[1147]: PUT result: OK Apr 17 00:20:28.197857 ignition[1147]: kargs: kargs passed Apr 17 00:20:28.197929 ignition[1147]: Ignition finished successfully Apr 17 00:20:28.199610 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 17 00:20:28.201437 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 17 00:20:28.233596 ignition[1153]: Ignition 2.22.0 Apr 17 00:20:28.233612 ignition[1153]: Stage: disks Apr 17 00:20:28.233996 ignition[1153]: no configs at "/usr/lib/ignition/base.d" Apr 17 00:20:28.234008 ignition[1153]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 17 00:20:28.234147 ignition[1153]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 17 00:20:28.235625 ignition[1153]: PUT result: OK Apr 17 00:20:28.238390 ignition[1153]: disks: disks passed Apr 17 00:20:28.238473 ignition[1153]: Ignition finished successfully Apr 17 00:20:28.240072 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 17 00:20:28.241017 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 17 00:20:28.241700 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 17 00:20:28.242032 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 17 00:20:28.242715 systemd[1]: Reached target sysinit.target - System Initialization. Apr 17 00:20:28.243280 systemd[1]: Reached target basic.target - Basic System. Apr 17 00:20:28.244925 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Apr 17 00:20:28.295017 systemd-fsck[1162]: ROOT: clean, 15/553520 files, 52789/553472 blocks Apr 17 00:20:28.298630 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 17 00:20:28.300747 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 17 00:20:28.465119 kernel: EXT4-fs (nvme0n1p9): mounted filesystem ee420a69-62b9-42f4-84c7-ea3f2d87c569 r/w with ordered data mode. Quota mode: none. Apr 17 00:20:28.465562 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 17 00:20:28.466790 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 17 00:20:28.468792 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 17 00:20:28.471419 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 17 00:20:28.474730 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 17 00:20:28.475241 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 17 00:20:28.475280 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 17 00:20:28.483750 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 17 00:20:28.485779 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 17 00:20:28.500115 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1181) Apr 17 00:20:28.504290 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a Apr 17 00:20:28.504355 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Apr 17 00:20:28.513381 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 17 00:20:28.513461 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Apr 17 00:20:28.516341 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 17 00:20:28.760200 initrd-setup-root[1205]: cut: /sysroot/etc/passwd: No such file or directory Apr 17 00:20:28.766902 initrd-setup-root[1212]: cut: /sysroot/etc/group: No such file or directory Apr 17 00:20:28.772524 initrd-setup-root[1219]: cut: /sysroot/etc/shadow: No such file or directory Apr 17 00:20:28.777114 initrd-setup-root[1226]: cut: /sysroot/etc/gshadow: No such file or directory Apr 17 00:20:28.987368 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 17 00:20:28.989521 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 17 00:20:28.991288 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 17 00:20:29.008361 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 17 00:20:29.011222 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a Apr 17 00:20:29.048792 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Apr 17 00:20:29.050778 ignition[1293]: INFO : Ignition 2.22.0 Apr 17 00:20:29.050778 ignition[1293]: INFO : Stage: mount Apr 17 00:20:29.050778 ignition[1293]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 00:20:29.050778 ignition[1293]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 17 00:20:29.050778 ignition[1293]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 17 00:20:29.053230 ignition[1293]: INFO : PUT result: OK Apr 17 00:20:29.054354 ignition[1293]: INFO : mount: mount passed Apr 17 00:20:29.054951 ignition[1293]: INFO : Ignition finished successfully Apr 17 00:20:29.056055 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 17 00:20:29.057871 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 17 00:20:29.077048 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 17 00:20:29.104119 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1307) Apr 17 00:20:29.109296 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a Apr 17 00:20:29.109375 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Apr 17 00:20:29.117393 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 17 00:20:29.117473 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Apr 17 00:20:29.120731 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 17 00:20:29.152857 ignition[1323]: INFO : Ignition 2.22.0 Apr 17 00:20:29.152857 ignition[1323]: INFO : Stage: files Apr 17 00:20:29.154286 ignition[1323]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 00:20:29.154286 ignition[1323]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 17 00:20:29.154286 ignition[1323]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 17 00:20:29.154286 ignition[1323]: INFO : PUT result: OK Apr 17 00:20:29.156714 ignition[1323]: DEBUG : files: compiled without relabeling support, skipping Apr 17 00:20:29.157617 ignition[1323]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 17 00:20:29.157617 ignition[1323]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 17 00:20:29.161552 ignition[1323]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 17 00:20:29.162377 ignition[1323]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 17 00:20:29.162377 ignition[1323]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 17 00:20:29.162030 unknown[1323]: wrote ssh authorized keys file for user: core Apr 17 00:20:29.165516 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 17 00:20:29.166331 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 17 00:20:29.243227 systemd-networkd[1131]: eth0: Gained IPv6LL Apr 17 00:20:29.259382 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 17 00:20:29.401463 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 17 00:20:29.401463 ignition[1323]: INFO : files: createFilesystemsFiles: 
createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 17 00:20:29.404415 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 17 00:20:29.404415 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 00:20:29.404415 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 00:20:29.404415 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 00:20:29.404415 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 00:20:29.404415 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 00:20:29.404415 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 00:20:29.404415 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 00:20:29.404415 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 00:20:29.404415 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 17 00:20:29.404415 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 17 00:20:29.404415 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 17 00:20:29.404415 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Apr 17 00:20:29.943414 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 17 00:20:30.993427 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 17 00:20:30.993427 ignition[1323]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 17 00:20:30.997507 ignition[1323]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 00:20:30.997507 ignition[1323]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 00:20:30.997507 ignition[1323]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 17 00:20:30.997507 ignition[1323]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Apr 17 00:20:30.997507 ignition[1323]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Apr 17 00:20:30.997507 ignition[1323]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 00:20:30.997507 ignition[1323]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 00:20:30.997507 ignition[1323]: INFO : files: files passed
Apr 17 00:20:30.997507 ignition[1323]: INFO : Ignition finished successfully
Apr 17 00:20:31.000882 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 17 00:20:31.003195 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 17 00:20:31.006745 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 17 00:20:31.025432 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 17 00:20:31.025938 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 17 00:20:31.032161 initrd-setup-root-after-ignition[1354]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 00:20:31.034204 initrd-setup-root-after-ignition[1358]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 00:20:31.035340 initrd-setup-root-after-ignition[1354]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 00:20:31.034920 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 00:20:31.036299 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 17 00:20:31.038059 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 17 00:20:31.091722 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 17 00:20:31.091873 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 17 00:20:31.093119 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 17 00:20:31.094350 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 17 00:20:31.095284 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 17 00:20:31.096448 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 17 00:20:31.120988 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 00:20:31.123049 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 17 00:20:31.141575 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 17 00:20:31.142244 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 00:20:31.143344 systemd[1]: Stopped target timers.target - Timer Units.
Apr 17 00:20:31.144180 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 17 00:20:31.144404 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 00:20:31.145476 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 17 00:20:31.146378 systemd[1]: Stopped target basic.target - Basic System.
Apr 17 00:20:31.147217 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 17 00:20:31.148046 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 00:20:31.148814 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 17 00:20:31.149595 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Apr 17 00:20:31.150351 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 17 00:20:31.151198 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 00:20:31.151945 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 17 00:20:31.153158 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 17 00:20:31.153901 systemd[1]: Stopped target swap.target - Swaps.
Apr 17 00:20:31.154685 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 17 00:20:31.154866 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 00:20:31.155940 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 17 00:20:31.156777 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 00:20:31.157453 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 17 00:20:31.157585 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 00:20:31.158257 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 17 00:20:31.158468 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 17 00:20:31.159882 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 17 00:20:31.160130 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 00:20:31.160795 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 17 00:20:31.160942 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 17 00:20:31.162914 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 17 00:20:31.168355 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 17 00:20:31.169680 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 17 00:20:31.170648 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 00:20:31.172058 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 17 00:20:31.172895 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 00:20:31.182857 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 17 00:20:31.182986 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 17 00:20:31.201328 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 17 00:20:31.205506 ignition[1378]: INFO : Ignition 2.22.0
Apr 17 00:20:31.205506 ignition[1378]: INFO : Stage: umount
Apr 17 00:20:31.208799 ignition[1378]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 00:20:31.208799 ignition[1378]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 17 00:20:31.208799 ignition[1378]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 17 00:20:31.208799 ignition[1378]: INFO : PUT result: OK
Apr 17 00:20:31.211310 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 17 00:20:31.211477 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 17 00:20:31.213407 ignition[1378]: INFO : umount: umount passed
Apr 17 00:20:31.213407 ignition[1378]: INFO : Ignition finished successfully
Apr 17 00:20:31.215129 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 17 00:20:31.215270 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 17 00:20:31.215876 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 17 00:20:31.215938 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 17 00:20:31.216448 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 17 00:20:31.216504 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 17 00:20:31.217066 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 17 00:20:31.217219 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 17 00:20:31.217739 systemd[1]: Stopped target network.target - Network.
Apr 17 00:20:31.218339 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 17 00:20:31.218398 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 00:20:31.219081 systemd[1]: Stopped target paths.target - Path Units.
Apr 17 00:20:31.219662 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 17 00:20:31.221206 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 00:20:31.221812 systemd[1]: Stopped target slices.target - Slice Units.
Apr 17 00:20:31.222163 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 17 00:20:31.222978 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 17 00:20:31.223031 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 00:20:31.223944 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 17 00:20:31.223997 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 00:20:31.224583 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 17 00:20:31.224657 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 17 00:20:31.225255 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 17 00:20:31.225308 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 17 00:20:31.225877 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 17 00:20:31.225934 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 17 00:20:31.226748 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 17 00:20:31.227414 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 17 00:20:31.233036 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 17 00:20:31.233208 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 17 00:20:31.237057 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Apr 17 00:20:31.237462 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 17 00:20:31.237614 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 17 00:20:31.240056 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Apr 17 00:20:31.241015 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Apr 17 00:20:31.241811 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 17 00:20:31.241866 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 00:20:31.243652 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 17 00:20:31.244214 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 17 00:20:31.244282 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 00:20:31.244880 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 17 00:20:31.244938 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 17 00:20:31.248267 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 17 00:20:31.248344 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 17 00:20:31.249260 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 17 00:20:31.249329 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 00:20:31.250234 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 00:20:31.253978 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 17 00:20:31.254070 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Apr 17 00:20:31.264662 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 17 00:20:31.265897 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 00:20:31.267203 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 17 00:20:31.267299 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 17 00:20:31.268270 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 17 00:20:31.268318 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 00:20:31.269029 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 17 00:20:31.269117 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 00:20:31.270251 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 17 00:20:31.270315 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 17 00:20:31.272924 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 00:20:31.272995 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 00:20:31.275989 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 17 00:20:31.276667 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Apr 17 00:20:31.276737 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Apr 17 00:20:31.278015 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 17 00:20:31.278082 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 00:20:31.281260 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 17 00:20:31.281333 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 00:20:31.282286 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 17 00:20:31.282348 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 00:20:31.283321 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 00:20:31.283383 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 00:20:31.288424 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Apr 17 00:20:31.288517 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Apr 17 00:20:31.288569 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Apr 17 00:20:31.288623 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 17 00:20:31.289169 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 17 00:20:31.291172 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 17 00:20:31.297165 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 17 00:20:31.297313 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 17 00:20:31.298576 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 17 00:20:31.300047 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 17 00:20:31.314959 systemd[1]: Switching root.
Apr 17 00:20:31.352909 systemd-journald[188]: Journal stopped
Apr 17 00:20:34.255738 systemd-journald[188]: Received SIGTERM from PID 1 (systemd).
Apr 17 00:20:34.255820 kernel: SELinux: policy capability network_peer_controls=1
Apr 17 00:20:34.255845 kernel: SELinux: policy capability open_perms=1
Apr 17 00:20:34.255863 kernel: SELinux: policy capability extended_socket_class=1
Apr 17 00:20:34.255886 kernel: SELinux: policy capability always_check_network=0
Apr 17 00:20:34.255903 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 17 00:20:34.255925 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 17 00:20:34.255946 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 17 00:20:34.255965 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 17 00:20:34.255983 kernel: SELinux: policy capability userspace_initial_context=0
Apr 17 00:20:34.256000 kernel: audit: type=1403 audit(1776385232.969:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 17 00:20:34.256030 systemd[1]: Successfully loaded SELinux policy in 61.070ms.
Apr 17 00:20:34.256063 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.613ms.
Apr 17 00:20:34.256155 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 17 00:20:34.256177 systemd[1]: Detected virtualization amazon.
Apr 17 00:20:34.256198 systemd[1]: Detected architecture x86-64.
Apr 17 00:20:34.256216 systemd[1]: Detected first boot.
Apr 17 00:20:34.256233 systemd[1]: Initializing machine ID from VM UUID.
Apr 17 00:20:34.256250 zram_generator::config[1423]: No configuration found.
Apr 17 00:20:34.256269 kernel: Guest personality initialized and is inactive
Apr 17 00:20:34.256286 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Apr 17 00:20:34.256303 kernel: Initialized host personality
Apr 17 00:20:34.256322 kernel: NET: Registered PF_VSOCK protocol family
Apr 17 00:20:34.256342 systemd[1]: Populated /etc with preset unit settings.
Apr 17 00:20:34.256368 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Apr 17 00:20:34.256386 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 17 00:20:34.256405 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 17 00:20:34.256424 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 17 00:20:34.256442 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 17 00:20:34.256467 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 17 00:20:34.256485 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 17 00:20:34.256853 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 17 00:20:34.256886 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 17 00:20:34.256912 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 17 00:20:34.256938 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 17 00:20:34.256963 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 17 00:20:34.256988 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 00:20:34.257013 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 00:20:34.257037 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 17 00:20:34.257061 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 17 00:20:34.257108 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 17 00:20:34.257133 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 00:20:34.257152 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 17 00:20:34.257171 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 00:20:34.257192 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 00:20:34.257212 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 17 00:20:34.257232 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 17 00:20:34.257253 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 17 00:20:34.257274 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 17 00:20:34.257297 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 00:20:34.257318 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 00:20:34.257338 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 00:20:34.257358 systemd[1]: Reached target swap.target - Swaps.
Apr 17 00:20:34.257378 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 17 00:20:34.257399 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 17 00:20:34.257420 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Apr 17 00:20:34.257440 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 00:20:34.257460 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 00:20:34.257484 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 00:20:34.257506 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 17 00:20:34.257526 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 17 00:20:34.257546 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 17 00:20:34.257566 systemd[1]: Mounting media.mount - External Media Directory...
Apr 17 00:20:34.257587 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 00:20:34.257608 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 17 00:20:34.257628 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 17 00:20:34.257648 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 17 00:20:34.257673 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 17 00:20:34.257739 systemd[1]: Reached target machines.target - Containers.
Apr 17 00:20:34.257758 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 17 00:20:34.257782 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 00:20:34.257802 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 00:20:34.257820 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 17 00:20:34.257838 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 00:20:34.257856 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 00:20:34.257879 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 00:20:34.257900 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 17 00:20:34.257920 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 00:20:34.257941 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 17 00:20:34.257961 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 17 00:20:34.257985 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 17 00:20:34.258003 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 17 00:20:34.258023 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 17 00:20:34.258051 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 17 00:20:34.258073 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 00:20:34.258110 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 00:20:34.258130 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 17 00:20:34.258156 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 17 00:20:34.258175 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 17 00:20:34.258194 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 00:20:34.258217 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 17 00:20:34.258236 systemd[1]: Stopped verity-setup.service.
Apr 17 00:20:34.258254 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 00:20:34.258276 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 17 00:20:34.258297 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 17 00:20:34.258315 kernel: fuse: init (API version 7.41)
Apr 17 00:20:34.258333 systemd[1]: Mounted media.mount - External Media Directory.
Apr 17 00:20:34.258351 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 17 00:20:34.258372 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 17 00:20:34.258391 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 17 00:20:34.258409 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 00:20:34.258430 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 17 00:20:34.258451 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 17 00:20:34.258470 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 00:20:34.258491 kernel: ACPI: bus type drm_connector registered
Apr 17 00:20:34.258520 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 00:20:34.258542 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 00:20:34.258563 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 00:20:34.258585 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 00:20:34.258607 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 00:20:34.258629 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 17 00:20:34.258652 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 17 00:20:34.258672 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 00:20:34.258694 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 17 00:20:34.258716 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 17 00:20:34.258779 systemd-journald[1502]: Collecting audit messages is disabled.
Apr 17 00:20:34.258820 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Apr 17 00:20:34.258841 kernel: loop: module loaded
Apr 17 00:20:34.258862 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 00:20:34.258886 systemd-journald[1502]: Journal started
Apr 17 00:20:34.258926 systemd-journald[1502]: Runtime Journal (/run/log/journal/ec2531c93d05e387c944df81f30ce3d0) is 4.7M, max 38.1M, 33.3M free.
Apr 17 00:20:33.829899 systemd[1]: Queued start job for default target multi-user.target.
Apr 17 00:20:33.841390 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Apr 17 00:20:33.841820 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 17 00:20:34.265120 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 00:20:34.268119 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 00:20:34.281181 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 17 00:20:34.291158 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 17 00:20:34.296220 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 17 00:20:34.299279 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 17 00:20:34.301930 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 17 00:20:34.301992 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 00:20:34.306081 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Apr 17 00:20:34.316299 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 17 00:20:34.317728 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 00:20:34.323370 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 17 00:20:34.329421 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 17 00:20:34.330739 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 00:20:34.333264 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 17 00:20:34.334021 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 00:20:34.336045 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 00:20:34.343203 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 17 00:20:34.347811 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 17 00:20:34.351714 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 17 00:20:34.352575 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 17 00:20:34.367353 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 00:20:34.373490 systemd-journald[1502]: Time spent on flushing to /var/log/journal/ec2531c93d05e387c944df81f30ce3d0 is 119.741ms for 1021 entries.
Apr 17 00:20:34.373490 systemd-journald[1502]: System Journal (/var/log/journal/ec2531c93d05e387c944df81f30ce3d0) is 8M, max 195.6M, 187.6M free.
Apr 17 00:20:34.507066 systemd-journald[1502]: Received client request to flush runtime journal.
Apr 17 00:20:34.507179 kernel: loop0: detected capacity change from 0 to 110984
Apr 17 00:20:34.384930 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 17 00:20:34.388721 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 17 00:20:34.396288 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Apr 17 00:20:34.439272 systemd-tmpfiles[1558]: ACLs are not supported, ignoring.
Apr 17 00:20:34.439295 systemd-tmpfiles[1558]: ACLs are not supported, ignoring.
Apr 17 00:20:34.451968 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 00:20:34.459334 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 17 00:20:34.470532 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 00:20:34.509718 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 17 00:20:34.534606 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Apr 17 00:20:34.538224 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 17 00:20:34.557225 kernel: loop1: detected capacity change from 0 to 219192
Apr 17 00:20:34.574059 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 17 00:20:34.579241 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 00:20:34.613489 systemd-tmpfiles[1579]: ACLs are not supported, ignoring.
Apr 17 00:20:34.613895 systemd-tmpfiles[1579]: ACLs are not supported, ignoring.
Apr 17 00:20:34.619738 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 00:20:34.830231 kernel: loop2: detected capacity change from 0 to 72368
Apr 17 00:20:34.843770 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 17 00:20:34.965125 kernel: loop3: detected capacity change from 0 to 128560 Apr 17 00:20:35.094120 kernel: loop4: detected capacity change from 0 to 110984 Apr 17 00:20:35.124120 kernel: loop5: detected capacity change from 0 to 219192 Apr 17 00:20:35.173123 kernel: loop6: detected capacity change from 0 to 72368 Apr 17 00:20:35.218111 kernel: loop7: detected capacity change from 0 to 128560 Apr 17 00:20:35.220497 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 17 00:20:35.223815 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 00:20:35.241058 (sd-merge)[1586]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Apr 17 00:20:35.242315 (sd-merge)[1586]: Merged extensions into '/usr'. Apr 17 00:20:35.253823 systemd[1]: Reload requested from client PID 1557 ('systemd-sysext') (unit systemd-sysext.service)... Apr 17 00:20:35.254010 systemd[1]: Reloading... Apr 17 00:20:35.281641 systemd-udevd[1588]: Using default interface naming scheme 'v255'. Apr 17 00:20:35.360146 zram_generator::config[1614]: No configuration found. Apr 17 00:20:35.653855 (udev-worker)[1639]: Network interface NamePolicy= disabled on kernel command line. Apr 17 00:20:35.701226 ldconfig[1552]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 17 00:20:35.755122 kernel: mousedev: PS/2 mouse device common for all mice Apr 17 00:20:35.812117 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 17 00:20:35.848125 kernel: ACPI: button: Power Button [PWRF] Apr 17 00:20:35.852128 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Apr 17 00:20:35.867113 kernel: ACPI: button: Sleep Button [SLPF] Apr 17 00:20:35.906695 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 17 00:20:35.906963 systemd[1]: Reloading finished in 652 ms. 
Apr 17 00:20:35.920853 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 00:20:35.923321 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 17 00:20:35.931445 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Apr 17 00:20:35.932169 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 17 00:20:35.952757 systemd[1]: Starting ensure-sysext.service... Apr 17 00:20:35.956859 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 17 00:20:35.960333 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 17 00:20:35.981586 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 17 00:20:35.998231 systemd[1]: Reload requested from client PID 1778 ('systemctl') (unit ensure-sysext.service)... Apr 17 00:20:35.998251 systemd[1]: Reloading... Apr 17 00:20:36.055295 systemd-tmpfiles[1781]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Apr 17 00:20:36.055334 systemd-tmpfiles[1781]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Apr 17 00:20:36.055704 systemd-tmpfiles[1781]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 17 00:20:36.056151 systemd-tmpfiles[1781]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 17 00:20:36.058155 systemd-tmpfiles[1781]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 17 00:20:36.058800 systemd-tmpfiles[1781]: ACLs are not supported, ignoring. Apr 17 00:20:36.058998 systemd-tmpfiles[1781]: ACLs are not supported, ignoring. Apr 17 00:20:36.068062 systemd-tmpfiles[1781]: Detected autofs mount point /boot during canonicalization of boot. 
Apr 17 00:20:36.068342 systemd-tmpfiles[1781]: Skipping /boot Apr 17 00:20:36.084465 systemd-tmpfiles[1781]: Detected autofs mount point /boot during canonicalization of boot. Apr 17 00:20:36.084616 systemd-tmpfiles[1781]: Skipping /boot Apr 17 00:20:36.158961 zram_generator::config[1820]: No configuration found. Apr 17 00:20:36.336891 systemd-networkd[1780]: lo: Link UP Apr 17 00:20:36.337302 systemd-networkd[1780]: lo: Gained carrier Apr 17 00:20:36.339235 systemd-networkd[1780]: Enumeration completed Apr 17 00:20:36.339800 systemd-networkd[1780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 00:20:36.339929 systemd-networkd[1780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 17 00:20:36.342478 systemd-networkd[1780]: eth0: Link UP Apr 17 00:20:36.342780 systemd-networkd[1780]: eth0: Gained carrier Apr 17 00:20:36.342895 systemd-networkd[1780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 00:20:36.355271 systemd-networkd[1780]: eth0: DHCPv4 address 172.31.17.163/20, gateway 172.31.16.1 acquired from 172.31.16.1 Apr 17 00:20:36.592102 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Apr 17 00:20:36.593027 systemd[1]: Reloading finished in 594 ms. Apr 17 00:20:36.605920 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 17 00:20:36.606735 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 17 00:20:36.621496 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 17 00:20:36.671344 systemd[1]: Finished ensure-sysext.service. Apr 17 00:20:36.674468 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Apr 17 00:20:36.675727 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 17 00:20:36.682269 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 17 00:20:36.683317 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 17 00:20:36.686564 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 17 00:20:36.690558 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 17 00:20:36.693310 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 17 00:20:36.696314 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 17 00:20:36.697131 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 17 00:20:36.699953 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 17 00:20:36.700722 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 17 00:20:36.704254 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 17 00:20:36.707329 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Apr 17 00:20:36.714503 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 17 00:20:36.724411 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 17 00:20:36.727204 systemd[1]: Reached target time-set.target - System Time Set. Apr 17 00:20:36.733354 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Apr 17 00:20:36.745389 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 00:20:36.746006 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 00:20:36.748141 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 17 00:20:36.748436 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 17 00:20:36.761410 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 17 00:20:36.761700 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 17 00:20:36.768010 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 17 00:20:36.781774 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 17 00:20:36.784768 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 17 00:20:36.785008 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 17 00:20:36.786762 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 17 00:20:36.787028 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 17 00:20:36.797787 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 17 00:20:36.800330 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 17 00:20:36.821402 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Apr 17 00:20:36.858423 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Apr 17 00:20:36.859363 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 17 00:20:36.869761 augenrules[1929]: No rules Apr 17 00:20:36.870011 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 17 00:20:36.871434 systemd[1]: audit-rules.service: Deactivated successfully. Apr 17 00:20:36.871735 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 17 00:20:36.875771 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 17 00:20:36.899745 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 17 00:20:36.917997 systemd-resolved[1896]: Positive Trust Anchors: Apr 17 00:20:36.918017 systemd-resolved[1896]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 17 00:20:36.918079 systemd-resolved[1896]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 17 00:20:36.925297 systemd-resolved[1896]: Defaulting to hostname 'linux'. Apr 17 00:20:36.928562 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 17 00:20:36.929254 systemd[1]: Reached target network.target - Network. Apr 17 00:20:36.929712 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 17 00:20:36.940124 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 17 00:20:36.940839 systemd[1]: Reached target sysinit.target - System Initialization. Apr 17 00:20:36.941414 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 17 00:20:36.941839 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 17 00:20:36.942269 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Apr 17 00:20:36.942875 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 17 00:20:36.943336 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 17 00:20:36.943695 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 17 00:20:36.944038 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 17 00:20:36.944080 systemd[1]: Reached target paths.target - Path Units. Apr 17 00:20:36.944560 systemd[1]: Reached target timers.target - Timer Units. Apr 17 00:20:36.946062 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 17 00:20:36.947891 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 17 00:20:36.950736 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Apr 17 00:20:36.951319 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Apr 17 00:20:36.951713 systemd[1]: Reached target ssh-access.target - SSH Access Available. Apr 17 00:20:36.955607 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 17 00:20:36.956429 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Apr 17 00:20:36.957560 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 17 00:20:36.959026 systemd[1]: Reached target sockets.target - Socket Units. 
Apr 17 00:20:36.959451 systemd[1]: Reached target basic.target - Basic System. Apr 17 00:20:36.959870 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 17 00:20:36.959912 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 17 00:20:36.960959 systemd[1]: Starting containerd.service - containerd container runtime... Apr 17 00:20:36.963472 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 17 00:20:36.966863 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 17 00:20:36.970343 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 17 00:20:36.977722 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 17 00:20:36.982280 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 17 00:20:36.982922 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 17 00:20:36.986636 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Apr 17 00:20:36.989565 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 17 00:20:36.997930 systemd[1]: Started ntpd.service - Network Time Service. Apr 17 00:20:37.001018 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 17 00:20:37.010365 systemd[1]: Starting setup-oem.service - Setup OEM... Apr 17 00:20:37.033630 jq[1947]: false Apr 17 00:20:37.036390 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 17 00:20:37.042369 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 17 00:20:37.051411 systemd[1]: Starting systemd-logind.service - User Login Management... 
Apr 17 00:20:37.054399 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 17 00:20:37.055515 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 17 00:20:37.057871 systemd[1]: Starting update-engine.service - Update Engine... Apr 17 00:20:37.060364 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 17 00:20:37.087816 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 17 00:20:37.089680 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 17 00:20:37.089939 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 17 00:20:37.101446 google_oslogin_nss_cache[1949]: oslogin_cache_refresh[1949]: Refreshing passwd entry cache Apr 17 00:20:37.101464 oslogin_cache_refresh[1949]: Refreshing passwd entry cache Apr 17 00:20:37.123008 extend-filesystems[1948]: Found /dev/nvme0n1p6 Apr 17 00:20:37.153312 google_oslogin_nss_cache[1949]: oslogin_cache_refresh[1949]: Failure getting users, quitting Apr 17 00:20:37.153312 google_oslogin_nss_cache[1949]: oslogin_cache_refresh[1949]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Apr 17 00:20:37.153312 google_oslogin_nss_cache[1949]: oslogin_cache_refresh[1949]: Refreshing group entry cache Apr 17 00:20:37.151944 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 17 00:20:37.153605 tar[1967]: linux-amd64/LICENSE Apr 17 00:20:37.153605 tar[1967]: linux-amd64/helm Apr 17 00:20:37.153914 jq[1964]: true Apr 17 00:20:37.141509 oslogin_cache_refresh[1949]: Failure getting users, quitting Apr 17 00:20:37.141533 oslogin_cache_refresh[1949]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Apr 17 00:20:37.141589 oslogin_cache_refresh[1949]: Refreshing group entry cache Apr 17 00:20:37.154269 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 17 00:20:37.155784 oslogin_cache_refresh[1949]: Failure getting groups, quitting Apr 17 00:20:37.160325 google_oslogin_nss_cache[1949]: oslogin_cache_refresh[1949]: Failure getting groups, quitting Apr 17 00:20:37.160325 google_oslogin_nss_cache[1949]: oslogin_cache_refresh[1949]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Apr 17 00:20:37.158027 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Apr 17 00:20:37.155800 oslogin_cache_refresh[1949]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Apr 17 00:20:37.164532 ntpd[1951]: 17 Apr 00:20:37 ntpd[1951]: ntpd 4.2.8p18@1.4062-o Thu Apr 16 21:32:47 UTC 2026 (1): Starting Apr 17 00:20:37.164532 ntpd[1951]: 17 Apr 00:20:37 ntpd[1951]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 17 00:20:37.164532 ntpd[1951]: 17 Apr 00:20:37 ntpd[1951]: ---------------------------------------------------- Apr 17 00:20:37.164532 ntpd[1951]: 17 Apr 00:20:37 ntpd[1951]: ntp-4 is maintained by Network Time Foundation, Apr 17 00:20:37.164532 ntpd[1951]: 17 Apr 00:20:37 ntpd[1951]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 17 00:20:37.164532 ntpd[1951]: 17 Apr 00:20:37 ntpd[1951]: corporation. 
Support and training for ntp-4 are Apr 17 00:20:37.164532 ntpd[1951]: 17 Apr 00:20:37 ntpd[1951]: available at https://www.nwtime.org/support Apr 17 00:20:37.164532 ntpd[1951]: 17 Apr 00:20:37 ntpd[1951]: ---------------------------------------------------- Apr 17 00:20:37.163872 ntpd[1951]: ntpd 4.2.8p18@1.4062-o Thu Apr 16 21:32:47 UTC 2026 (1): Starting Apr 17 00:20:37.182529 jq[1984]: true Apr 17 00:20:37.182823 ntpd[1951]: 17 Apr 00:20:37 ntpd[1951]: proto: precision = 0.065 usec (-24) Apr 17 00:20:37.182823 ntpd[1951]: 17 Apr 00:20:37 ntpd[1951]: basedate set to 2026-04-04 Apr 17 00:20:37.182823 ntpd[1951]: 17 Apr 00:20:37 ntpd[1951]: gps base set to 2026-04-05 (week 2413) Apr 17 00:20:37.182823 ntpd[1951]: 17 Apr 00:20:37 ntpd[1951]: Listen and drop on 0 v6wildcard [::]:123 Apr 17 00:20:37.182823 ntpd[1951]: 17 Apr 00:20:37 ntpd[1951]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 17 00:20:37.182991 extend-filesystems[1948]: Found /dev/nvme0n1p9 Apr 17 00:20:37.182991 extend-filesystems[1948]: Checking size of /dev/nvme0n1p9 Apr 17 00:20:37.251944 kernel: ntpd[1951]: segfault at 24 ip 00005618bd6a5aeb sp 00007ffdc7dd8e40 error 4 in ntpd[68aeb,5618bd643000+80000] likely on CPU 1 (core 0, socket 0) Apr 17 00:20:37.251986 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9 Apr 17 00:20:37.252012 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Apr 17 00:20:37.252036 update_engine[1963]: I20260417 00:20:37.174898 1963 main.cc:92] Flatcar Update Engine starting Apr 17 00:20:37.180512 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. 
Apr 17 00:20:37.163940 ntpd[1951]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 17 00:20:37.255653 coreos-metadata[1944]: Apr 17 00:20:37.204 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 17 00:20:37.255653 coreos-metadata[1944]: Apr 17 00:20:37.207 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Apr 17 00:20:37.255653 coreos-metadata[1944]: Apr 17 00:20:37.217 INFO Fetch successful Apr 17 00:20:37.255653 coreos-metadata[1944]: Apr 17 00:20:37.217 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Apr 17 00:20:37.255653 coreos-metadata[1944]: Apr 17 00:20:37.218 INFO Fetch successful Apr 17 00:20:37.255653 coreos-metadata[1944]: Apr 17 00:20:37.218 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Apr 17 00:20:37.255653 coreos-metadata[1944]: Apr 17 00:20:37.220 INFO Fetch successful Apr 17 00:20:37.255653 coreos-metadata[1944]: Apr 17 00:20:37.220 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Apr 17 00:20:37.255653 coreos-metadata[1944]: Apr 17 00:20:37.221 INFO Fetch successful Apr 17 00:20:37.255653 coreos-metadata[1944]: Apr 17 00:20:37.221 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Apr 17 00:20:37.255653 coreos-metadata[1944]: Apr 17 00:20:37.226 INFO Fetch failed with 404: resource not found Apr 17 00:20:37.255653 coreos-metadata[1944]: Apr 17 00:20:37.226 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Apr 17 00:20:37.255653 coreos-metadata[1944]: Apr 17 00:20:37.231 INFO Fetch successful Apr 17 00:20:37.255653 coreos-metadata[1944]: Apr 17 00:20:37.231 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Apr 17 00:20:37.255653 coreos-metadata[1944]: Apr 17 00:20:37.234 INFO Fetch successful Apr 17 00:20:37.255653 coreos-metadata[1944]: Apr 17 00:20:37.234 INFO Fetching 
http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Apr 17 00:20:37.255653 coreos-metadata[1944]: Apr 17 00:20:37.234 INFO Fetch successful Apr 17 00:20:37.255653 coreos-metadata[1944]: Apr 17 00:20:37.234 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Apr 17 00:20:37.255653 coreos-metadata[1944]: Apr 17 00:20:37.237 INFO Fetch successful Apr 17 00:20:37.255653 coreos-metadata[1944]: Apr 17 00:20:37.237 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Apr 17 00:20:37.255653 coreos-metadata[1944]: Apr 17 00:20:37.239 INFO Fetch successful Apr 17 00:20:37.260852 extend-filesystems[1948]: Resized partition /dev/nvme0n1p9 Apr 17 00:20:37.274282 ntpd[1951]: 17 Apr 00:20:37 ntpd[1951]: Listen normally on 2 lo 127.0.0.1:123 Apr 17 00:20:37.274282 ntpd[1951]: 17 Apr 00:20:37 ntpd[1951]: Listen normally on 3 eth0 172.31.17.163:123 Apr 17 00:20:37.274282 ntpd[1951]: 17 Apr 00:20:37 ntpd[1951]: Listen normally on 4 lo [::1]:123 Apr 17 00:20:37.274282 ntpd[1951]: 17 Apr 00:20:37 ntpd[1951]: bind(21) AF_INET6 [fe80::41a:7dff:fed9:15f1%2]:123 flags 0x811 failed: Cannot assign requested address Apr 17 00:20:37.274282 ntpd[1951]: 17 Apr 00:20:37 ntpd[1951]: unable to create socket on eth0 (5) for [fe80::41a:7dff:fed9:15f1%2]:123 Apr 17 00:20:37.196592 systemd-coredump[1991]: Process 1951 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Apr 17 00:20:37.163952 ntpd[1951]: ---------------------------------------------------- Apr 17 00:20:37.290202 extend-filesystems[1998]: resize2fs 1.47.3 (8-Jul-2025) Apr 17 00:20:37.202064 systemd[1]: motdgen.service: Deactivated successfully. 
Apr 17 00:20:37.163962 ntpd[1951]: ntp-4 is maintained by Network Time Foundation, Apr 17 00:20:37.298137 update_engine[1963]: I20260417 00:20:37.292454 1963 update_check_scheduler.cc:74] Next update check in 3m7s Apr 17 00:20:37.203731 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 17 00:20:37.163972 ntpd[1951]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 17 00:20:37.271829 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Apr 17 00:20:37.163982 ntpd[1951]: corporation. Support and training for ntp-4 are Apr 17 00:20:37.280538 systemd[1]: Started systemd-coredump@0-1991-0.service - Process Core Dump (PID 1991/UID 0). Apr 17 00:20:37.163991 ntpd[1951]: available at https://www.nwtime.org/support Apr 17 00:20:37.282062 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 17 00:20:37.164000 ntpd[1951]: ---------------------------------------------------- Apr 17 00:20:37.289584 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 17 00:20:37.177101 ntpd[1951]: proto: precision = 0.065 usec (-24) Apr 17 00:20:37.289616 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 17 00:20:37.182556 ntpd[1951]: basedate set to 2026-04-04 Apr 17 00:20:37.292816 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 17 00:20:37.182579 ntpd[1951]: gps base set to 2026-04-05 (week 2413) Apr 17 00:20:37.292851 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Apr 17 00:20:37.182731 ntpd[1951]: Listen and drop on 0 v6wildcard [::]:123 Apr 17 00:20:37.293591 (ntainerd)[2002]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 17 00:20:37.182762 ntpd[1951]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 17 00:20:37.184279 ntpd[1951]: Listen normally on 2 lo 127.0.0.1:123 Apr 17 00:20:37.184312 ntpd[1951]: Listen normally on 3 eth0 172.31.17.163:123 Apr 17 00:20:37.184344 ntpd[1951]: Listen normally on 4 lo [::1]:123 Apr 17 00:20:37.184372 ntpd[1951]: bind(21) AF_INET6 [fe80::41a:7dff:fed9:15f1%2]:123 flags 0x811 failed: Cannot assign requested address Apr 17 00:20:37.184391 ntpd[1951]: unable to create socket on eth0 (5) for [fe80::41a:7dff:fed9:15f1%2]:123 Apr 17 00:20:37.272311 dbus-daemon[1945]: [system] SELinux support is enabled Apr 17 00:20:37.287330 dbus-daemon[1945]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1780 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Apr 17 00:20:37.328628 systemd[1]: Started update-engine.service - Update Engine. Apr 17 00:20:37.344280 dbus-daemon[1945]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 17 00:20:37.344880 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 17 00:20:37.354853 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Apr 17 00:20:37.371255 systemd[1]: Finished setup-oem.service - Setup OEM. Apr 17 00:20:37.435241 systemd-networkd[1780]: eth0: Gained IPv6LL Apr 17 00:20:37.440880 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Apr 17 00:20:37.448733 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 17 00:20:37.449693 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Apr 17 00:20:37.451411 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 17 00:20:37.473308 extend-filesystems[1998]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Apr 17 00:20:37.473308 extend-filesystems[1998]: old_desc_blocks = 1, new_desc_blocks = 2
Apr 17 00:20:37.473308 extend-filesystems[1998]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Apr 17 00:20:37.452379 systemd[1]: Reached target network-online.target - Network is Online.
Apr 17 00:20:37.530413 bash[2033]: Updated "/home/core/.ssh/authorized_keys"
Apr 17 00:20:37.530562 extend-filesystems[1948]: Resized filesystem in /dev/nvme0n1p9
Apr 17 00:20:37.471283 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Apr 17 00:20:37.476336 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 00:20:37.485505 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 17 00:20:37.495141 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 17 00:20:37.495451 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 17 00:20:37.521017 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 17 00:20:37.536408 systemd[1]: Starting sshkeys.service...
Apr 17 00:20:37.635866 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 17 00:20:37.638256 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 17 00:20:37.657760 systemd-logind[1959]: Watching system buttons on /dev/input/event2 (Power Button)
Apr 17 00:20:37.657790 systemd-logind[1959]: Watching system buttons on /dev/input/event3 (Sleep Button)
Apr 17 00:20:37.657816 systemd-logind[1959]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 17 00:20:37.661601 systemd-logind[1959]: New seat seat0.
Apr 17 00:20:37.674810 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 17 00:20:37.757649 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 17 00:20:37.878726 coreos-metadata[2048]: Apr 17 00:20:37.877 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 17 00:20:37.883848 coreos-metadata[2048]: Apr 17 00:20:37.880 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Apr 17 00:20:37.884928 coreos-metadata[2048]: Apr 17 00:20:37.884 INFO Fetch successful
Apr 17 00:20:37.885014 coreos-metadata[2048]: Apr 17 00:20:37.884 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Apr 17 00:20:37.886628 coreos-metadata[2048]: Apr 17 00:20:37.885 INFO Fetch successful
Apr 17 00:20:37.895876 unknown[2048]: wrote ssh authorized keys file for user: core
Apr 17 00:20:37.921368 amazon-ssm-agent[2036]: Initializing new seelog logger
Apr 17 00:20:37.924416 amazon-ssm-agent[2036]: New Seelog Logger Creation Complete
Apr 17 00:20:37.924416 amazon-ssm-agent[2036]: 2026/04/17 00:20:37 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 17 00:20:37.924416 amazon-ssm-agent[2036]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 17 00:20:37.924416 amazon-ssm-agent[2036]: 2026/04/17 00:20:37 processing appconfig overrides
Apr 17 00:20:37.924702 amazon-ssm-agent[2036]: 2026/04/17 00:20:37 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 17 00:20:37.925109 amazon-ssm-agent[2036]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 17 00:20:37.925109 amazon-ssm-agent[2036]: 2026/04/17 00:20:37 processing appconfig overrides
Apr 17 00:20:37.925300 amazon-ssm-agent[2036]: 2026/04/17 00:20:37 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 17 00:20:37.925349 amazon-ssm-agent[2036]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 17 00:20:37.925483 amazon-ssm-agent[2036]: 2026/04/17 00:20:37 processing appconfig overrides
Apr 17 00:20:37.925973 amazon-ssm-agent[2036]: 2026-04-17 00:20:37.9241 INFO Proxy environment variables:
Apr 17 00:20:37.929412 locksmithd[2014]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 17 00:20:37.933800 amazon-ssm-agent[2036]: 2026/04/17 00:20:37 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 17 00:20:37.933800 amazon-ssm-agent[2036]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 17 00:20:37.933800 amazon-ssm-agent[2036]: 2026/04/17 00:20:37 processing appconfig overrides
Apr 17 00:20:37.934835 systemd-coredump[2006]: Process 1951 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 1951: #0 0x00005618bd6a5aeb n/a (ntpd + 0x68aeb) #1 0x00005618bd64ecdf n/a (ntpd + 0x11cdf) #2 0x00005618bd64f575 n/a (ntpd + 0x12575) #3 0x00005618bd64ad8a n/a (ntpd + 0xdd8a) #4 0x00005618bd64c5d3 n/a (ntpd + 0xf5d3) #5 0x00005618bd654fd1 n/a (ntpd + 0x17fd1) #6 0x00005618bd645c2d n/a (ntpd + 0x8c2d) #7 0x00007f7f88f5f16c n/a (libc.so.6 + 0x2716c) #8 0x00007f7f88f5f229 __libc_start_main (libc.so.6 + 0x27229) #9 0x00005618bd645c55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64
Apr 17 00:20:37.942776 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV
Apr 17 00:20:37.942990 systemd[1]: ntpd.service: Failed with result 'core-dump'.
Apr 17 00:20:37.950541 systemd[1]: systemd-coredump@0-1991-0.service: Deactivated successfully.
Apr 17 00:20:37.988373 update-ssh-keys[2101]: Updated "/home/core/.ssh/authorized_keys"
Apr 17 00:20:37.991481 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 17 00:20:37.996020 systemd[1]: Finished sshkeys.service.
Apr 17 00:20:38.022925 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Apr 17 00:20:38.025163 dbus-daemon[1945]: [system] Successfully activated service 'org.freedesktop.hostname1'
Apr 17 00:20:38.027250 amazon-ssm-agent[2036]: 2026-04-17 00:20:37.9246 INFO https_proxy:
Apr 17 00:20:38.028972 dbus-daemon[1945]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2019 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Apr 17 00:20:38.036496 systemd[1]: Starting polkit.service - Authorization Manager...
Apr 17 00:20:38.068530 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1.
Apr 17 00:20:38.072145 systemd[1]: Started ntpd.service - Network Time Service.
Apr 17 00:20:38.134079 amazon-ssm-agent[2036]: 2026-04-17 00:20:37.9246 INFO http_proxy:
Apr 17 00:20:38.134666 ntpd[2136]: ntpd 4.2.8p18@1.4062-o Thu Apr 16 21:32:47 UTC 2026 (1): Starting
Apr 17 00:20:38.134735 ntpd[2136]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 17 00:20:38.135063 ntpd[2136]: 17 Apr 00:20:38 ntpd[2136]: ntpd 4.2.8p18@1.4062-o Thu Apr 16 21:32:47 UTC 2026 (1): Starting
Apr 17 00:20:38.135063 ntpd[2136]: 17 Apr 00:20:38 ntpd[2136]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 17 00:20:38.135063 ntpd[2136]: 17 Apr 00:20:38 ntpd[2136]: ----------------------------------------------------
Apr 17 00:20:38.135063 ntpd[2136]: 17 Apr 00:20:38 ntpd[2136]: ntp-4 is maintained by Network Time Foundation,
Apr 17 00:20:38.135063 ntpd[2136]: 17 Apr 00:20:38 ntpd[2136]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 17 00:20:38.135063 ntpd[2136]: 17 Apr 00:20:38 ntpd[2136]: corporation. Support and training for ntp-4 are
Apr 17 00:20:38.135063 ntpd[2136]: 17 Apr 00:20:38 ntpd[2136]: available at https://www.nwtime.org/support
Apr 17 00:20:38.135063 ntpd[2136]: 17 Apr 00:20:38 ntpd[2136]: ----------------------------------------------------
Apr 17 00:20:38.134746 ntpd[2136]: ----------------------------------------------------
Apr 17 00:20:38.134756 ntpd[2136]: ntp-4 is maintained by Network Time Foundation,
Apr 17 00:20:38.134765 ntpd[2136]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 17 00:20:38.134775 ntpd[2136]: corporation. Support and training for ntp-4 are
Apr 17 00:20:38.134785 ntpd[2136]: available at https://www.nwtime.org/support
Apr 17 00:20:38.134794 ntpd[2136]: ----------------------------------------------------
Apr 17 00:20:38.139616 ntpd[2136]: proto: precision = 0.064 usec (-24)
Apr 17 00:20:38.140770 ntpd[2136]: 17 Apr 00:20:38 ntpd[2136]: proto: precision = 0.064 usec (-24)
Apr 17 00:20:38.140770 ntpd[2136]: 17 Apr 00:20:38 ntpd[2136]: basedate set to 2026-04-04
Apr 17 00:20:38.140770 ntpd[2136]: 17 Apr 00:20:38 ntpd[2136]: gps base set to 2026-04-05 (week 2413)
Apr 17 00:20:38.140770 ntpd[2136]: 17 Apr 00:20:38 ntpd[2136]: Listen and drop on 0 v6wildcard [::]:123
Apr 17 00:20:38.140770 ntpd[2136]: 17 Apr 00:20:38 ntpd[2136]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 17 00:20:38.139894 ntpd[2136]: basedate set to 2026-04-04
Apr 17 00:20:38.139909 ntpd[2136]: gps base set to 2026-04-05 (week 2413)
Apr 17 00:20:38.140004 ntpd[2136]: Listen and drop on 0 v6wildcard [::]:123
Apr 17 00:20:38.140031 ntpd[2136]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 17 00:20:38.147776 ntpd[2136]: 17 Apr 00:20:38 ntpd[2136]: Listen normally on 2 lo 127.0.0.1:123
Apr 17 00:20:38.147776 ntpd[2136]: 17 Apr 00:20:38 ntpd[2136]: Listen normally on 3 eth0 172.31.17.163:123
Apr 17 00:20:38.147776 ntpd[2136]: 17 Apr 00:20:38 ntpd[2136]: Listen normally on 4 lo [::1]:123
Apr 17 00:20:38.147776 ntpd[2136]: 17 Apr 00:20:38 ntpd[2136]: Listen normally on 5 eth0 [fe80::41a:7dff:fed9:15f1%2]:123
Apr 17 00:20:38.147776 ntpd[2136]: 17 Apr 00:20:38 ntpd[2136]: Listening on routing socket on fd #22 for interface updates
Apr 17 00:20:38.142271 ntpd[2136]: Listen normally on 2 lo 127.0.0.1:123
Apr 17 00:20:38.142308 ntpd[2136]: Listen normally on 3 eth0 172.31.17.163:123
Apr 17 00:20:38.142340 ntpd[2136]: Listen normally on 4 lo [::1]:123
Apr 17 00:20:38.142366 ntpd[2136]: Listen normally on 5 eth0 [fe80::41a:7dff:fed9:15f1%2]:123
Apr 17 00:20:38.142392 ntpd[2136]: Listening on routing socket on fd #22 for interface updates
Apr 17 00:20:38.149167 ntpd[2136]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 17 00:20:38.149203 ntpd[2136]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 17 00:20:38.149358 ntpd[2136]: 17 Apr 00:20:38 ntpd[2136]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 17 00:20:38.149358 ntpd[2136]: 17 Apr 00:20:38 ntpd[2136]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 17 00:20:38.235109 amazon-ssm-agent[2036]: 2026-04-17 00:20:37.9246 INFO no_proxy:
Apr 17 00:20:38.243206 containerd[2002]: time="2026-04-17T00:20:38Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Apr 17 00:20:38.249911 containerd[2002]: time="2026-04-17T00:20:38.249862478Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Apr 17 00:20:38.290850 sshd_keygen[2000]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 17 00:20:38.337109 amazon-ssm-agent[2036]: 2026-04-17 00:20:37.9248 INFO Checking if agent identity type OnPrem can be assumed
Apr 17 00:20:38.339111 containerd[2002]: time="2026-04-17T00:20:38.338043627Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.81µs"
Apr 17 00:20:38.342465 containerd[2002]: time="2026-04-17T00:20:38.341015094Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Apr 17 00:20:38.342465 containerd[2002]: time="2026-04-17T00:20:38.341112723Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Apr 17 00:20:38.342465 containerd[2002]: time="2026-04-17T00:20:38.341296146Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Apr 17 00:20:38.342465 containerd[2002]: time="2026-04-17T00:20:38.341323271Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Apr 17 00:20:38.342465 containerd[2002]: time="2026-04-17T00:20:38.341356653Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 17 00:20:38.342465 containerd[2002]: time="2026-04-17T00:20:38.341423094Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 17 00:20:38.342465 containerd[2002]: time="2026-04-17T00:20:38.341439168Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 17 00:20:38.342465 containerd[2002]: time="2026-04-17T00:20:38.341712014Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 17 00:20:38.342465 containerd[2002]: time="2026-04-17T00:20:38.341734042Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 17 00:20:38.342465 containerd[2002]: time="2026-04-17T00:20:38.341748744Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 17 00:20:38.342465 containerd[2002]: time="2026-04-17T00:20:38.341760746Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Apr 17 00:20:38.342465 containerd[2002]: time="2026-04-17T00:20:38.341859642Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Apr 17 00:20:38.343739 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 17 00:20:38.351628 containerd[2002]: time="2026-04-17T00:20:38.351566129Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 17 00:20:38.351740 containerd[2002]: time="2026-04-17T00:20:38.351674419Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 17 00:20:38.351740 containerd[2002]: time="2026-04-17T00:20:38.351692504Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Apr 17 00:20:38.351740 containerd[2002]: time="2026-04-17T00:20:38.351730051Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Apr 17 00:20:38.352063 containerd[2002]: time="2026-04-17T00:20:38.352030966Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Apr 17 00:20:38.352832 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 17 00:20:38.355825 containerd[2002]: time="2026-04-17T00:20:38.353778417Z" level=info msg="metadata content store policy set" policy=shared
Apr 17 00:20:38.361368 containerd[2002]: time="2026-04-17T00:20:38.359798594Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Apr 17 00:20:38.361368 containerd[2002]: time="2026-04-17T00:20:38.359866755Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Apr 17 00:20:38.361368 containerd[2002]: time="2026-04-17T00:20:38.359888350Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Apr 17 00:20:38.361368 containerd[2002]: time="2026-04-17T00:20:38.359905414Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Apr 17 00:20:38.361368 containerd[2002]: time="2026-04-17T00:20:38.359924571Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Apr 17 00:20:38.361368 containerd[2002]: time="2026-04-17T00:20:38.359948259Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Apr 17 00:20:38.361368 containerd[2002]: time="2026-04-17T00:20:38.359965984Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Apr 17 00:20:38.361368 containerd[2002]: time="2026-04-17T00:20:38.359981986Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Apr 17 00:20:38.361368 containerd[2002]: time="2026-04-17T00:20:38.359997194Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Apr 17 00:20:38.361368 containerd[2002]: time="2026-04-17T00:20:38.360011182Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Apr 17 00:20:38.361368 containerd[2002]: time="2026-04-17T00:20:38.360024268Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Apr 17 00:20:38.361368 containerd[2002]: time="2026-04-17T00:20:38.360042888Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Apr 17 00:20:38.361368 containerd[2002]: time="2026-04-17T00:20:38.360242830Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Apr 17 00:20:38.361368 containerd[2002]: time="2026-04-17T00:20:38.360273964Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Apr 17 00:20:38.361899 containerd[2002]: time="2026-04-17T00:20:38.360301597Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Apr 17 00:20:38.361899 containerd[2002]: time="2026-04-17T00:20:38.360318433Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Apr 17 00:20:38.361899 containerd[2002]: time="2026-04-17T00:20:38.360336875Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Apr 17 00:20:38.361899 containerd[2002]: time="2026-04-17T00:20:38.360350574Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Apr 17 00:20:38.361899 containerd[2002]: time="2026-04-17T00:20:38.360367535Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Apr 17 00:20:38.361899 containerd[2002]: time="2026-04-17T00:20:38.360381938Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Apr 17 00:20:38.361899 containerd[2002]: time="2026-04-17T00:20:38.360419452Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Apr 17 00:20:38.361899 containerd[2002]: time="2026-04-17T00:20:38.360433852Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Apr 17 00:20:38.361899 containerd[2002]: time="2026-04-17T00:20:38.360451117Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Apr 17 00:20:38.361899 containerd[2002]: time="2026-04-17T00:20:38.360507476Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Apr 17 00:20:38.361899 containerd[2002]: time="2026-04-17T00:20:38.360524102Z" level=info msg="Start snapshots syncer"
Apr 17 00:20:38.361899 containerd[2002]: time="2026-04-17T00:20:38.360570082Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Apr 17 00:20:38.362372 containerd[2002]: time="2026-04-17T00:20:38.360972253Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Apr 17 00:20:38.362372 containerd[2002]: time="2026-04-17T00:20:38.361041470Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Apr 17 00:20:38.366130 containerd[2002]: time="2026-04-17T00:20:38.364135244Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Apr 17 00:20:38.366130 containerd[2002]: time="2026-04-17T00:20:38.364337471Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Apr 17 00:20:38.366130 containerd[2002]: time="2026-04-17T00:20:38.364371612Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Apr 17 00:20:38.366130 containerd[2002]: time="2026-04-17T00:20:38.364391066Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Apr 17 00:20:38.366130 containerd[2002]: time="2026-04-17T00:20:38.364406790Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Apr 17 00:20:38.366130 containerd[2002]: time="2026-04-17T00:20:38.364429315Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Apr 17 00:20:38.366130 containerd[2002]: time="2026-04-17T00:20:38.364445503Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Apr 17 00:20:38.366130 containerd[2002]: time="2026-04-17T00:20:38.364461077Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Apr 17 00:20:38.366130 containerd[2002]: time="2026-04-17T00:20:38.364507388Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Apr 17 00:20:38.366130 containerd[2002]: time="2026-04-17T00:20:38.364525987Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Apr 17 00:20:38.366130 containerd[2002]: time="2026-04-17T00:20:38.364542504Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Apr 17 00:20:38.367808 containerd[2002]: time="2026-04-17T00:20:38.367287328Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Apr 17 00:20:38.367808 containerd[2002]: time="2026-04-17T00:20:38.367383782Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Apr 17 00:20:38.367808 containerd[2002]: time="2026-04-17T00:20:38.367399402Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Apr 17 00:20:38.367808 containerd[2002]: time="2026-04-17T00:20:38.367413310Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Apr 17 00:20:38.367808 containerd[2002]: time="2026-04-17T00:20:38.367425292Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Apr 17 00:20:38.367808 containerd[2002]: time="2026-04-17T00:20:38.367438995Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Apr 17 00:20:38.367808 containerd[2002]: time="2026-04-17T00:20:38.367462324Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Apr 17 00:20:38.367808 containerd[2002]: time="2026-04-17T00:20:38.367483255Z" level=info msg="runtime interface created"
Apr 17 00:20:38.367808 containerd[2002]: time="2026-04-17T00:20:38.367492382Z" level=info msg="created NRI interface"
Apr 17 00:20:38.367808 containerd[2002]: time="2026-04-17T00:20:38.367511265Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Apr 17 00:20:38.367808 containerd[2002]: time="2026-04-17T00:20:38.367531419Z" level=info msg="Connect containerd service"
Apr 17 00:20:38.367808 containerd[2002]: time="2026-04-17T00:20:38.367567721Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 17 00:20:38.373148 containerd[2002]: time="2026-04-17T00:20:38.372292893Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 17 00:20:38.429598 systemd[1]: issuegen.service: Deactivated successfully.
Apr 17 00:20:38.429888 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 17 00:20:38.451996 amazon-ssm-agent[2036]: 2026-04-17 00:20:37.9250 INFO Checking if agent identity type EC2 can be assumed
Apr 17 00:20:38.452993 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 17 00:20:38.456845 polkitd[2131]: Started polkitd version 126
Apr 17 00:20:38.531001 polkitd[2131]: Loading rules from directory /etc/polkit-1/rules.d
Apr 17 00:20:38.531554 polkitd[2131]: Loading rules from directory /run/polkit-1/rules.d
Apr 17 00:20:38.531612 polkitd[2131]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Apr 17 00:20:38.532024 polkitd[2131]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Apr 17 00:20:38.532054 polkitd[2131]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Apr 17 00:20:38.537970 polkitd[2131]: Loading rules from directory /usr/share/polkit-1/rules.d
Apr 17 00:20:38.539243 polkitd[2131]: Finished loading, compiling and executing 2 rules
Apr 17 00:20:38.540026 systemd[1]: Started polkit.service - Authorization Manager.
Apr 17 00:20:38.546313 dbus-daemon[1945]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Apr 17 00:20:38.548633 amazon-ssm-agent[2036]: 2026-04-17 00:20:38.1338 INFO Agent will take identity from EC2
Apr 17 00:20:38.549146 polkitd[2131]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Apr 17 00:20:38.569468 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 17 00:20:38.578822 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 17 00:20:38.584409 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 17 00:20:38.585430 systemd[1]: Reached target getty.target - Login Prompts.
Apr 17 00:20:38.649980 amazon-ssm-agent[2036]: 2026-04-17 00:20:38.1379 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0
Apr 17 00:20:38.687559 systemd-resolved[1896]: System hostname changed to 'ip-172-31-17-163'.
Apr 17 00:20:38.688007 systemd-hostnamed[2019]: Hostname set to <ip-172-31-17-163> (transient)
Apr 17 00:20:38.751110 amazon-ssm-agent[2036]: 2026-04-17 00:20:38.1379 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Apr 17 00:20:38.759122 tar[1967]: linux-amd64/README.md
Apr 17 00:20:38.783310 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 17 00:20:38.808896 containerd[2002]: time="2026-04-17T00:20:38.808790945Z" level=info msg="Start subscribing containerd event"
Apr 17 00:20:38.809099 containerd[2002]: time="2026-04-17T00:20:38.809054935Z" level=info msg="Start recovering state"
Apr 17 00:20:38.809457 containerd[2002]: time="2026-04-17T00:20:38.809284225Z" level=info msg="Start event monitor"
Apr 17 00:20:38.809457 containerd[2002]: time="2026-04-17T00:20:38.809305492Z" level=info msg="Start cni network conf syncer for default"
Apr 17 00:20:38.809457 containerd[2002]: time="2026-04-17T00:20:38.809315647Z" level=info msg="Start streaming server"
Apr 17 00:20:38.809457 containerd[2002]: time="2026-04-17T00:20:38.809327095Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Apr 17 00:20:38.809457 containerd[2002]: time="2026-04-17T00:20:38.809336681Z" level=info msg="runtime interface starting up..."
Apr 17 00:20:38.809457 containerd[2002]: time="2026-04-17T00:20:38.809345079Z" level=info msg="starting plugins..."
Apr 17 00:20:38.809457 containerd[2002]: time="2026-04-17T00:20:38.809359038Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Apr 17 00:20:38.810266 containerd[2002]: time="2026-04-17T00:20:38.810239152Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 17 00:20:38.810456 containerd[2002]: time="2026-04-17T00:20:38.810349435Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 17 00:20:38.810954 systemd[1]: Started containerd.service - containerd container runtime.
Apr 17 00:20:38.812499 containerd[2002]: time="2026-04-17T00:20:38.812474323Z" level=info msg="containerd successfully booted in 0.569792s"
Apr 17 00:20:38.848449 amazon-ssm-agent[2036]: 2026-04-17 00:20:38.1379 INFO [amazon-ssm-agent] Starting Core Agent
Apr 17 00:20:38.948287 amazon-ssm-agent[2036]: 2026-04-17 00:20:38.1379 INFO [amazon-ssm-agent] Registrar detected. Attempting registration
Apr 17 00:20:38.949487 amazon-ssm-agent[2036]: 2026/04/17 00:20:38 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 17 00:20:38.949487 amazon-ssm-agent[2036]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 17 00:20:38.949642 amazon-ssm-agent[2036]: 2026/04/17 00:20:38 processing appconfig overrides
Apr 17 00:20:38.974415 amazon-ssm-agent[2036]: 2026-04-17 00:20:38.1379 INFO [Registrar] Starting registrar module
Apr 17 00:20:38.974415 amazon-ssm-agent[2036]: 2026-04-17 00:20:38.1450 INFO [EC2Identity] Checking disk for registration info
Apr 17 00:20:38.974415 amazon-ssm-agent[2036]: 2026-04-17 00:20:38.1450 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration
Apr 17 00:20:38.974415 amazon-ssm-agent[2036]: 2026-04-17 00:20:38.1450 INFO [EC2Identity] Generating registration keypair
Apr 17 00:20:38.974415 amazon-ssm-agent[2036]: 2026-04-17 00:20:38.9141 INFO [EC2Identity] Checking write access before registering
Apr 17 00:20:38.974767 amazon-ssm-agent[2036]: 2026-04-17 00:20:38.9144 INFO [EC2Identity] Registering EC2 instance with Systems Manager
Apr 17 00:20:38.974767 amazon-ssm-agent[2036]: 2026-04-17 00:20:38.9492 INFO [EC2Identity] EC2 registration was successful.
Apr 17 00:20:38.974767 amazon-ssm-agent[2036]: 2026-04-17 00:20:38.9493 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup.
Apr 17 00:20:38.974767 amazon-ssm-agent[2036]: 2026-04-17 00:20:38.9494 INFO [CredentialRefresher] credentialRefresher has started
Apr 17 00:20:38.974767 amazon-ssm-agent[2036]: 2026-04-17 00:20:38.9494 INFO [CredentialRefresher] Starting credentials refresher loop
Apr 17 00:20:38.974767 amazon-ssm-agent[2036]: 2026-04-17 00:20:38.9741 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Apr 17 00:20:38.974767 amazon-ssm-agent[2036]: 2026-04-17 00:20:38.9743 INFO [CredentialRefresher] Credentials ready
Apr 17 00:20:39.048436 amazon-ssm-agent[2036]: 2026-04-17 00:20:38.9746 INFO [CredentialRefresher] Next credential rotation will be in 29.999991976033332 minutes
Apr 17 00:20:39.910021 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 17 00:20:39.912032 systemd[1]: Started sshd@0-172.31.17.163:22-50.85.169.122:47108.service - OpenSSH per-connection server daemon (50.85.169.122:47108).
Apr 17 00:20:39.986866 amazon-ssm-agent[2036]: 2026-04-17 00:20:39.9864 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Apr 17 00:20:40.088183 amazon-ssm-agent[2036]: 2026-04-17 00:20:39.9885 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2228) started
Apr 17 00:20:40.189474 amazon-ssm-agent[2036]: 2026-04-17 00:20:39.9885 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Apr 17 00:20:40.815627 sshd[2223]: Accepted publickey for core from 50.85.169.122 port 47108 ssh2: RSA SHA256:Wn1bWdRXva+ZTDpuZ5i38vIIX/QMobuurL9Av6c2ILM
Apr 17 00:20:40.817884 sshd-session[2223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 00:20:40.835356 systemd-logind[1959]: New session 1 of user core.
Apr 17 00:20:40.836922 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 17 00:20:40.839858 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 17 00:20:40.868970 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 17 00:20:40.872858 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 17 00:20:40.886332 (systemd)[2242]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 17 00:20:40.889066 systemd-logind[1959]: New session c1 of user core.
Apr 17 00:20:41.052573 systemd[2242]: Queued start job for default target default.target.
Apr 17 00:20:41.059522 systemd[2242]: Created slice app.slice - User Application Slice.
Apr 17 00:20:41.059565 systemd[2242]: Reached target paths.target - Paths.
Apr 17 00:20:41.059630 systemd[2242]: Reached target timers.target - Timers.
Apr 17 00:20:41.061189 systemd[2242]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 17 00:20:41.073928 systemd[2242]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 17 00:20:41.074021 systemd[2242]: Reached target sockets.target - Sockets.
Apr 17 00:20:41.074351 systemd[2242]: Reached target basic.target - Basic System.
Apr 17 00:20:41.074424 systemd[2242]: Reached target default.target - Main User Target.
Apr 17 00:20:41.074463 systemd[2242]: Startup finished in 177ms.
Apr 17 00:20:41.074760 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 17 00:20:41.083388 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 17 00:20:41.599492 systemd[1]: Started sshd@1-172.31.17.163:22-50.85.169.122:47114.service - OpenSSH per-connection server daemon (50.85.169.122:47114).
Apr 17 00:20:42.203659 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 00:20:42.206425 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 17 00:20:42.208080 systemd[1]: Startup finished in 2.627s (kernel) + 8.332s (initrd) + 9.297s (userspace) = 20.257s.
Apr 17 00:20:42.218733 (kubelet)[2261]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 00:20:42.469162 sshd[2253]: Accepted publickey for core from 50.85.169.122 port 47114 ssh2: RSA SHA256:Wn1bWdRXva+ZTDpuZ5i38vIIX/QMobuurL9Av6c2ILM
Apr 17 00:20:42.471860 sshd-session[2253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 00:20:42.478696 systemd-logind[1959]: New session 2 of user core.
Apr 17 00:20:42.485376 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 17 00:20:42.969673 sshd[2266]: Connection closed by 50.85.169.122 port 47114
Apr 17 00:20:42.970727 sshd-session[2253]: pam_unix(sshd:session): session closed for user core
Apr 17 00:20:42.976887 systemd[1]: sshd@1-172.31.17.163:22-50.85.169.122:47114.service: Deactivated successfully.
Apr 17 00:20:42.978978 systemd[1]: session-2.scope: Deactivated successfully.
Apr 17 00:20:42.980199 systemd-logind[1959]: Session 2 logged out. Waiting for processes to exit.
Apr 17 00:20:42.983379 systemd-logind[1959]: Removed session 2.
Apr 17 00:20:43.138707 systemd[1]: Started sshd@2-172.31.17.163:22-50.85.169.122:47128.service - OpenSSH per-connection server daemon (50.85.169.122:47128).
Apr 17 00:20:43.858927 kubelet[2261]: E0417 00:20:43.858854 2261 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 00:20:43.861639 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 00:20:43.861861 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 00:20:43.862602 systemd[1]: kubelet.service: Consumed 1.006s CPU time, 256.5M memory peak.
Apr 17 00:20:43.989503 sshd[2276]: Accepted publickey for core from 50.85.169.122 port 47128 ssh2: RSA SHA256:Wn1bWdRXva+ZTDpuZ5i38vIIX/QMobuurL9Av6c2ILM
Apr 17 00:20:43.990978 sshd-session[2276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 00:20:43.996153 systemd-logind[1959]: New session 3 of user core.
Apr 17 00:20:44.003322 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 17 00:20:44.472889 sshd[2281]: Connection closed by 50.85.169.122 port 47128
Apr 17 00:20:44.474930 sshd-session[2276]: pam_unix(sshd:session): session closed for user core
Apr 17 00:20:44.479422 systemd-logind[1959]: Session 3 logged out. Waiting for processes to exit.
Apr 17 00:20:44.479755 systemd[1]: sshd@2-172.31.17.163:22-50.85.169.122:47128.service: Deactivated successfully.
Apr 17 00:20:44.481875 systemd[1]: session-3.scope: Deactivated successfully.
Apr 17 00:20:44.483912 systemd-logind[1959]: Removed session 3.
Apr 17 00:20:44.654341 systemd[1]: Started sshd@3-172.31.17.163:22-50.85.169.122:47130.service - OpenSSH per-connection server daemon (50.85.169.122:47130).
Apr 17 00:20:46.407109 systemd-resolved[1896]: Clock change detected. Flushing caches.
Apr 17 00:20:46.801140 sshd[2287]: Accepted publickey for core from 50.85.169.122 port 47130 ssh2: RSA SHA256:Wn1bWdRXva+ZTDpuZ5i38vIIX/QMobuurL9Av6c2ILM
Apr 17 00:20:46.802606 sshd-session[2287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 00:20:46.807784 systemd-logind[1959]: New session 4 of user core.
Apr 17 00:20:46.817008 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 17 00:20:47.303193 sshd[2290]: Connection closed by 50.85.169.122 port 47130
Apr 17 00:20:47.303966 sshd-session[2287]: pam_unix(sshd:session): session closed for user core
Apr 17 00:20:47.308282 systemd[1]: sshd@3-172.31.17.163:22-50.85.169.122:47130.service: Deactivated successfully.
Apr 17 00:20:47.310614 systemd[1]: session-4.scope: Deactivated successfully. Apr 17 00:20:47.311804 systemd-logind[1959]: Session 4 logged out. Waiting for processes to exit. Apr 17 00:20:47.313215 systemd-logind[1959]: Removed session 4. Apr 17 00:20:47.477030 systemd[1]: Started sshd@4-172.31.17.163:22-50.85.169.122:47146.service - OpenSSH per-connection server daemon (50.85.169.122:47146). Apr 17 00:20:48.343756 sshd[2296]: Accepted publickey for core from 50.85.169.122 port 47146 ssh2: RSA SHA256:Wn1bWdRXva+ZTDpuZ5i38vIIX/QMobuurL9Av6c2ILM Apr 17 00:20:48.345172 sshd-session[2296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 00:20:48.351260 systemd-logind[1959]: New session 5 of user core. Apr 17 00:20:48.358012 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 17 00:20:48.692025 sudo[2300]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 17 00:20:48.692614 sudo[2300]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 00:20:48.708121 sudo[2300]: pam_unix(sudo:session): session closed for user root Apr 17 00:20:48.872869 sshd[2299]: Connection closed by 50.85.169.122 port 47146 Apr 17 00:20:48.874059 sshd-session[2296]: pam_unix(sshd:session): session closed for user core Apr 17 00:20:48.879240 systemd-logind[1959]: Session 5 logged out. Waiting for processes to exit. Apr 17 00:20:48.879483 systemd[1]: sshd@4-172.31.17.163:22-50.85.169.122:47146.service: Deactivated successfully. Apr 17 00:20:48.881583 systemd[1]: session-5.scope: Deactivated successfully. Apr 17 00:20:48.883236 systemd-logind[1959]: Removed session 5. Apr 17 00:20:49.042501 systemd[1]: Started sshd@5-172.31.17.163:22-50.85.169.122:47152.service - OpenSSH per-connection server daemon (50.85.169.122:47152). 
Apr 17 00:20:49.893601 sshd[2306]: Accepted publickey for core from 50.85.169.122 port 47152 ssh2: RSA SHA256:Wn1bWdRXva+ZTDpuZ5i38vIIX/QMobuurL9Av6c2ILM Apr 17 00:20:49.894446 sshd-session[2306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 00:20:49.900459 systemd-logind[1959]: New session 6 of user core. Apr 17 00:20:49.907087 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 17 00:20:50.221770 sudo[2311]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 17 00:20:50.222146 sudo[2311]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 00:20:50.226301 sudo[2311]: pam_unix(sudo:session): session closed for user root Apr 17 00:20:50.232011 sudo[2310]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 17 00:20:50.232377 sudo[2310]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 00:20:50.243105 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 17 00:20:50.282657 augenrules[2333]: No rules Apr 17 00:20:50.283994 systemd[1]: audit-rules.service: Deactivated successfully. Apr 17 00:20:50.284256 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 17 00:20:50.285513 sudo[2310]: pam_unix(sudo:session): session closed for user root Apr 17 00:20:50.447086 sshd[2309]: Connection closed by 50.85.169.122 port 47152 Apr 17 00:20:50.447963 sshd-session[2306]: pam_unix(sshd:session): session closed for user core Apr 17 00:20:50.452932 systemd[1]: sshd@5-172.31.17.163:22-50.85.169.122:47152.service: Deactivated successfully. Apr 17 00:20:50.455335 systemd[1]: session-6.scope: Deactivated successfully. Apr 17 00:20:50.456695 systemd-logind[1959]: Session 6 logged out. Waiting for processes to exit. Apr 17 00:20:50.458290 systemd-logind[1959]: Removed session 6. 
Apr 17 00:20:50.626240 systemd[1]: Started sshd@6-172.31.17.163:22-50.85.169.122:33962.service - OpenSSH per-connection server daemon (50.85.169.122:33962). Apr 17 00:20:51.493766 sshd[2342]: Accepted publickey for core from 50.85.169.122 port 33962 ssh2: RSA SHA256:Wn1bWdRXva+ZTDpuZ5i38vIIX/QMobuurL9Av6c2ILM Apr 17 00:20:51.494937 sshd-session[2342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 00:20:51.500649 systemd-logind[1959]: New session 7 of user core. Apr 17 00:20:51.506933 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 17 00:20:51.827741 sudo[2346]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 17 00:20:51.828096 sudo[2346]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 00:20:52.236456 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 17 00:20:52.248300 (dockerd)[2365]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 17 00:20:52.552860 dockerd[2365]: time="2026-04-17T00:20:52.552440749Z" level=info msg="Starting up" Apr 17 00:20:52.553991 dockerd[2365]: time="2026-04-17T00:20:52.553869599Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Apr 17 00:20:52.568632 dockerd[2365]: time="2026-04-17T00:20:52.568590942Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Apr 17 00:20:52.587903 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1310674259-merged.mount: Deactivated successfully. Apr 17 00:20:52.624774 dockerd[2365]: time="2026-04-17T00:20:52.624700497Z" level=info msg="Loading containers: start." Apr 17 00:20:52.638741 kernel: Initializing XFRM netlink socket Apr 17 00:20:52.869798 (udev-worker)[2385]: Network interface NamePolicy= disabled on kernel command line. 
Apr 17 00:20:52.917985 systemd-networkd[1780]: docker0: Link UP Apr 17 00:20:52.924430 dockerd[2365]: time="2026-04-17T00:20:52.924360990Z" level=info msg="Loading containers: done." Apr 17 00:20:52.939678 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1323539384-merged.mount: Deactivated successfully. Apr 17 00:20:52.945476 dockerd[2365]: time="2026-04-17T00:20:52.945427222Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 17 00:20:52.945670 dockerd[2365]: time="2026-04-17T00:20:52.945551407Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Apr 17 00:20:52.945670 dockerd[2365]: time="2026-04-17T00:20:52.945665424Z" level=info msg="Initializing buildkit" Apr 17 00:20:52.979995 dockerd[2365]: time="2026-04-17T00:20:52.979935991Z" level=info msg="Completed buildkit initialization" Apr 17 00:20:52.989076 dockerd[2365]: time="2026-04-17T00:20:52.989023534Z" level=info msg="Daemon has completed initialization" Apr 17 00:20:52.989076 dockerd[2365]: time="2026-04-17T00:20:52.989125694Z" level=info msg="API listen on /run/docker.sock" Apr 17 00:20:52.989992 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 17 00:20:54.851674 containerd[2002]: time="2026-04-17T00:20:54.851630215Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\"" Apr 17 00:20:55.384107 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 17 00:20:55.386626 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 00:20:55.451509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2102527195.mount: Deactivated successfully. Apr 17 00:20:55.687450 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 17 00:20:55.698438 (kubelet)[2597]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 00:20:55.779282 kubelet[2597]: E0417 00:20:55.777920 2597 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 00:20:55.790406 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 00:20:55.790613 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 00:20:55.791327 systemd[1]: kubelet.service: Consumed 210ms CPU time, 110.3M memory peak. Apr 17 00:20:56.854829 containerd[2002]: time="2026-04-17T00:20:56.854773669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:20:56.855970 containerd[2002]: time="2026-04-17T00:20:56.855922123Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=27100514" Apr 17 00:20:56.857632 containerd[2002]: time="2026-04-17T00:20:56.857573820Z" level=info msg="ImageCreate event name:\"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:20:56.860763 containerd[2002]: time="2026-04-17T00:20:56.860695505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:20:56.861930 containerd[2002]: time="2026-04-17T00:20:56.861715986Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id 
\"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"27097113\" in 2.010040637s" Apr 17 00:20:56.861930 containerd[2002]: time="2026-04-17T00:20:56.861772864Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\"" Apr 17 00:20:56.862610 containerd[2002]: time="2026-04-17T00:20:56.862587147Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\"" Apr 17 00:20:58.457168 containerd[2002]: time="2026-04-17T00:20:58.457102561Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:20:58.464011 containerd[2002]: time="2026-04-17T00:20:58.463956497Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=0, bytes read=21252738" Apr 17 00:20:58.471906 containerd[2002]: time="2026-04-17T00:20:58.471392440Z" level=info msg="ImageCreate event name:\"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:20:58.481123 containerd[2002]: time="2026-04-17T00:20:58.481073211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:20:58.482440 containerd[2002]: time="2026-04-17T00:20:58.482402231Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"22819085\" in 1.619574095s" Apr 17 00:20:58.482592 containerd[2002]: time="2026-04-17T00:20:58.482574814Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image reference \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\"" Apr 17 00:20:58.483099 containerd[2002]: time="2026-04-17T00:20:58.483075338Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\"" Apr 17 00:20:59.717647 containerd[2002]: time="2026-04-17T00:20:59.717595546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:20:59.718957 containerd[2002]: time="2026-04-17T00:20:59.718908447Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=0, bytes read=15810891" Apr 17 00:20:59.720611 containerd[2002]: time="2026-04-17T00:20:59.720511440Z" level=info msg="ImageCreate event name:\"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:20:59.723682 containerd[2002]: time="2026-04-17T00:20:59.723642467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:20:59.724795 containerd[2002]: time="2026-04-17T00:20:59.724646937Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"17377256\" in 1.241341631s" Apr 17 00:20:59.724795 
containerd[2002]: time="2026-04-17T00:20:59.724686031Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\"" Apr 17 00:20:59.725508 containerd[2002]: time="2026-04-17T00:20:59.725357967Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\"" Apr 17 00:21:00.808851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount272063768.mount: Deactivated successfully. Apr 17 00:21:01.197969 containerd[2002]: time="2026-04-17T00:21:01.197806179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:21:01.199579 containerd[2002]: time="2026-04-17T00:21:01.199528301Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=0, bytes read=25972954" Apr 17 00:21:01.201173 containerd[2002]: time="2026-04-17T00:21:01.201111210Z" level=info msg="ImageCreate event name:\"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:21:01.205320 containerd[2002]: time="2026-04-17T00:21:01.204608396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:21:01.205320 containerd[2002]: time="2026-04-17T00:21:01.205176686Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.7\" with image id \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\", repo tag \"registry.k8s.io/kube-proxy:v1.34.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"25971973\" in 1.479781601s" Apr 17 00:21:01.205320 containerd[2002]: time="2026-04-17T00:21:01.205211303Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\"" Apr 17 00:21:01.205672 containerd[2002]: time="2026-04-17T00:21:01.205641560Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Apr 17 00:21:01.746521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2386568326.mount: Deactivated successfully. Apr 17 00:21:03.109948 containerd[2002]: time="2026-04-17T00:21:03.109031758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:21:03.110559 containerd[2002]: time="2026-04-17T00:21:03.110296080Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Apr 17 00:21:03.119552 containerd[2002]: time="2026-04-17T00:21:03.119259360Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:21:03.127257 containerd[2002]: time="2026-04-17T00:21:03.127170887Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:21:03.128789 containerd[2002]: time="2026-04-17T00:21:03.128478754Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.922682555s" Apr 17 00:21:03.128789 containerd[2002]: time="2026-04-17T00:21:03.128521643Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference 
\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Apr 17 00:21:03.130178 containerd[2002]: time="2026-04-17T00:21:03.130104518Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 17 00:21:03.627805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2103522771.mount: Deactivated successfully. Apr 17 00:21:03.638436 containerd[2002]: time="2026-04-17T00:21:03.638377335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:21:03.640294 containerd[2002]: time="2026-04-17T00:21:03.640245933Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Apr 17 00:21:03.642867 containerd[2002]: time="2026-04-17T00:21:03.642781730Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:21:03.646407 containerd[2002]: time="2026-04-17T00:21:03.646337218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:21:03.647257 containerd[2002]: time="2026-04-17T00:21:03.647024434Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 516.881657ms" Apr 17 00:21:03.647257 containerd[2002]: time="2026-04-17T00:21:03.647059511Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 17 00:21:03.647781 containerd[2002]: 
time="2026-04-17T00:21:03.647758546Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Apr 17 00:21:04.177135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1143139755.mount: Deactivated successfully. Apr 17 00:21:05.272066 containerd[2002]: time="2026-04-17T00:21:05.272009854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:21:05.275018 containerd[2002]: time="2026-04-17T00:21:05.274969668Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22874817" Apr 17 00:21:05.279340 containerd[2002]: time="2026-04-17T00:21:05.278714241Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:21:05.284752 containerd[2002]: time="2026-04-17T00:21:05.283970871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:21:05.284893 containerd[2002]: time="2026-04-17T00:21:05.284761337Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.63696931s" Apr 17 00:21:05.284893 containerd[2002]: time="2026-04-17T00:21:05.284797708Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Apr 17 00:21:06.040531 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Apr 17 00:21:06.045975 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 00:21:06.346913 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 00:21:06.358473 (kubelet)[2811]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 00:21:06.432016 kubelet[2811]: E0417 00:21:06.431968 2811 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 00:21:06.434359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 00:21:06.434541 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 00:21:06.435209 systemd[1]: kubelet.service: Consumed 214ms CPU time, 109.3M memory peak. Apr 17 00:21:08.000445 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 00:21:08.000708 systemd[1]: kubelet.service: Consumed 214ms CPU time, 109.3M memory peak. Apr 17 00:21:08.003714 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 00:21:08.040406 systemd[1]: Reload requested from client PID 2825 ('systemctl') (unit session-7.scope)... Apr 17 00:21:08.040427 systemd[1]: Reloading... Apr 17 00:21:08.145806 zram_generator::config[2866]: No configuration found. Apr 17 00:21:08.484254 systemd[1]: Reloading finished in 443 ms. Apr 17 00:21:08.524927 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 17 00:21:08.525245 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 17 00:21:08.525680 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 17 00:21:08.525913 systemd[1]: kubelet.service: Consumed 130ms CPU time, 97.4M memory peak. Apr 17 00:21:08.529291 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 00:21:09.123285 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 00:21:09.135211 (kubelet)[2930]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 00:21:09.203674 kubelet[2930]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 17 00:21:09.203674 kubelet[2930]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 00:21:09.204108 kubelet[2930]: I0417 00:21:09.203740 2930 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 00:21:09.738610 kubelet[2930]: I0417 00:21:09.738563 2930 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 17 00:21:09.738610 kubelet[2930]: I0417 00:21:09.738593 2930 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 00:21:09.739566 kubelet[2930]: I0417 00:21:09.739542 2930 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 17 00:21:09.739646 kubelet[2930]: I0417 00:21:09.739572 2930 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 17 00:21:09.739941 kubelet[2930]: I0417 00:21:09.739917 2930 server.go:956] "Client rotation is on, will bootstrap in background" Apr 17 00:21:09.748983 kubelet[2930]: E0417 00:21:09.748939 2930 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.17.163:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.17.163:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 17 00:21:09.749121 kubelet[2930]: I0417 00:21:09.749070 2930 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 00:21:09.756250 kubelet[2930]: I0417 00:21:09.756196 2930 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 17 00:21:09.760148 kubelet[2930]: I0417 00:21:09.759763 2930 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 17 00:21:09.761757 kubelet[2930]: I0417 00:21:09.761003 2930 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 00:21:09.761757 kubelet[2930]: I0417 00:21:09.761055 2930 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-163","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 17 00:21:09.761757 kubelet[2930]: I0417 00:21:09.761382 2930 topology_manager.go:138] "Creating topology manager with none policy" Apr 17 
00:21:09.761757 kubelet[2930]: I0417 00:21:09.761395 2930 container_manager_linux.go:306] "Creating device plugin manager" Apr 17 00:21:09.762134 kubelet[2930]: I0417 00:21:09.761514 2930 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 17 00:21:09.764765 kubelet[2930]: I0417 00:21:09.764744 2930 state_mem.go:36] "Initialized new in-memory state store" Apr 17 00:21:09.765056 kubelet[2930]: I0417 00:21:09.765035 2930 kubelet.go:475] "Attempting to sync node with API server" Apr 17 00:21:09.765056 kubelet[2930]: I0417 00:21:09.765056 2930 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 00:21:09.765194 kubelet[2930]: I0417 00:21:09.765083 2930 kubelet.go:387] "Adding apiserver pod source" Apr 17 00:21:09.765194 kubelet[2930]: I0417 00:21:09.765100 2930 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 00:21:09.767887 kubelet[2930]: E0417 00:21:09.767853 2930 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.17.163:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 00:21:09.768060 kubelet[2930]: E0417 00:21:09.767986 2930 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.17.163:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-163&limit=500&resourceVersion=0\": dial tcp 172.31.17.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 00:21:09.768276 kubelet[2930]: I0417 00:21:09.768256 2930 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 17 00:21:09.769234 kubelet[2930]: I0417 00:21:09.769012 2930 kubelet.go:940] "Not 
starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 00:21:09.769234 kubelet[2930]: I0417 00:21:09.769064 2930 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 17 00:21:09.769234 kubelet[2930]: W0417 00:21:09.769121 2930 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 17 00:21:09.776082 kubelet[2930]: I0417 00:21:09.776049 2930 server.go:1262] "Started kubelet" Apr 17 00:21:09.776355 kubelet[2930]: I0417 00:21:09.776325 2930 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 00:21:09.777207 kubelet[2930]: I0417 00:21:09.777177 2930 server.go:310] "Adding debug handlers to kubelet server" Apr 17 00:21:09.782716 kubelet[2930]: I0417 00:21:09.782264 2930 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 00:21:09.782716 kubelet[2930]: I0417 00:21:09.782328 2930 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 17 00:21:09.782716 kubelet[2930]: I0417 00:21:09.782563 2930 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 00:21:09.782944 kubelet[2930]: I0417 00:21:09.782767 2930 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 00:21:09.786161 kubelet[2930]: E0417 00:21:09.784264 2930 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.163:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.163:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-17-163.18a6fcfdbd3c54c2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-163,UID:ip-172-31-17-163,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-163,},FirstTimestamp:2026-04-17 00:21:09.776012482 +0000 UTC m=+0.616620553,LastTimestamp:2026-04-17 00:21:09.776012482 +0000 UTC m=+0.616620553,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-163,}" Apr 17 00:21:09.787862 kubelet[2930]: I0417 00:21:09.787816 2930 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 00:21:09.792995 kubelet[2930]: E0417 00:21:09.792969 2930 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-17-163\" not found" Apr 17 00:21:09.793252 kubelet[2930]: I0417 00:21:09.793149 2930 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 17 00:21:09.793657 kubelet[2930]: I0417 00:21:09.793525 2930 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 17 00:21:09.793657 kubelet[2930]: I0417 00:21:09.793568 2930 reconciler.go:29] "Reconciler: start to sync state" Apr 17 00:21:09.794396 kubelet[2930]: E0417 00:21:09.794369 2930 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.17.163:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 00:21:09.795776 kubelet[2930]: I0417 00:21:09.794816 2930 factory.go:223] Registration of the systemd container factory successfully Apr 17 00:21:09.795776 kubelet[2930]: I0417 00:21:09.794920 2930 factory.go:221] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 00:21:09.796972 kubelet[2930]: E0417 00:21:09.796927 2930 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-163?timeout=10s\": dial tcp 172.31.17.163:6443: connect: connection refused" interval="200ms" Apr 17 00:21:09.797279 kubelet[2930]: E0417 00:21:09.797247 2930 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 00:21:09.798085 kubelet[2930]: I0417 00:21:09.797986 2930 factory.go:223] Registration of the containerd container factory successfully Apr 17 00:21:09.813767 kubelet[2930]: I0417 00:21:09.812754 2930 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 17 00:21:09.814291 kubelet[2930]: I0417 00:21:09.814257 2930 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 17 00:21:09.814291 kubelet[2930]: I0417 00:21:09.814286 2930 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 17 00:21:09.814418 kubelet[2930]: I0417 00:21:09.814319 2930 kubelet.go:2428] "Starting kubelet main sync loop" Apr 17 00:21:09.814418 kubelet[2930]: E0417 00:21:09.814368 2930 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 00:21:09.824222 kubelet[2930]: E0417 00:21:09.824126 2930 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.17.163:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 00:21:09.835168 kubelet[2930]: I0417 00:21:09.835067 2930 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 17 00:21:09.835168 kubelet[2930]: I0417 00:21:09.835110 2930 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 17 00:21:09.835168 kubelet[2930]: I0417 00:21:09.835132 2930 state_mem.go:36] "Initialized new in-memory state store" Apr 17 00:21:09.838489 kubelet[2930]: I0417 00:21:09.838255 2930 policy_none.go:49] "None policy: Start" Apr 17 00:21:09.838489 kubelet[2930]: I0417 00:21:09.838278 2930 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 17 00:21:09.838489 kubelet[2930]: I0417 00:21:09.838290 2930 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 17 00:21:09.840342 kubelet[2930]: I0417 00:21:09.840323 2930 policy_none.go:47] "Start" Apr 17 00:21:09.845295 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 17 00:21:09.857462 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Apr 17 00:21:09.862606 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 17 00:21:09.873910 kubelet[2930]: E0417 00:21:09.873880 2930 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 00:21:09.874246 kubelet[2930]: I0417 00:21:09.874227 2930 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 00:21:09.874317 kubelet[2930]: I0417 00:21:09.874250 2930 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 00:21:09.874845 kubelet[2930]: I0417 00:21:09.874819 2930 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 00:21:09.877193 kubelet[2930]: E0417 00:21:09.877170 2930 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 00:21:09.877365 kubelet[2930]: E0417 00:21:09.877228 2930 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-17-163\" not found" Apr 17 00:21:09.930137 systemd[1]: Created slice kubepods-burstable-pod7656fc4fbbf525d06f0cd371ac4708b7.slice - libcontainer container kubepods-burstable-pod7656fc4fbbf525d06f0cd371ac4708b7.slice. Apr 17 00:21:09.937472 kubelet[2930]: E0417 00:21:09.937436 2930 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-163\" not found" node="ip-172-31-17-163" Apr 17 00:21:09.940286 systemd[1]: Created slice kubepods-burstable-podb53b19bb6e6b6afd4ef093ca912ac353.slice - libcontainer container kubepods-burstable-podb53b19bb6e6b6afd4ef093ca912ac353.slice. 
Apr 17 00:21:09.950626 kubelet[2930]: E0417 00:21:09.950557 2930 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-163\" not found" node="ip-172-31-17-163" Apr 17 00:21:09.953899 systemd[1]: Created slice kubepods-burstable-pod42ae22b25a0a57a16ab16b6869a17e70.slice - libcontainer container kubepods-burstable-pod42ae22b25a0a57a16ab16b6869a17e70.slice. Apr 17 00:21:09.956143 kubelet[2930]: E0417 00:21:09.956115 2930 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-163\" not found" node="ip-172-31-17-163" Apr 17 00:21:09.976775 kubelet[2930]: I0417 00:21:09.976750 2930 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-163" Apr 17 00:21:09.977145 kubelet[2930]: E0417 00:21:09.977114 2930 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.163:6443/api/v1/nodes\": dial tcp 172.31.17.163:6443: connect: connection refused" node="ip-172-31-17-163" Apr 17 00:21:09.995345 kubelet[2930]: I0417 00:21:09.994469 2930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7656fc4fbbf525d06f0cd371ac4708b7-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-163\" (UID: \"7656fc4fbbf525d06f0cd371ac4708b7\") " pod="kube-system/kube-apiserver-ip-172-31-17-163" Apr 17 00:21:09.994996 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Apr 17 00:21:09.995889 kubelet[2930]: I0417 00:21:09.995670 2930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b53b19bb6e6b6afd4ef093ca912ac353-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-163\" (UID: \"b53b19bb6e6b6afd4ef093ca912ac353\") " pod="kube-system/kube-controller-manager-ip-172-31-17-163" Apr 17 00:21:09.995889 kubelet[2930]: I0417 00:21:09.995705 2930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b53b19bb6e6b6afd4ef093ca912ac353-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-163\" (UID: \"b53b19bb6e6b6afd4ef093ca912ac353\") " pod="kube-system/kube-controller-manager-ip-172-31-17-163" Apr 17 00:21:09.996174 kubelet[2930]: I0417 00:21:09.996076 2930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42ae22b25a0a57a16ab16b6869a17e70-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-163\" (UID: \"42ae22b25a0a57a16ab16b6869a17e70\") " pod="kube-system/kube-scheduler-ip-172-31-17-163" Apr 17 00:21:09.996174 kubelet[2930]: I0417 00:21:09.996127 2930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7656fc4fbbf525d06f0cd371ac4708b7-ca-certs\") pod \"kube-apiserver-ip-172-31-17-163\" (UID: \"7656fc4fbbf525d06f0cd371ac4708b7\") " pod="kube-system/kube-apiserver-ip-172-31-17-163" Apr 17 00:21:09.996174 kubelet[2930]: I0417 00:21:09.996154 2930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b53b19bb6e6b6afd4ef093ca912ac353-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-163\" (UID: \"b53b19bb6e6b6afd4ef093ca912ac353\") " 
pod="kube-system/kube-controller-manager-ip-172-31-17-163" Apr 17 00:21:09.996467 kubelet[2930]: I0417 00:21:09.996450 2930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b53b19bb6e6b6afd4ef093ca912ac353-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-163\" (UID: \"b53b19bb6e6b6afd4ef093ca912ac353\") " pod="kube-system/kube-controller-manager-ip-172-31-17-163" Apr 17 00:21:09.996586 kubelet[2930]: I0417 00:21:09.996569 2930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b53b19bb6e6b6afd4ef093ca912ac353-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-163\" (UID: \"b53b19bb6e6b6afd4ef093ca912ac353\") " pod="kube-system/kube-controller-manager-ip-172-31-17-163" Apr 17 00:21:09.996740 kubelet[2930]: I0417 00:21:09.996705 2930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7656fc4fbbf525d06f0cd371ac4708b7-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-163\" (UID: \"7656fc4fbbf525d06f0cd371ac4708b7\") " pod="kube-system/kube-apiserver-ip-172-31-17-163" Apr 17 00:21:09.997902 kubelet[2930]: E0417 00:21:09.997860 2930 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-163?timeout=10s\": dial tcp 172.31.17.163:6443: connect: connection refused" interval="400ms" Apr 17 00:21:10.179839 kubelet[2930]: I0417 00:21:10.179707 2930 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-163" Apr 17 00:21:10.180859 kubelet[2930]: E0417 00:21:10.180641 2930 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.163:6443/api/v1/nodes\": dial 
tcp 172.31.17.163:6443: connect: connection refused" node="ip-172-31-17-163" Apr 17 00:21:10.242209 containerd[2002]: time="2026-04-17T00:21:10.242157064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-163,Uid:7656fc4fbbf525d06f0cd371ac4708b7,Namespace:kube-system,Attempt:0,}" Apr 17 00:21:10.254524 containerd[2002]: time="2026-04-17T00:21:10.254395222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-163,Uid:b53b19bb6e6b6afd4ef093ca912ac353,Namespace:kube-system,Attempt:0,}" Apr 17 00:21:10.260101 containerd[2002]: time="2026-04-17T00:21:10.259669612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-163,Uid:42ae22b25a0a57a16ab16b6869a17e70,Namespace:kube-system,Attempt:0,}" Apr 17 00:21:10.399393 kubelet[2930]: E0417 00:21:10.399347 2930 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-163?timeout=10s\": dial tcp 172.31.17.163:6443: connect: connection refused" interval="800ms" Apr 17 00:21:10.583303 kubelet[2930]: I0417 00:21:10.583203 2930 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-163" Apr 17 00:21:10.583649 kubelet[2930]: E0417 00:21:10.583608 2930 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.163:6443/api/v1/nodes\": dial tcp 172.31.17.163:6443: connect: connection refused" node="ip-172-31-17-163" Apr 17 00:21:10.724122 kubelet[2930]: E0417 00:21:10.724080 2930 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.17.163:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-163&limit=500&resourceVersion=0\": dial tcp 172.31.17.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 00:21:10.744472 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2374923589.mount: Deactivated successfully. Apr 17 00:21:10.754870 containerd[2002]: time="2026-04-17T00:21:10.754805984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 00:21:10.759691 containerd[2002]: time="2026-04-17T00:21:10.759394476Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Apr 17 00:21:10.760754 containerd[2002]: time="2026-04-17T00:21:10.760695725Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 00:21:10.762286 containerd[2002]: time="2026-04-17T00:21:10.762235949Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 00:21:10.764383 containerd[2002]: time="2026-04-17T00:21:10.764341181Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 00:21:10.765576 containerd[2002]: time="2026-04-17T00:21:10.765540832Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 17 00:21:10.766820 containerd[2002]: time="2026-04-17T00:21:10.766537360Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 17 00:21:10.768457 containerd[2002]: time="2026-04-17T00:21:10.768421001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 00:21:10.769327 containerd[2002]: time="2026-04-17T00:21:10.769285230Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 505.68365ms" Apr 17 00:21:10.771371 containerd[2002]: time="2026-04-17T00:21:10.771326327Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 507.724971ms" Apr 17 00:21:10.772463 containerd[2002]: time="2026-04-17T00:21:10.772428053Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 523.786477ms" Apr 17 00:21:10.838237 containerd[2002]: time="2026-04-17T00:21:10.837848341Z" level=info msg="connecting to shim ace88ccd3e7b34d38553310d3e7b0204edfdf811161fe4055e0fb301a4d3692c" address="unix:///run/containerd/s/b4a3f3da657307cfa9345ef19961fab8e078a704a05e17a8dbdff30aad5bcf57" namespace=k8s.io protocol=ttrpc version=3 Apr 17 00:21:10.841327 containerd[2002]: time="2026-04-17T00:21:10.841243944Z" level=info msg="connecting to shim 5c7b6a28edd490839e36e779e89edfb4842163352bfa9da389dac7096f1b2a71" address="unix:///run/containerd/s/79c6718fc71b86ab0b06aa70c6d0e3321275921faab83bbfc06de69c344c6d71" namespace=k8s.io protocol=ttrpc 
version=3 Apr 17 00:21:10.861934 containerd[2002]: time="2026-04-17T00:21:10.861835611Z" level=info msg="connecting to shim 80a72d740b17c613af5c5b4dfabd355c3237381f1399321c29f0c463dbd1e21f" address="unix:///run/containerd/s/a0da552cce44bd85c09e26ca8225a5ba2fd181d36dfd4020718f45fb7453b65e" namespace=k8s.io protocol=ttrpc version=3 Apr 17 00:21:10.877985 systemd[1]: Started cri-containerd-5c7b6a28edd490839e36e779e89edfb4842163352bfa9da389dac7096f1b2a71.scope - libcontainer container 5c7b6a28edd490839e36e779e89edfb4842163352bfa9da389dac7096f1b2a71. Apr 17 00:21:10.897162 systemd[1]: Started cri-containerd-ace88ccd3e7b34d38553310d3e7b0204edfdf811161fe4055e0fb301a4d3692c.scope - libcontainer container ace88ccd3e7b34d38553310d3e7b0204edfdf811161fe4055e0fb301a4d3692c. Apr 17 00:21:10.923056 kubelet[2930]: E0417 00:21:10.922482 2930 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.17.163:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 00:21:10.929942 systemd[1]: Started cri-containerd-80a72d740b17c613af5c5b4dfabd355c3237381f1399321c29f0c463dbd1e21f.scope - libcontainer container 80a72d740b17c613af5c5b4dfabd355c3237381f1399321c29f0c463dbd1e21f. 
Apr 17 00:21:11.044643 kubelet[2930]: E0417 00:21:11.044588 2930 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.17.163:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 00:21:11.046066 containerd[2002]: time="2026-04-17T00:21:11.045808354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-163,Uid:b53b19bb6e6b6afd4ef093ca912ac353,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c7b6a28edd490839e36e779e89edfb4842163352bfa9da389dac7096f1b2a71\"" Apr 17 00:21:11.059986 containerd[2002]: time="2026-04-17T00:21:11.059840539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-163,Uid:7656fc4fbbf525d06f0cd371ac4708b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"ace88ccd3e7b34d38553310d3e7b0204edfdf811161fe4055e0fb301a4d3692c\"" Apr 17 00:21:11.063643 containerd[2002]: time="2026-04-17T00:21:11.063600705Z" level=info msg="CreateContainer within sandbox \"5c7b6a28edd490839e36e779e89edfb4842163352bfa9da389dac7096f1b2a71\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 17 00:21:11.068286 containerd[2002]: time="2026-04-17T00:21:11.067509215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-163,Uid:42ae22b25a0a57a16ab16b6869a17e70,Namespace:kube-system,Attempt:0,} returns sandbox id \"80a72d740b17c613af5c5b4dfabd355c3237381f1399321c29f0c463dbd1e21f\"" Apr 17 00:21:11.086317 containerd[2002]: time="2026-04-17T00:21:11.086270142Z" level=info msg="Container c7df7105c20e1459238ea9aa38b3aae5426aafddba1d4cc9a7d2c4fb0a828d1f: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:21:11.099790 containerd[2002]: time="2026-04-17T00:21:11.099455487Z" level=info msg="CreateContainer within 
sandbox \"ace88ccd3e7b34d38553310d3e7b0204edfdf811161fe4055e0fb301a4d3692c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 17 00:21:11.102515 containerd[2002]: time="2026-04-17T00:21:11.102472553Z" level=info msg="CreateContainer within sandbox \"80a72d740b17c613af5c5b4dfabd355c3237381f1399321c29f0c463dbd1e21f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 17 00:21:11.105233 containerd[2002]: time="2026-04-17T00:21:11.105191584Z" level=info msg="CreateContainer within sandbox \"5c7b6a28edd490839e36e779e89edfb4842163352bfa9da389dac7096f1b2a71\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c7df7105c20e1459238ea9aa38b3aae5426aafddba1d4cc9a7d2c4fb0a828d1f\"" Apr 17 00:21:11.106541 containerd[2002]: time="2026-04-17T00:21:11.106503787Z" level=info msg="StartContainer for \"c7df7105c20e1459238ea9aa38b3aae5426aafddba1d4cc9a7d2c4fb0a828d1f\"" Apr 17 00:21:11.107808 containerd[2002]: time="2026-04-17T00:21:11.107710295Z" level=info msg="connecting to shim c7df7105c20e1459238ea9aa38b3aae5426aafddba1d4cc9a7d2c4fb0a828d1f" address="unix:///run/containerd/s/79c6718fc71b86ab0b06aa70c6d0e3321275921faab83bbfc06de69c344c6d71" protocol=ttrpc version=3 Apr 17 00:21:11.115270 containerd[2002]: time="2026-04-17T00:21:11.115218350Z" level=info msg="Container 6001b486a3dc77317ff812fa2f50b5f201934fdfaa7cd94ba2431bcce3ef5c43: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:21:11.117333 containerd[2002]: time="2026-04-17T00:21:11.117286737Z" level=info msg="Container 81845e21860a67eff7feb14bccefb7cf05b538028de4eebc7619918f67f185ab: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:21:11.128245 containerd[2002]: time="2026-04-17T00:21:11.128197115Z" level=info msg="CreateContainer within sandbox \"ace88ccd3e7b34d38553310d3e7b0204edfdf811161fe4055e0fb301a4d3692c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"81845e21860a67eff7feb14bccefb7cf05b538028de4eebc7619918f67f185ab\"" Apr 17 00:21:11.129763 containerd[2002]: time="2026-04-17T00:21:11.128782642Z" level=info msg="StartContainer for \"81845e21860a67eff7feb14bccefb7cf05b538028de4eebc7619918f67f185ab\"" Apr 17 00:21:11.131176 containerd[2002]: time="2026-04-17T00:21:11.131142042Z" level=info msg="connecting to shim 81845e21860a67eff7feb14bccefb7cf05b538028de4eebc7619918f67f185ab" address="unix:///run/containerd/s/b4a3f3da657307cfa9345ef19961fab8e078a704a05e17a8dbdff30aad5bcf57" protocol=ttrpc version=3 Apr 17 00:21:11.136194 kubelet[2930]: E0417 00:21:11.136151 2930 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.17.163:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 00:21:11.138100 containerd[2002]: time="2026-04-17T00:21:11.138064604Z" level=info msg="CreateContainer within sandbox \"80a72d740b17c613af5c5b4dfabd355c3237381f1399321c29f0c463dbd1e21f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6001b486a3dc77317ff812fa2f50b5f201934fdfaa7cd94ba2431bcce3ef5c43\"" Apr 17 00:21:11.139911 containerd[2002]: time="2026-04-17T00:21:11.139882158Z" level=info msg="StartContainer for \"6001b486a3dc77317ff812fa2f50b5f201934fdfaa7cd94ba2431bcce3ef5c43\"" Apr 17 00:21:11.142040 containerd[2002]: time="2026-04-17T00:21:11.142003796Z" level=info msg="connecting to shim 6001b486a3dc77317ff812fa2f50b5f201934fdfaa7cd94ba2431bcce3ef5c43" address="unix:///run/containerd/s/a0da552cce44bd85c09e26ca8225a5ba2fd181d36dfd4020718f45fb7453b65e" protocol=ttrpc version=3 Apr 17 00:21:11.145038 systemd[1]: Started cri-containerd-c7df7105c20e1459238ea9aa38b3aae5426aafddba1d4cc9a7d2c4fb0a828d1f.scope - libcontainer container 
c7df7105c20e1459238ea9aa38b3aae5426aafddba1d4cc9a7d2c4fb0a828d1f. Apr 17 00:21:11.173066 systemd[1]: Started cri-containerd-81845e21860a67eff7feb14bccefb7cf05b538028de4eebc7619918f67f185ab.scope - libcontainer container 81845e21860a67eff7feb14bccefb7cf05b538028de4eebc7619918f67f185ab. Apr 17 00:21:11.190939 systemd[1]: Started cri-containerd-6001b486a3dc77317ff812fa2f50b5f201934fdfaa7cd94ba2431bcce3ef5c43.scope - libcontainer container 6001b486a3dc77317ff812fa2f50b5f201934fdfaa7cd94ba2431bcce3ef5c43. Apr 17 00:21:11.201329 kubelet[2930]: E0417 00:21:11.201276 2930 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-163?timeout=10s\": dial tcp 172.31.17.163:6443: connect: connection refused" interval="1.6s" Apr 17 00:21:11.248247 containerd[2002]: time="2026-04-17T00:21:11.248123761Z" level=info msg="StartContainer for \"c7df7105c20e1459238ea9aa38b3aae5426aafddba1d4cc9a7d2c4fb0a828d1f\" returns successfully" Apr 17 00:21:11.279577 containerd[2002]: time="2026-04-17T00:21:11.279470935Z" level=info msg="StartContainer for \"81845e21860a67eff7feb14bccefb7cf05b538028de4eebc7619918f67f185ab\" returns successfully" Apr 17 00:21:11.348096 containerd[2002]: time="2026-04-17T00:21:11.348048011Z" level=info msg="StartContainer for \"6001b486a3dc77317ff812fa2f50b5f201934fdfaa7cd94ba2431bcce3ef5c43\" returns successfully" Apr 17 00:21:11.389366 kubelet[2930]: I0417 00:21:11.389229 2930 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-163" Apr 17 00:21:11.843381 kubelet[2930]: E0417 00:21:11.843263 2930 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-163\" not found" node="ip-172-31-17-163" Apr 17 00:21:11.851515 kubelet[2930]: E0417 00:21:11.851461 2930 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" 
err="node \"ip-172-31-17-163\" not found" node="ip-172-31-17-163" Apr 17 00:21:11.853377 kubelet[2930]: E0417 00:21:11.853194 2930 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-163\" not found" node="ip-172-31-17-163" Apr 17 00:21:12.868752 kubelet[2930]: E0417 00:21:12.868700 2930 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-163\" not found" node="ip-172-31-17-163" Apr 17 00:21:12.874747 kubelet[2930]: E0417 00:21:12.874471 2930 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-163\" not found" node="ip-172-31-17-163" Apr 17 00:21:13.157775 kubelet[2930]: E0417 00:21:13.157654 2930 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-17-163\" not found" node="ip-172-31-17-163" Apr 17 00:21:13.261748 kubelet[2930]: I0417 00:21:13.261610 2930 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-17-163" Apr 17 00:21:13.297587 kubelet[2930]: I0417 00:21:13.297226 2930 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-163" Apr 17 00:21:13.311392 kubelet[2930]: E0417 00:21:13.311358 2930 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-17-163\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-17-163" Apr 17 00:21:13.311589 kubelet[2930]: I0417 00:21:13.311575 2930 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-163" Apr 17 00:21:13.314200 kubelet[2930]: E0417 00:21:13.314028 2930 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-17-163\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-scheduler-ip-172-31-17-163" Apr 17 00:21:13.314200 kubelet[2930]: I0417 00:21:13.314058 2930 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-163" Apr 17 00:21:13.316849 kubelet[2930]: E0417 00:21:13.316812 2930 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-17-163\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-17-163" Apr 17 00:21:13.769916 kubelet[2930]: I0417 00:21:13.769873 2930 apiserver.go:52] "Watching apiserver" Apr 17 00:21:13.793945 kubelet[2930]: I0417 00:21:13.793891 2930 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 17 00:21:13.860092 kubelet[2930]: I0417 00:21:13.860054 2930 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-163" Apr 17 00:21:13.862989 kubelet[2930]: E0417 00:21:13.862884 2930 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-17-163\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-17-163" Apr 17 00:21:15.616108 systemd[1]: Reload requested from client PID 3213 ('systemctl') (unit session-7.scope)... Apr 17 00:21:15.616126 systemd[1]: Reloading... Apr 17 00:21:15.746754 zram_generator::config[3257]: No configuration found. Apr 17 00:21:16.027679 systemd[1]: Reloading finished in 410 ms. Apr 17 00:21:16.062770 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 00:21:16.082041 systemd[1]: kubelet.service: Deactivated successfully. Apr 17 00:21:16.082371 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 00:21:16.082447 systemd[1]: kubelet.service: Consumed 1.095s CPU time, 119.3M memory peak. Apr 17 00:21:16.088776 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 17 00:21:16.341496 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 00:21:16.352493 (kubelet)[3317]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 00:21:16.428876 kubelet[3317]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 17 00:21:16.429205 kubelet[3317]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 00:21:16.429359 kubelet[3317]: I0417 00:21:16.429331 3317 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 00:21:16.437299 kubelet[3317]: I0417 00:21:16.437262 3317 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 17 00:21:16.437299 kubelet[3317]: I0417 00:21:16.437290 3317 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 00:21:16.438610 kubelet[3317]: I0417 00:21:16.438580 3317 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 17 00:21:16.438610 kubelet[3317]: I0417 00:21:16.438614 3317 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 17 00:21:16.439028 kubelet[3317]: I0417 00:21:16.438999 3317 server.go:956] "Client rotation is on, will bootstrap in background" Apr 17 00:21:16.440210 kubelet[3317]: I0417 00:21:16.440185 3317 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 17 00:21:16.453534 kubelet[3317]: I0417 00:21:16.453379 3317 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 00:21:16.464707 kubelet[3317]: I0417 00:21:16.463856 3317 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 17 00:21:16.467820 kubelet[3317]: I0417 00:21:16.467791 3317 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Apr 17 00:21:16.469423 kubelet[3317]: I0417 00:21:16.469375 3317 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 00:21:16.469652 kubelet[3317]: I0417 00:21:16.469424 3317 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ip-172-31-17-163","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 17 00:21:16.469652 kubelet[3317]: I0417 00:21:16.469627 3317 topology_manager.go:138] "Creating topology manager with none policy" Apr 17 00:21:16.469652 kubelet[3317]: I0417 00:21:16.469641 3317 container_manager_linux.go:306] "Creating device plugin manager" Apr 17 00:21:16.469882 kubelet[3317]: I0417 00:21:16.469676 3317 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 17 00:21:16.470113 kubelet[3317]: I0417 00:21:16.469960 3317 state_mem.go:36] 
"Initialized new in-memory state store" Apr 17 00:21:16.470330 kubelet[3317]: I0417 00:21:16.470317 3317 kubelet.go:475] "Attempting to sync node with API server" Apr 17 00:21:16.470616 kubelet[3317]: I0417 00:21:16.470336 3317 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 00:21:16.470616 kubelet[3317]: I0417 00:21:16.470367 3317 kubelet.go:387] "Adding apiserver pod source" Apr 17 00:21:16.470616 kubelet[3317]: I0417 00:21:16.470384 3317 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 00:21:16.478616 kubelet[3317]: I0417 00:21:16.478363 3317 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 17 00:21:16.479588 kubelet[3317]: I0417 00:21:16.479323 3317 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 00:21:16.479588 kubelet[3317]: I0417 00:21:16.479369 3317 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 17 00:21:16.483964 kubelet[3317]: I0417 00:21:16.483933 3317 server.go:1262] "Started kubelet" Apr 17 00:21:16.489091 kubelet[3317]: I0417 00:21:16.489043 3317 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 00:21:16.489218 kubelet[3317]: I0417 00:21:16.489109 3317 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 17 00:21:16.489523 kubelet[3317]: I0417 00:21:16.489407 3317 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 00:21:16.489523 kubelet[3317]: I0417 00:21:16.489492 3317 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 00:21:16.490771 kubelet[3317]: I0417 00:21:16.490550 3317 server.go:310] "Adding debug handlers to kubelet 
server" Apr 17 00:21:16.494742 kubelet[3317]: I0417 00:21:16.491555 3317 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 00:21:16.508604 kubelet[3317]: I0417 00:21:16.508543 3317 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 17 00:21:16.509034 kubelet[3317]: I0417 00:21:16.509010 3317 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 17 00:21:16.509213 kubelet[3317]: I0417 00:21:16.509140 3317 reconciler.go:29] "Reconciler: start to sync state" Apr 17 00:21:16.512681 kubelet[3317]: I0417 00:21:16.512581 3317 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 00:21:16.515893 kubelet[3317]: I0417 00:21:16.515400 3317 factory.go:223] Registration of the systemd container factory successfully Apr 17 00:21:16.515893 kubelet[3317]: I0417 00:21:16.515508 3317 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 00:21:16.518194 kubelet[3317]: E0417 00:21:16.518166 3317 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 00:21:16.524655 kubelet[3317]: I0417 00:21:16.523761 3317 factory.go:223] Registration of the containerd container factory successfully Apr 17 00:21:16.543925 kubelet[3317]: I0417 00:21:16.543520 3317 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 17 00:21:16.547757 kubelet[3317]: I0417 00:21:16.547508 3317 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 17 00:21:16.547757 kubelet[3317]: I0417 00:21:16.547537 3317 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 17 00:21:16.547757 kubelet[3317]: I0417 00:21:16.547567 3317 kubelet.go:2428] "Starting kubelet main sync loop" Apr 17 00:21:16.547757 kubelet[3317]: E0417 00:21:16.547623 3317 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 00:21:16.593845 kubelet[3317]: I0417 00:21:16.592022 3317 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 17 00:21:16.593845 kubelet[3317]: I0417 00:21:16.592043 3317 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 17 00:21:16.593845 kubelet[3317]: I0417 00:21:16.592066 3317 state_mem.go:36] "Initialized new in-memory state store" Apr 17 00:21:16.593845 kubelet[3317]: I0417 00:21:16.592223 3317 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 17 00:21:16.593845 kubelet[3317]: I0417 00:21:16.592233 3317 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 17 00:21:16.593845 kubelet[3317]: I0417 00:21:16.592254 3317 policy_none.go:49] "None policy: Start" Apr 17 00:21:16.593845 kubelet[3317]: I0417 00:21:16.592264 3317 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 17 00:21:16.593845 kubelet[3317]: I0417 00:21:16.592275 3317 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 17 00:21:16.593845 kubelet[3317]: I0417 00:21:16.592400 3317 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 17 00:21:16.593845 kubelet[3317]: I0417 00:21:16.592409 3317 policy_none.go:47] "Start" Apr 17 00:21:16.603960 kubelet[3317]: E0417 00:21:16.603925 3317 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 00:21:16.604213 kubelet[3317]: I0417 00:21:16.604195 
3317 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 00:21:16.604305 kubelet[3317]: I0417 00:21:16.604215 3317 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 00:21:16.607560 kubelet[3317]: I0417 00:21:16.607320 3317 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 00:21:16.614379 kubelet[3317]: E0417 00:21:16.614347 3317 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 00:21:16.648551 kubelet[3317]: I0417 00:21:16.648494 3317 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-163" Apr 17 00:21:16.648874 kubelet[3317]: I0417 00:21:16.648849 3317 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-163" Apr 17 00:21:16.657186 kubelet[3317]: I0417 00:21:16.655757 3317 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-163" Apr 17 00:21:16.712754 kubelet[3317]: I0417 00:21:16.712707 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42ae22b25a0a57a16ab16b6869a17e70-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-163\" (UID: \"42ae22b25a0a57a16ab16b6869a17e70\") " pod="kube-system/kube-scheduler-ip-172-31-17-163" Apr 17 00:21:16.712916 kubelet[3317]: I0417 00:21:16.712807 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7656fc4fbbf525d06f0cd371ac4708b7-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-163\" (UID: \"7656fc4fbbf525d06f0cd371ac4708b7\") " pod="kube-system/kube-apiserver-ip-172-31-17-163" Apr 17 00:21:16.712916 kubelet[3317]: I0417 00:21:16.712837 3317 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b53b19bb6e6b6afd4ef093ca912ac353-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-163\" (UID: \"b53b19bb6e6b6afd4ef093ca912ac353\") " pod="kube-system/kube-controller-manager-ip-172-31-17-163" Apr 17 00:21:16.712916 kubelet[3317]: I0417 00:21:16.712864 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b53b19bb6e6b6afd4ef093ca912ac353-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-163\" (UID: \"b53b19bb6e6b6afd4ef093ca912ac353\") " pod="kube-system/kube-controller-manager-ip-172-31-17-163" Apr 17 00:21:16.712916 kubelet[3317]: I0417 00:21:16.712886 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7656fc4fbbf525d06f0cd371ac4708b7-ca-certs\") pod \"kube-apiserver-ip-172-31-17-163\" (UID: \"7656fc4fbbf525d06f0cd371ac4708b7\") " pod="kube-system/kube-apiserver-ip-172-31-17-163" Apr 17 00:21:16.713112 kubelet[3317]: I0417 00:21:16.712931 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7656fc4fbbf525d06f0cd371ac4708b7-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-163\" (UID: \"7656fc4fbbf525d06f0cd371ac4708b7\") " pod="kube-system/kube-apiserver-ip-172-31-17-163" Apr 17 00:21:16.713112 kubelet[3317]: I0417 00:21:16.712959 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b53b19bb6e6b6afd4ef093ca912ac353-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-163\" (UID: \"b53b19bb6e6b6afd4ef093ca912ac353\") " pod="kube-system/kube-controller-manager-ip-172-31-17-163" 
Apr 17 00:21:16.713112 kubelet[3317]: I0417 00:21:16.712989 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b53b19bb6e6b6afd4ef093ca912ac353-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-163\" (UID: \"b53b19bb6e6b6afd4ef093ca912ac353\") " pod="kube-system/kube-controller-manager-ip-172-31-17-163" Apr 17 00:21:16.713112 kubelet[3317]: I0417 00:21:16.713014 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b53b19bb6e6b6afd4ef093ca912ac353-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-163\" (UID: \"b53b19bb6e6b6afd4ef093ca912ac353\") " pod="kube-system/kube-controller-manager-ip-172-31-17-163" Apr 17 00:21:16.725316 kubelet[3317]: I0417 00:21:16.725282 3317 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-163" Apr 17 00:21:16.734468 kubelet[3317]: I0417 00:21:16.734426 3317 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-17-163" Apr 17 00:21:16.734593 kubelet[3317]: I0417 00:21:16.734520 3317 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-17-163" Apr 17 00:21:17.477401 kubelet[3317]: I0417 00:21:17.477365 3317 apiserver.go:52] "Watching apiserver" Apr 17 00:21:17.510140 kubelet[3317]: I0417 00:21:17.510081 3317 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 17 00:21:17.572261 kubelet[3317]: I0417 00:21:17.571672 3317 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-163" Apr 17 00:21:17.583262 kubelet[3317]: E0417 00:21:17.583229 3317 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-17-163\" already exists" pod="kube-system/kube-apiserver-ip-172-31-17-163" Apr 17 00:21:17.612473 kubelet[3317]: I0417 00:21:17.612411 
3317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-17-163" podStartSLOduration=1.612389216 podStartE2EDuration="1.612389216s" podCreationTimestamp="2026-04-17 00:21:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 00:21:17.601062174 +0000 UTC m=+1.233749019" watchObservedRunningTime="2026-04-17 00:21:17.612389216 +0000 UTC m=+1.245076064" Apr 17 00:21:17.622913 kubelet[3317]: I0417 00:21:17.622850 3317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-17-163" podStartSLOduration=1.622834444 podStartE2EDuration="1.622834444s" podCreationTimestamp="2026-04-17 00:21:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 00:21:17.613436893 +0000 UTC m=+1.246123753" watchObservedRunningTime="2026-04-17 00:21:17.622834444 +0000 UTC m=+1.255521287" Apr 17 00:21:17.634622 kubelet[3317]: I0417 00:21:17.634570 3317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-17-163" podStartSLOduration=1.634557355 podStartE2EDuration="1.634557355s" podCreationTimestamp="2026-04-17 00:21:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 00:21:17.623148472 +0000 UTC m=+1.255835314" watchObservedRunningTime="2026-04-17 00:21:17.634557355 +0000 UTC m=+1.267244199" Apr 17 00:21:21.098289 kubelet[3317]: I0417 00:21:21.098253 3317 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 17 00:21:21.098856 kubelet[3317]: I0417 00:21:21.098835 3317 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 17 00:21:21.098916 
containerd[2002]: time="2026-04-17T00:21:21.098618382Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 17 00:21:22.199025 systemd[1]: Created slice kubepods-besteffort-podcecf7ad6_fdb6_4c02_bcd8_07b7b4423fcf.slice - libcontainer container kubepods-besteffort-podcecf7ad6_fdb6_4c02_bcd8_07b7b4423fcf.slice. Apr 17 00:21:22.250814 kubelet[3317]: I0417 00:21:22.250750 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cecf7ad6-fdb6-4c02-bcd8-07b7b4423fcf-kube-proxy\") pod \"kube-proxy-nflwm\" (UID: \"cecf7ad6-fdb6-4c02-bcd8-07b7b4423fcf\") " pod="kube-system/kube-proxy-nflwm" Apr 17 00:21:22.250814 kubelet[3317]: I0417 00:21:22.250798 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cecf7ad6-fdb6-4c02-bcd8-07b7b4423fcf-xtables-lock\") pod \"kube-proxy-nflwm\" (UID: \"cecf7ad6-fdb6-4c02-bcd8-07b7b4423fcf\") " pod="kube-system/kube-proxy-nflwm" Apr 17 00:21:22.251315 kubelet[3317]: I0417 00:21:22.250829 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cecf7ad6-fdb6-4c02-bcd8-07b7b4423fcf-lib-modules\") pod \"kube-proxy-nflwm\" (UID: \"cecf7ad6-fdb6-4c02-bcd8-07b7b4423fcf\") " pod="kube-system/kube-proxy-nflwm" Apr 17 00:21:22.251315 kubelet[3317]: I0417 00:21:22.250853 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhrz9\" (UniqueName: \"kubernetes.io/projected/cecf7ad6-fdb6-4c02-bcd8-07b7b4423fcf-kube-api-access-bhrz9\") pod \"kube-proxy-nflwm\" (UID: \"cecf7ad6-fdb6-4c02-bcd8-07b7b4423fcf\") " pod="kube-system/kube-proxy-nflwm" Apr 17 00:21:22.323575 systemd[1]: Created slice 
kubepods-besteffort-pod92336033_8312_44d1_be63_69b831c9d7eb.slice - libcontainer container kubepods-besteffort-pod92336033_8312_44d1_be63_69b831c9d7eb.slice. Apr 17 00:21:22.352622 kubelet[3317]: I0417 00:21:22.351788 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/92336033-8312-44d1-be63-69b831c9d7eb-var-lib-calico\") pod \"tigera-operator-5588576f44-26dsz\" (UID: \"92336033-8312-44d1-be63-69b831c9d7eb\") " pod="tigera-operator/tigera-operator-5588576f44-26dsz" Apr 17 00:21:22.352622 kubelet[3317]: I0417 00:21:22.351853 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnzvd\" (UniqueName: \"kubernetes.io/projected/92336033-8312-44d1-be63-69b831c9d7eb-kube-api-access-hnzvd\") pod \"tigera-operator-5588576f44-26dsz\" (UID: \"92336033-8312-44d1-be63-69b831c9d7eb\") " pod="tigera-operator/tigera-operator-5588576f44-26dsz" Apr 17 00:21:22.513560 containerd[2002]: time="2026-04-17T00:21:22.513503644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nflwm,Uid:cecf7ad6-fdb6-4c02-bcd8-07b7b4423fcf,Namespace:kube-system,Attempt:0,}" Apr 17 00:21:22.574984 containerd[2002]: time="2026-04-17T00:21:22.574860710Z" level=info msg="connecting to shim 848af1d7ddd24fe89f3cd630347bf34e987d991d1336070e216ad99e2a05c1f7" address="unix:///run/containerd/s/be5e788a601a71129b1a6ea5f6c9e230ca75ab478bafc97f43d57a4392956fa5" namespace=k8s.io protocol=ttrpc version=3 Apr 17 00:21:22.608966 systemd[1]: Started cri-containerd-848af1d7ddd24fe89f3cd630347bf34e987d991d1336070e216ad99e2a05c1f7.scope - libcontainer container 848af1d7ddd24fe89f3cd630347bf34e987d991d1336070e216ad99e2a05c1f7. 
Apr 17 00:21:22.632744 containerd[2002]: time="2026-04-17T00:21:22.632678838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-26dsz,Uid:92336033-8312-44d1-be63-69b831c9d7eb,Namespace:tigera-operator,Attempt:0,}" Apr 17 00:21:22.643153 containerd[2002]: time="2026-04-17T00:21:22.643111399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nflwm,Uid:cecf7ad6-fdb6-4c02-bcd8-07b7b4423fcf,Namespace:kube-system,Attempt:0,} returns sandbox id \"848af1d7ddd24fe89f3cd630347bf34e987d991d1336070e216ad99e2a05c1f7\"" Apr 17 00:21:22.657283 containerd[2002]: time="2026-04-17T00:21:22.657243290Z" level=info msg="CreateContainer within sandbox \"848af1d7ddd24fe89f3cd630347bf34e987d991d1336070e216ad99e2a05c1f7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 17 00:21:22.674424 containerd[2002]: time="2026-04-17T00:21:22.674376354Z" level=info msg="connecting to shim e9d94679a1930eff6d7c89af66800a8b07e568f0122c55e50a2c87146aaee8c8" address="unix:///run/containerd/s/f982c24ea0bdaa46d0796efc40ada4e70ba41243fbc130284d1fe528039e4cc1" namespace=k8s.io protocol=ttrpc version=3 Apr 17 00:21:22.676298 containerd[2002]: time="2026-04-17T00:21:22.676201089Z" level=info msg="Container b190ba6703864d27a4259688170821e0b391d814402add43b7c26200cc0d7593: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:21:22.697129 containerd[2002]: time="2026-04-17T00:21:22.697068205Z" level=info msg="CreateContainer within sandbox \"848af1d7ddd24fe89f3cd630347bf34e987d991d1336070e216ad99e2a05c1f7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b190ba6703864d27a4259688170821e0b391d814402add43b7c26200cc0d7593\"" Apr 17 00:21:22.698788 containerd[2002]: time="2026-04-17T00:21:22.697929347Z" level=info msg="StartContainer for \"b190ba6703864d27a4259688170821e0b391d814402add43b7c26200cc0d7593\"" Apr 17 00:21:22.702751 containerd[2002]: time="2026-04-17T00:21:22.701995540Z" level=info msg="connecting to shim 
b190ba6703864d27a4259688170821e0b391d814402add43b7c26200cc0d7593" address="unix:///run/containerd/s/be5e788a601a71129b1a6ea5f6c9e230ca75ab478bafc97f43d57a4392956fa5" protocol=ttrpc version=3 Apr 17 00:21:22.719119 systemd[1]: Started cri-containerd-e9d94679a1930eff6d7c89af66800a8b07e568f0122c55e50a2c87146aaee8c8.scope - libcontainer container e9d94679a1930eff6d7c89af66800a8b07e568f0122c55e50a2c87146aaee8c8. Apr 17 00:21:22.728406 systemd[1]: Started cri-containerd-b190ba6703864d27a4259688170821e0b391d814402add43b7c26200cc0d7593.scope - libcontainer container b190ba6703864d27a4259688170821e0b391d814402add43b7c26200cc0d7593. Apr 17 00:21:22.822061 containerd[2002]: time="2026-04-17T00:21:22.821297010Z" level=info msg="StartContainer for \"b190ba6703864d27a4259688170821e0b391d814402add43b7c26200cc0d7593\" returns successfully" Apr 17 00:21:22.822061 containerd[2002]: time="2026-04-17T00:21:22.821368704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-26dsz,Uid:92336033-8312-44d1-be63-69b831c9d7eb,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e9d94679a1930eff6d7c89af66800a8b07e568f0122c55e50a2c87146aaee8c8\"" Apr 17 00:21:22.829527 containerd[2002]: time="2026-04-17T00:21:22.829489245Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 17 00:21:23.617044 kubelet[3317]: I0417 00:21:23.616952 3317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nflwm" podStartSLOduration=1.6169313010000002 podStartE2EDuration="1.616931301s" podCreationTimestamp="2026-04-17 00:21:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 00:21:23.616492916 +0000 UTC m=+7.249179766" watchObservedRunningTime="2026-04-17 00:21:23.616931301 +0000 UTC m=+7.249618145" Apr 17 00:21:24.082031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1763021635.mount: Deactivated 
successfully. Apr 17 00:21:24.222822 update_engine[1963]: I20260417 00:21:24.222765 1963 update_attempter.cc:509] Updating boot flags... Apr 17 00:21:27.948536 containerd[2002]: time="2026-04-17T00:21:27.948480522Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:21:27.950689 containerd[2002]: time="2026-04-17T00:21:27.950616451Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 17 00:21:27.953320 containerd[2002]: time="2026-04-17T00:21:27.953252454Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:21:27.956870 containerd[2002]: time="2026-04-17T00:21:27.956806002Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:21:27.957848 containerd[2002]: time="2026-04-17T00:21:27.957429555Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 5.127893979s" Apr 17 00:21:27.957848 containerd[2002]: time="2026-04-17T00:21:27.957466150Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 17 00:21:27.964081 containerd[2002]: time="2026-04-17T00:21:27.964035620Z" level=info msg="CreateContainer within sandbox \"e9d94679a1930eff6d7c89af66800a8b07e568f0122c55e50a2c87146aaee8c8\" for container 
&ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 17 00:21:27.975994 containerd[2002]: time="2026-04-17T00:21:27.975947874Z" level=info msg="Container b5e02a33cf1acccb1609ade99595b070d5aa443cdf360464998d59897d7d1f43: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:21:27.991887 containerd[2002]: time="2026-04-17T00:21:27.991843521Z" level=info msg="CreateContainer within sandbox \"e9d94679a1930eff6d7c89af66800a8b07e568f0122c55e50a2c87146aaee8c8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b5e02a33cf1acccb1609ade99595b070d5aa443cdf360464998d59897d7d1f43\"" Apr 17 00:21:27.992487 containerd[2002]: time="2026-04-17T00:21:27.992455135Z" level=info msg="StartContainer for \"b5e02a33cf1acccb1609ade99595b070d5aa443cdf360464998d59897d7d1f43\"" Apr 17 00:21:27.993395 containerd[2002]: time="2026-04-17T00:21:27.993365077Z" level=info msg="connecting to shim b5e02a33cf1acccb1609ade99595b070d5aa443cdf360464998d59897d7d1f43" address="unix:///run/containerd/s/f982c24ea0bdaa46d0796efc40ada4e70ba41243fbc130284d1fe528039e4cc1" protocol=ttrpc version=3 Apr 17 00:21:28.022038 systemd[1]: Started cri-containerd-b5e02a33cf1acccb1609ade99595b070d5aa443cdf360464998d59897d7d1f43.scope - libcontainer container b5e02a33cf1acccb1609ade99595b070d5aa443cdf360464998d59897d7d1f43. 
Apr 17 00:21:28.062376 containerd[2002]: time="2026-04-17T00:21:28.062268675Z" level=info msg="StartContainer for \"b5e02a33cf1acccb1609ade99595b070d5aa443cdf360464998d59897d7d1f43\" returns successfully" Apr 17 00:21:29.924617 kubelet[3317]: I0417 00:21:29.924009 3317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-26dsz" podStartSLOduration=2.792875939 podStartE2EDuration="7.923985502s" podCreationTimestamp="2026-04-17 00:21:22 +0000 UTC" firstStartedPulling="2026-04-17 00:21:22.827673376 +0000 UTC m=+6.460360214" lastFinishedPulling="2026-04-17 00:21:27.958782953 +0000 UTC m=+11.591469777" observedRunningTime="2026-04-17 00:21:28.615278655 +0000 UTC m=+12.247965501" watchObservedRunningTime="2026-04-17 00:21:29.923985502 +0000 UTC m=+13.556672347" Apr 17 00:21:35.205286 sudo[2346]: pam_unix(sudo:session): session closed for user root Apr 17 00:21:35.371816 sshd[2345]: Connection closed by 50.85.169.122 port 33962 Apr 17 00:21:35.375708 sshd-session[2342]: pam_unix(sshd:session): session closed for user core Apr 17 00:21:35.382313 systemd-logind[1959]: Session 7 logged out. Waiting for processes to exit. Apr 17 00:21:35.383179 systemd[1]: sshd@6-172.31.17.163:22-50.85.169.122:33962.service: Deactivated successfully. Apr 17 00:21:35.389136 systemd[1]: session-7.scope: Deactivated successfully. Apr 17 00:21:35.390293 systemd[1]: session-7.scope: Consumed 5.321s CPU time, 170.8M memory peak. Apr 17 00:21:35.396644 systemd-logind[1959]: Removed session 7. Apr 17 00:21:39.548147 systemd[1]: Created slice kubepods-besteffort-pode0e13a4a_e550_42d1_a53e_033d0421364f.slice - libcontainer container kubepods-besteffort-pode0e13a4a_e550_42d1_a53e_033d0421364f.slice. 
Apr 17 00:21:39.577307 kubelet[3317]: I0417 00:21:39.577262 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e0e13a4a-e550-42d1-a53e-033d0421364f-typha-certs\") pod \"calico-typha-77799b6ff-rmkvd\" (UID: \"e0e13a4a-e550-42d1-a53e-033d0421364f\") " pod="calico-system/calico-typha-77799b6ff-rmkvd" Apr 17 00:21:39.578888 kubelet[3317]: I0417 00:21:39.577318 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vt2x\" (UniqueName: \"kubernetes.io/projected/e0e13a4a-e550-42d1-a53e-033d0421364f-kube-api-access-7vt2x\") pod \"calico-typha-77799b6ff-rmkvd\" (UID: \"e0e13a4a-e550-42d1-a53e-033d0421364f\") " pod="calico-system/calico-typha-77799b6ff-rmkvd" Apr 17 00:21:39.578888 kubelet[3317]: I0417 00:21:39.577344 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0e13a4a-e550-42d1-a53e-033d0421364f-tigera-ca-bundle\") pod \"calico-typha-77799b6ff-rmkvd\" (UID: \"e0e13a4a-e550-42d1-a53e-033d0421364f\") " pod="calico-system/calico-typha-77799b6ff-rmkvd" Apr 17 00:21:39.727773 systemd[1]: Created slice kubepods-besteffort-podfc55f4d0_7353_4cfe_b2ec_64e935f00c6c.slice - libcontainer container kubepods-besteffort-podfc55f4d0_7353_4cfe_b2ec_64e935f00c6c.slice. 
Apr 17 00:21:39.779098 kubelet[3317]: I0417 00:21:39.779052 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fc55f4d0-7353-4cfe-b2ec-64e935f00c6c-cni-log-dir\") pod \"calico-node-dbqlh\" (UID: \"fc55f4d0-7353-4cfe-b2ec-64e935f00c6c\") " pod="calico-system/calico-node-dbqlh" Apr 17 00:21:39.779429 kubelet[3317]: I0417 00:21:39.779396 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc55f4d0-7353-4cfe-b2ec-64e935f00c6c-lib-modules\") pod \"calico-node-dbqlh\" (UID: \"fc55f4d0-7353-4cfe-b2ec-64e935f00c6c\") " pod="calico-system/calico-node-dbqlh" Apr 17 00:21:39.779804 kubelet[3317]: I0417 00:21:39.779783 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fc55f4d0-7353-4cfe-b2ec-64e935f00c6c-node-certs\") pod \"calico-node-dbqlh\" (UID: \"fc55f4d0-7353-4cfe-b2ec-64e935f00c6c\") " pod="calico-system/calico-node-dbqlh" Apr 17 00:21:39.779983 kubelet[3317]: I0417 00:21:39.779965 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc55f4d0-7353-4cfe-b2ec-64e935f00c6c-xtables-lock\") pod \"calico-node-dbqlh\" (UID: \"fc55f4d0-7353-4cfe-b2ec-64e935f00c6c\") " pod="calico-system/calico-node-dbqlh" Apr 17 00:21:39.780250 kubelet[3317]: I0417 00:21:39.780214 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fc55f4d0-7353-4cfe-b2ec-64e935f00c6c-policysync\") pod \"calico-node-dbqlh\" (UID: \"fc55f4d0-7353-4cfe-b2ec-64e935f00c6c\") " pod="calico-system/calico-node-dbqlh" Apr 17 00:21:39.780339 kubelet[3317]: I0417 00:21:39.780250 3317 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc55f4d0-7353-4cfe-b2ec-64e935f00c6c-tigera-ca-bundle\") pod \"calico-node-dbqlh\" (UID: \"fc55f4d0-7353-4cfe-b2ec-64e935f00c6c\") " pod="calico-system/calico-node-dbqlh" Apr 17 00:21:39.780339 kubelet[3317]: I0417 00:21:39.780299 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fc55f4d0-7353-4cfe-b2ec-64e935f00c6c-flexvol-driver-host\") pod \"calico-node-dbqlh\" (UID: \"fc55f4d0-7353-4cfe-b2ec-64e935f00c6c\") " pod="calico-system/calico-node-dbqlh" Apr 17 00:21:39.780339 kubelet[3317]: I0417 00:21:39.780322 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/fc55f4d0-7353-4cfe-b2ec-64e935f00c6c-nodeproc\") pod \"calico-node-dbqlh\" (UID: \"fc55f4d0-7353-4cfe-b2ec-64e935f00c6c\") " pod="calico-system/calico-node-dbqlh" Apr 17 00:21:39.780466 kubelet[3317]: I0417 00:21:39.780349 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fc55f4d0-7353-4cfe-b2ec-64e935f00c6c-cni-net-dir\") pod \"calico-node-dbqlh\" (UID: \"fc55f4d0-7353-4cfe-b2ec-64e935f00c6c\") " pod="calico-system/calico-node-dbqlh" Apr 17 00:21:39.780466 kubelet[3317]: I0417 00:21:39.780371 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/fc55f4d0-7353-4cfe-b2ec-64e935f00c6c-sys-fs\") pod \"calico-node-dbqlh\" (UID: \"fc55f4d0-7353-4cfe-b2ec-64e935f00c6c\") " pod="calico-system/calico-node-dbqlh" Apr 17 00:21:39.780466 kubelet[3317]: I0417 00:21:39.780398 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fc55f4d0-7353-4cfe-b2ec-64e935f00c6c-var-lib-calico\") pod \"calico-node-dbqlh\" (UID: \"fc55f4d0-7353-4cfe-b2ec-64e935f00c6c\") " pod="calico-system/calico-node-dbqlh" Apr 17 00:21:39.780466 kubelet[3317]: I0417 00:21:39.780421 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fc55f4d0-7353-4cfe-b2ec-64e935f00c6c-var-run-calico\") pod \"calico-node-dbqlh\" (UID: \"fc55f4d0-7353-4cfe-b2ec-64e935f00c6c\") " pod="calico-system/calico-node-dbqlh" Apr 17 00:21:39.780466 kubelet[3317]: I0417 00:21:39.780446 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/fc55f4d0-7353-4cfe-b2ec-64e935f00c6c-bpffs\") pod \"calico-node-dbqlh\" (UID: \"fc55f4d0-7353-4cfe-b2ec-64e935f00c6c\") " pod="calico-system/calico-node-dbqlh" Apr 17 00:21:39.780799 kubelet[3317]: I0417 00:21:39.780472 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fc55f4d0-7353-4cfe-b2ec-64e935f00c6c-cni-bin-dir\") pod \"calico-node-dbqlh\" (UID: \"fc55f4d0-7353-4cfe-b2ec-64e935f00c6c\") " pod="calico-system/calico-node-dbqlh" Apr 17 00:21:39.780799 kubelet[3317]: I0417 00:21:39.780499 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5j5q\" (UniqueName: \"kubernetes.io/projected/fc55f4d0-7353-4cfe-b2ec-64e935f00c6c-kube-api-access-k5j5q\") pod \"calico-node-dbqlh\" (UID: \"fc55f4d0-7353-4cfe-b2ec-64e935f00c6c\") " pod="calico-system/calico-node-dbqlh" Apr 17 00:21:39.804714 kubelet[3317]: E0417 00:21:39.804153 3317 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bvsrn" podUID="6e4c9621-343d-439f-bfb3-71c69fe08c37" Apr 17 00:21:39.864393 containerd[2002]: time="2026-04-17T00:21:39.864338874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77799b6ff-rmkvd,Uid:e0e13a4a-e550-42d1-a53e-033d0421364f,Namespace:calico-system,Attempt:0,}" Apr 17 00:21:39.881686 kubelet[3317]: I0417 00:21:39.881431 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6e4c9621-343d-439f-bfb3-71c69fe08c37-registration-dir\") pod \"csi-node-driver-bvsrn\" (UID: \"6e4c9621-343d-439f-bfb3-71c69fe08c37\") " pod="calico-system/csi-node-driver-bvsrn" Apr 17 00:21:39.881686 kubelet[3317]: I0417 00:21:39.881486 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v45pb\" (UniqueName: \"kubernetes.io/projected/6e4c9621-343d-439f-bfb3-71c69fe08c37-kube-api-access-v45pb\") pod \"csi-node-driver-bvsrn\" (UID: \"6e4c9621-343d-439f-bfb3-71c69fe08c37\") " pod="calico-system/csi-node-driver-bvsrn" Apr 17 00:21:39.881686 kubelet[3317]: I0417 00:21:39.881562 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6e4c9621-343d-439f-bfb3-71c69fe08c37-socket-dir\") pod \"csi-node-driver-bvsrn\" (UID: \"6e4c9621-343d-439f-bfb3-71c69fe08c37\") " pod="calico-system/csi-node-driver-bvsrn" Apr 17 00:21:39.881686 kubelet[3317]: I0417 00:21:39.881646 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6e4c9621-343d-439f-bfb3-71c69fe08c37-kubelet-dir\") pod \"csi-node-driver-bvsrn\" (UID: \"6e4c9621-343d-439f-bfb3-71c69fe08c37\") " pod="calico-system/csi-node-driver-bvsrn" Apr 17 
00:21:39.882027 kubelet[3317]: I0417 00:21:39.881709 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6e4c9621-343d-439f-bfb3-71c69fe08c37-varrun\") pod \"csi-node-driver-bvsrn\" (UID: \"6e4c9621-343d-439f-bfb3-71c69fe08c37\") " pod="calico-system/csi-node-driver-bvsrn" Apr 17 00:21:39.893463 kubelet[3317]: E0417 00:21:39.892611 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:39.893463 kubelet[3317]: W0417 00:21:39.892641 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:39.893463 kubelet[3317]: E0417 00:21:39.892664 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:21:39.911998 kubelet[3317]: E0417 00:21:39.911969 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:39.911998 kubelet[3317]: W0417 00:21:39.911995 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:39.912197 kubelet[3317]: E0417 00:21:39.912019 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:21:39.924957 containerd[2002]: time="2026-04-17T00:21:39.924908477Z" level=info msg="connecting to shim db8ec836e7d9ef74347f06a1b0f2cb63118b06b6d00eff0b84c540466ecef7ed" address="unix:///run/containerd/s/e0444dcfc246e4df9c233b9bd215b16022ef6edb29077cf46e343f71f1cab611" namespace=k8s.io protocol=ttrpc version=3 Apr 17 00:21:39.959996 systemd[1]: Started cri-containerd-db8ec836e7d9ef74347f06a1b0f2cb63118b06b6d00eff0b84c540466ecef7ed.scope - libcontainer container db8ec836e7d9ef74347f06a1b0f2cb63118b06b6d00eff0b84c540466ecef7ed. Apr 17 00:21:39.983333 kubelet[3317]: E0417 00:21:39.983302 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:39.983333 kubelet[3317]: W0417 00:21:39.983327 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:39.983538 kubelet[3317]: E0417 00:21:39.983351 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:21:39.984073 kubelet[3317]: E0417 00:21:39.984041 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:39.984073 kubelet[3317]: W0417 00:21:39.984065 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:39.984217 kubelet[3317]: E0417 00:21:39.984081 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:21:39.984395 kubelet[3317]: E0417 00:21:39.984368 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:39.984395 kubelet[3317]: W0417 00:21:39.984394 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:39.984496 kubelet[3317]: E0417 00:21:39.984407 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:21:39.986055 kubelet[3317]: E0417 00:21:39.986025 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:39.986055 kubelet[3317]: W0417 00:21:39.986047 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:39.986055 kubelet[3317]: E0417 00:21:39.986064 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:21:39.986398 kubelet[3317]: E0417 00:21:39.986379 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:39.986398 kubelet[3317]: W0417 00:21:39.986393 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:39.986508 kubelet[3317]: E0417 00:21:39.986423 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:21:39.986676 kubelet[3317]: E0417 00:21:39.986646 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:39.986676 kubelet[3317]: W0417 00:21:39.986658 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:39.986799 kubelet[3317]: E0417 00:21:39.986685 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:21:39.987028 kubelet[3317]: E0417 00:21:39.987008 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:39.987028 kubelet[3317]: W0417 00:21:39.987022 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:39.987217 kubelet[3317]: E0417 00:21:39.987036 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:21:39.987434 kubelet[3317]: E0417 00:21:39.987412 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:39.987434 kubelet[3317]: W0417 00:21:39.987428 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:39.987672 kubelet[3317]: E0417 00:21:39.987441 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:21:39.987717 kubelet[3317]: E0417 00:21:39.987673 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:39.987717 kubelet[3317]: W0417 00:21:39.987695 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:39.987836 kubelet[3317]: E0417 00:21:39.987715 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:21:39.988001 kubelet[3317]: E0417 00:21:39.987979 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:39.988001 kubelet[3317]: W0417 00:21:39.987993 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:39.988103 kubelet[3317]: E0417 00:21:39.988006 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:21:39.988363 kubelet[3317]: E0417 00:21:39.988340 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:39.989754 kubelet[3317]: W0417 00:21:39.988358 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:39.989821 kubelet[3317]: E0417 00:21:39.989759 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:21:39.990093 kubelet[3317]: E0417 00:21:39.990072 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:39.990093 kubelet[3317]: W0417 00:21:39.990088 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:39.990197 kubelet[3317]: E0417 00:21:39.990108 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:21:39.990378 kubelet[3317]: E0417 00:21:39.990358 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:39.990431 kubelet[3317]: W0417 00:21:39.990379 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:39.990431 kubelet[3317]: E0417 00:21:39.990393 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:21:39.990670 kubelet[3317]: E0417 00:21:39.990649 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:39.990670 kubelet[3317]: W0417 00:21:39.990662 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:39.990817 kubelet[3317]: E0417 00:21:39.990676 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:21:39.991029 kubelet[3317]: E0417 00:21:39.991009 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:39.991090 kubelet[3317]: W0417 00:21:39.991023 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:39.991090 kubelet[3317]: E0417 00:21:39.991047 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:21:39.991391 kubelet[3317]: E0417 00:21:39.991369 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:39.991455 kubelet[3317]: W0417 00:21:39.991391 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:39.991455 kubelet[3317]: E0417 00:21:39.991405 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:21:39.991683 kubelet[3317]: E0417 00:21:39.991662 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:39.991683 kubelet[3317]: W0417 00:21:39.991676 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:39.991816 kubelet[3317]: E0417 00:21:39.991702 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:21:39.992084 kubelet[3317]: E0417 00:21:39.992063 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:39.992084 kubelet[3317]: W0417 00:21:39.992077 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:39.992184 kubelet[3317]: E0417 00:21:39.992090 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:21:39.992942 kubelet[3317]: E0417 00:21:39.992806 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:39.992942 kubelet[3317]: W0417 00:21:39.992821 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:39.992942 kubelet[3317]: E0417 00:21:39.992834 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:21:39.993158 kubelet[3317]: E0417 00:21:39.993147 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:39.993356 kubelet[3317]: W0417 00:21:39.993215 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:39.993356 kubelet[3317]: E0417 00:21:39.993231 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:21:39.993510 kubelet[3317]: E0417 00:21:39.993500 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:39.993584 kubelet[3317]: W0417 00:21:39.993574 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:39.993760 kubelet[3317]: E0417 00:21:39.993640 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:21:39.993958 kubelet[3317]: E0417 00:21:39.993944 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:39.994383 kubelet[3317]: W0417 00:21:39.993959 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:39.994383 kubelet[3317]: E0417 00:21:39.993972 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:21:39.994383 kubelet[3317]: E0417 00:21:39.994147 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:39.994383 kubelet[3317]: W0417 00:21:39.994156 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:39.994383 kubelet[3317]: E0417 00:21:39.994168 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:21:39.994383 kubelet[3317]: E0417 00:21:39.994361 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:39.994383 kubelet[3317]: W0417 00:21:39.994370 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:39.994383 kubelet[3317]: E0417 00:21:39.994381 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:21:39.994705 kubelet[3317]: E0417 00:21:39.994593 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:39.994705 kubelet[3317]: W0417 00:21:39.994604 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:39.994705 kubelet[3317]: E0417 00:21:39.994614 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:21:40.007489 kubelet[3317]: E0417 00:21:40.007447 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:40.007489 kubelet[3317]: W0417 00:21:40.007470 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:40.007489 kubelet[3317]: E0417 00:21:40.007492 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:21:40.033237 containerd[2002]: time="2026-04-17T00:21:40.033172790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77799b6ff-rmkvd,Uid:e0e13a4a-e550-42d1-a53e-033d0421364f,Namespace:calico-system,Attempt:0,} returns sandbox id \"db8ec836e7d9ef74347f06a1b0f2cb63118b06b6d00eff0b84c540466ecef7ed\"" Apr 17 00:21:40.035233 containerd[2002]: time="2026-04-17T00:21:40.035121475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 17 00:21:40.037042 containerd[2002]: time="2026-04-17T00:21:40.036995890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dbqlh,Uid:fc55f4d0-7353-4cfe-b2ec-64e935f00c6c,Namespace:calico-system,Attempt:0,}" Apr 17 00:21:40.088833 containerd[2002]: time="2026-04-17T00:21:40.088490224Z" level=info msg="connecting to shim ee2fb0586caec9cc2e40ce7560e0675e2084427fb0d4e98f4c8309e4ac33dc1b" address="unix:///run/containerd/s/402c2c3f003f3daa43ec3ef863515ba94779008fc9cef4390dc7df93fbfdf17a" namespace=k8s.io protocol=ttrpc version=3 Apr 17 00:21:40.116952 systemd[1]: Started cri-containerd-ee2fb0586caec9cc2e40ce7560e0675e2084427fb0d4e98f4c8309e4ac33dc1b.scope - libcontainer container ee2fb0586caec9cc2e40ce7560e0675e2084427fb0d4e98f4c8309e4ac33dc1b. Apr 17 00:21:40.156703 containerd[2002]: time="2026-04-17T00:21:40.156658914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dbqlh,Uid:fc55f4d0-7353-4cfe-b2ec-64e935f00c6c,Namespace:calico-system,Attempt:0,} returns sandbox id \"ee2fb0586caec9cc2e40ce7560e0675e2084427fb0d4e98f4c8309e4ac33dc1b\"" Apr 17 00:21:41.374624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount955907506.mount: Deactivated successfully. 
Apr 17 00:21:41.548104 kubelet[3317]: E0417 00:21:41.548055 3317 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bvsrn" podUID="6e4c9621-343d-439f-bfb3-71c69fe08c37"
Apr 17 00:21:43.168108 containerd[2002]: time="2026-04-17T00:21:43.168058095Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 00:21:43.170717 containerd[2002]: time="2026-04-17T00:21:43.170524830Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596"
Apr 17 00:21:43.173202 containerd[2002]: time="2026-04-17T00:21:43.173158273Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 00:21:43.177257 containerd[2002]: time="2026-04-17T00:21:43.177126614Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 00:21:43.178822 containerd[2002]: time="2026-04-17T00:21:43.178529528Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 3.143332468s"
Apr 17 00:21:43.179077 containerd[2002]: time="2026-04-17T00:21:43.178968843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\""
Apr 17 00:21:43.180771 containerd[2002]: time="2026-04-17T00:21:43.180739207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Apr 17 00:21:43.205154 containerd[2002]: time="2026-04-17T00:21:43.205107544Z" level=info msg="CreateContainer within sandbox \"db8ec836e7d9ef74347f06a1b0f2cb63118b06b6d00eff0b84c540466ecef7ed\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Apr 17 00:21:43.223049 containerd[2002]: time="2026-04-17T00:21:43.223001091Z" level=info msg="Container ae47f71a982f1c554dcd0551f2a53f28171ca4082aa00055d7332fffedb08d80: CDI devices from CRI Config.CDIDevices: []"
Apr 17 00:21:43.229643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2308970894.mount: Deactivated successfully.
Apr 17 00:21:43.242913 containerd[2002]: time="2026-04-17T00:21:43.242864108Z" level=info msg="CreateContainer within sandbox \"db8ec836e7d9ef74347f06a1b0f2cb63118b06b6d00eff0b84c540466ecef7ed\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ae47f71a982f1c554dcd0551f2a53f28171ca4082aa00055d7332fffedb08d80\""
Apr 17 00:21:43.243552 containerd[2002]: time="2026-04-17T00:21:43.243516017Z" level=info msg="StartContainer for \"ae47f71a982f1c554dcd0551f2a53f28171ca4082aa00055d7332fffedb08d80\""
Apr 17 00:21:43.250505 containerd[2002]: time="2026-04-17T00:21:43.250458635Z" level=info msg="connecting to shim ae47f71a982f1c554dcd0551f2a53f28171ca4082aa00055d7332fffedb08d80" address="unix:///run/containerd/s/e0444dcfc246e4df9c233b9bd215b16022ef6edb29077cf46e343f71f1cab611" protocol=ttrpc version=3
Apr 17 00:21:43.276071 systemd[1]: Started cri-containerd-ae47f71a982f1c554dcd0551f2a53f28171ca4082aa00055d7332fffedb08d80.scope - libcontainer container ae47f71a982f1c554dcd0551f2a53f28171ca4082aa00055d7332fffedb08d80.
Apr 17 00:21:43.346903 containerd[2002]: time="2026-04-17T00:21:43.346834940Z" level=info msg="StartContainer for \"ae47f71a982f1c554dcd0551f2a53f28171ca4082aa00055d7332fffedb08d80\" returns successfully"
Apr 17 00:21:43.548427 kubelet[3317]: E0417 00:21:43.548359 3317 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bvsrn" podUID="6e4c9621-343d-439f-bfb3-71c69fe08c37"
Apr 17 00:21:43.691770 kubelet[3317]: E0417 00:21:43.691715 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 00:21:43.691770 kubelet[3317]: W0417 00:21:43.691766 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 00:21:43.692070 kubelet[3317]: E0417 00:21:43.691790 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 00:21:43.692963 kubelet[3317]: E0417 00:21:43.692940 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 00:21:43.692963 kubelet[3317]: W0417 00:21:43.692961 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 00:21:43.693118 kubelet[3317]: E0417 00:21:43.692978 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Apr 17 00:21:43.731406 kubelet[3317]: E0417 00:21:43.731364 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 00:21:43.731406 kubelet[3317]: W0417 00:21:43.731373 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 00:21:43.731406 kubelet[3317]: E0417 00:21:43.731384 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 00:21:43.760833 kubelet[3317]: I0417 00:21:43.760752 3317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-77799b6ff-rmkvd" podStartSLOduration=1.615386355 podStartE2EDuration="4.760706518s" podCreationTimestamp="2026-04-17 00:21:39 +0000 UTC" firstStartedPulling="2026-04-17 00:21:40.034689287 +0000 UTC m=+23.667376120" lastFinishedPulling="2026-04-17 00:21:43.180009455 +0000 UTC m=+26.812696283" observedRunningTime="2026-04-17 00:21:43.760545202 +0000 UTC m=+27.393232049" watchObservedRunningTime="2026-04-17 00:21:43.760706518 +0000 UTC m=+27.393393363"
Apr 17 00:21:44.567304 containerd[2002]: time="2026-04-17T00:21:44.567252052Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 00:21:44.569272 containerd[2002]: time="2026-04-17T00:21:44.569234821Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250"
Apr 17 00:21:44.571784 containerd[2002]: time="2026-04-17T00:21:44.571746992Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 00:21:44.576047 containerd[2002]: time="2026-04-17T00:21:44.575960934Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 00:21:44.576799 containerd[2002]: time="2026-04-17T00:21:44.576758660Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.395984052s"
Apr 17 00:21:44.577064 containerd[2002]: time="2026-04-17T00:21:44.576808655Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\""
Apr 17 00:21:44.585247 containerd[2002]: time="2026-04-17T00:21:44.584888507Z" level=info msg="CreateContainer within sandbox \"ee2fb0586caec9cc2e40ce7560e0675e2084427fb0d4e98f4c8309e4ac33dc1b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Apr 17 00:21:44.605030 containerd[2002]: time="2026-04-17T00:21:44.603890936Z" level=info msg="Container ddad141252f4ef9f17d7c95efc084ff3c76bb4c54c2e1d373a276b1f3a984149: CDI devices from CRI Config.CDIDevices: []"
Apr 17 00:21:44.620290 containerd[2002]: time="2026-04-17T00:21:44.620240189Z" level=info msg="CreateContainer within sandbox \"ee2fb0586caec9cc2e40ce7560e0675e2084427fb0d4e98f4c8309e4ac33dc1b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ddad141252f4ef9f17d7c95efc084ff3c76bb4c54c2e1d373a276b1f3a984149\""
Apr 17 00:21:44.621078 containerd[2002]: time="2026-04-17T00:21:44.621044990Z" level=info msg="StartContainer for \"ddad141252f4ef9f17d7c95efc084ff3c76bb4c54c2e1d373a276b1f3a984149\""
Apr 17 00:21:44.623597 containerd[2002]: time="2026-04-17T00:21:44.623556123Z" level=info msg="connecting to shim ddad141252f4ef9f17d7c95efc084ff3c76bb4c54c2e1d373a276b1f3a984149" address="unix:///run/containerd/s/402c2c3f003f3daa43ec3ef863515ba94779008fc9cef4390dc7df93fbfdf17a" protocol=ttrpc version=3
Apr 17 00:21:44.650936 systemd[1]: Started cri-containerd-ddad141252f4ef9f17d7c95efc084ff3c76bb4c54c2e1d373a276b1f3a984149.scope - libcontainer container ddad141252f4ef9f17d7c95efc084ff3c76bb4c54c2e1d373a276b1f3a984149.
Apr 17 00:21:44.710933 kubelet[3317]: E0417 00:21:44.710908 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 00:21:44.712041 kubelet[3317]: W0417 00:21:44.711679 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 00:21:44.712041 kubelet[3317]: E0417 00:21:44.711770 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 00:21:44.712385 kubelet[3317]: E0417 00:21:44.712150 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 00:21:44.712385 kubelet[3317]: W0417 00:21:44.712163 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 00:21:44.712385 kubelet[3317]: E0417 00:21:44.712181 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Apr 17 00:21:44.724316 kubelet[3317]: E0417 00:21:44.724240 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:44.724316 kubelet[3317]: W0417 00:21:44.724254 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:44.724316 kubelet[3317]: E0417 00:21:44.724267 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:21:44.725738 kubelet[3317]: E0417 00:21:44.724818 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:44.725738 kubelet[3317]: W0417 00:21:44.724832 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:44.725738 kubelet[3317]: E0417 00:21:44.724845 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:21:44.726129 kubelet[3317]: E0417 00:21:44.726113 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:44.726129 kubelet[3317]: W0417 00:21:44.726130 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:44.726244 kubelet[3317]: E0417 00:21:44.726162 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:21:44.726425 kubelet[3317]: E0417 00:21:44.726411 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:44.726484 kubelet[3317]: W0417 00:21:44.726425 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:44.726484 kubelet[3317]: E0417 00:21:44.726438 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:21:44.726674 kubelet[3317]: E0417 00:21:44.726660 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:44.726755 kubelet[3317]: W0417 00:21:44.726674 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:44.726755 kubelet[3317]: E0417 00:21:44.726689 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:21:44.731856 kubelet[3317]: E0417 00:21:44.731829 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:44.732829 kubelet[3317]: W0417 00:21:44.731990 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:44.733169 kubelet[3317]: E0417 00:21:44.733007 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:21:44.734173 kubelet[3317]: E0417 00:21:44.733997 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:44.734439 kubelet[3317]: W0417 00:21:44.734421 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:44.734933 kubelet[3317]: E0417 00:21:44.734782 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:21:44.735712 kubelet[3317]: E0417 00:21:44.735697 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:44.736442 kubelet[3317]: W0417 00:21:44.735811 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:44.736442 kubelet[3317]: E0417 00:21:44.735833 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:21:44.737063 kubelet[3317]: E0417 00:21:44.736874 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:44.737859 kubelet[3317]: W0417 00:21:44.737254 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:44.737859 kubelet[3317]: E0417 00:21:44.737281 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:21:44.742731 kubelet[3317]: E0417 00:21:44.742056 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:44.742731 kubelet[3317]: W0417 00:21:44.742078 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:44.742731 kubelet[3317]: E0417 00:21:44.742101 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:21:44.746004 kubelet[3317]: E0417 00:21:44.744992 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:44.746004 kubelet[3317]: W0417 00:21:44.745105 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:44.746004 kubelet[3317]: E0417 00:21:44.745133 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:21:44.748904 kubelet[3317]: E0417 00:21:44.748538 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:44.748904 kubelet[3317]: W0417 00:21:44.748558 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:44.748904 kubelet[3317]: E0417 00:21:44.748577 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:21:44.749870 kubelet[3317]: E0417 00:21:44.749698 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:44.753702 kubelet[3317]: W0417 00:21:44.749718 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:44.753702 kubelet[3317]: E0417 00:21:44.752877 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:21:44.753702 kubelet[3317]: E0417 00:21:44.753555 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:44.753702 kubelet[3317]: W0417 00:21:44.753569 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:44.753702 kubelet[3317]: E0417 00:21:44.753585 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:21:44.754255 kubelet[3317]: E0417 00:21:44.754084 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:44.754255 kubelet[3317]: W0417 00:21:44.754099 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:44.754255 kubelet[3317]: E0417 00:21:44.754114 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:21:44.755062 kubelet[3317]: E0417 00:21:44.755031 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:44.756643 kubelet[3317]: W0417 00:21:44.756282 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:44.756643 kubelet[3317]: E0417 00:21:44.756313 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:21:44.757680 kubelet[3317]: E0417 00:21:44.757540 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:44.757680 kubelet[3317]: W0417 00:21:44.757552 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:44.757680 kubelet[3317]: E0417 00:21:44.757567 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:21:44.758880 kubelet[3317]: E0417 00:21:44.758277 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:44.758880 kubelet[3317]: W0417 00:21:44.758395 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:44.758880 kubelet[3317]: E0417 00:21:44.758412 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:21:44.759556 kubelet[3317]: E0417 00:21:44.759538 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:44.760745 kubelet[3317]: W0417 00:21:44.759647 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:44.760745 kubelet[3317]: E0417 00:21:44.759685 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:21:44.761288 kubelet[3317]: E0417 00:21:44.761222 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:44.761288 kubelet[3317]: W0417 00:21:44.761238 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:44.761288 kubelet[3317]: E0417 00:21:44.761253 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:21:44.762064 kubelet[3317]: E0417 00:21:44.762017 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:44.762064 kubelet[3317]: W0417 00:21:44.762034 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:44.762064 kubelet[3317]: E0417 00:21:44.762049 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:21:44.763754 kubelet[3317]: E0417 00:21:44.763188 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:44.763873 kubelet[3317]: W0417 00:21:44.763857 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:44.764778 kubelet[3317]: E0417 00:21:44.764758 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:21:44.765189 kubelet[3317]: E0417 00:21:44.765174 3317 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:21:44.765278 kubelet[3317]: W0417 00:21:44.765266 3317 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:21:44.765376 kubelet[3317]: E0417 00:21:44.765346 3317 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:21:44.790651 systemd[1]: cri-containerd-ddad141252f4ef9f17d7c95efc084ff3c76bb4c54c2e1d373a276b1f3a984149.scope: Deactivated successfully. Apr 17 00:21:44.795917 containerd[2002]: time="2026-04-17T00:21:44.795838912Z" level=info msg="StartContainer for \"ddad141252f4ef9f17d7c95efc084ff3c76bb4c54c2e1d373a276b1f3a984149\" returns successfully" Apr 17 00:21:44.863340 containerd[2002]: time="2026-04-17T00:21:44.863200792Z" level=info msg="received container exit event container_id:\"ddad141252f4ef9f17d7c95efc084ff3c76bb4c54c2e1d373a276b1f3a984149\" id:\"ddad141252f4ef9f17d7c95efc084ff3c76bb4c54c2e1d373a276b1f3a984149\" pid:4150 exited_at:{seconds:1776385304 nanos:802593314}" Apr 17 00:21:44.904987 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ddad141252f4ef9f17d7c95efc084ff3c76bb4c54c2e1d373a276b1f3a984149-rootfs.mount: Deactivated successfully. 
Apr 17 00:21:45.548069 kubelet[3317]: E0417 00:21:45.548011 3317 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bvsrn" podUID="6e4c9621-343d-439f-bfb3-71c69fe08c37" Apr 17 00:21:45.718212 containerd[2002]: time="2026-04-17T00:21:45.718171044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 17 00:21:47.548292 kubelet[3317]: E0417 00:21:47.548227 3317 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bvsrn" podUID="6e4c9621-343d-439f-bfb3-71c69fe08c37" Apr 17 00:21:49.548776 kubelet[3317]: E0417 00:21:49.548699 3317 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bvsrn" podUID="6e4c9621-343d-439f-bfb3-71c69fe08c37" Apr 17 00:21:51.548136 kubelet[3317]: E0417 00:21:51.548082 3317 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bvsrn" podUID="6e4c9621-343d-439f-bfb3-71c69fe08c37" Apr 17 00:21:53.548209 kubelet[3317]: E0417 00:21:53.548134 3317 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-bvsrn" podUID="6e4c9621-343d-439f-bfb3-71c69fe08c37" Apr 17 00:21:55.548172 kubelet[3317]: E0417 00:21:55.548120 3317 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bvsrn" podUID="6e4c9621-343d-439f-bfb3-71c69fe08c37" Apr 17 00:21:56.976897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1904551085.mount: Deactivated successfully. Apr 17 00:21:57.036969 containerd[2002]: time="2026-04-17T00:21:57.031983371Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:21:57.038009 containerd[2002]: time="2026-04-17T00:21:57.037955275Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 17 00:21:57.038513 containerd[2002]: time="2026-04-17T00:21:57.038375597Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:21:57.041765 containerd[2002]: time="2026-04-17T00:21:57.041702671Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:21:57.042540 containerd[2002]: time="2026-04-17T00:21:57.042392485Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 11.324169413s" Apr 17 
00:21:57.042540 containerd[2002]: time="2026-04-17T00:21:57.042430003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 17 00:21:57.049999 containerd[2002]: time="2026-04-17T00:21:57.049953284Z" level=info msg="CreateContainer within sandbox \"ee2fb0586caec9cc2e40ce7560e0675e2084427fb0d4e98f4c8309e4ac33dc1b\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 17 00:21:57.069011 containerd[2002]: time="2026-04-17T00:21:57.068800335Z" level=info msg="Container 8f74a98398f9fcd8aae21df7c01ce69a70c98ba48616d37b88d20c62add83196: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:21:57.075657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1126062141.mount: Deactivated successfully. Apr 17 00:21:57.103592 containerd[2002]: time="2026-04-17T00:21:57.103543543Z" level=info msg="CreateContainer within sandbox \"ee2fb0586caec9cc2e40ce7560e0675e2084427fb0d4e98f4c8309e4ac33dc1b\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"8f74a98398f9fcd8aae21df7c01ce69a70c98ba48616d37b88d20c62add83196\"" Apr 17 00:21:57.115869 containerd[2002]: time="2026-04-17T00:21:57.115817279Z" level=info msg="StartContainer for \"8f74a98398f9fcd8aae21df7c01ce69a70c98ba48616d37b88d20c62add83196\"" Apr 17 00:21:57.117400 containerd[2002]: time="2026-04-17T00:21:57.117359985Z" level=info msg="connecting to shim 8f74a98398f9fcd8aae21df7c01ce69a70c98ba48616d37b88d20c62add83196" address="unix:///run/containerd/s/402c2c3f003f3daa43ec3ef863515ba94779008fc9cef4390dc7df93fbfdf17a" protocol=ttrpc version=3 Apr 17 00:21:57.196943 systemd[1]: Started cri-containerd-8f74a98398f9fcd8aae21df7c01ce69a70c98ba48616d37b88d20c62add83196.scope - libcontainer container 8f74a98398f9fcd8aae21df7c01ce69a70c98ba48616d37b88d20c62add83196. 
Apr 17 00:21:57.274081 containerd[2002]: time="2026-04-17T00:21:57.273955893Z" level=info msg="StartContainer for \"8f74a98398f9fcd8aae21df7c01ce69a70c98ba48616d37b88d20c62add83196\" returns successfully" Apr 17 00:21:57.351090 systemd[1]: cri-containerd-8f74a98398f9fcd8aae21df7c01ce69a70c98ba48616d37b88d20c62add83196.scope: Deactivated successfully. Apr 17 00:21:57.351619 systemd[1]: cri-containerd-8f74a98398f9fcd8aae21df7c01ce69a70c98ba48616d37b88d20c62add83196.scope: Consumed 76ms CPU time, 23.3M memory peak, 2.8M read from disk. Apr 17 00:21:57.353293 containerd[2002]: time="2026-04-17T00:21:57.353221611Z" level=info msg="received container exit event container_id:\"8f74a98398f9fcd8aae21df7c01ce69a70c98ba48616d37b88d20c62add83196\" id:\"8f74a98398f9fcd8aae21df7c01ce69a70c98ba48616d37b88d20c62add83196\" pid:4242 exited_at:{seconds:1776385317 nanos:352922665}" Apr 17 00:21:57.548662 kubelet[3317]: E0417 00:21:57.548506 3317 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bvsrn" podUID="6e4c9621-343d-439f-bfb3-71c69fe08c37" Apr 17 00:21:57.758717 containerd[2002]: time="2026-04-17T00:21:57.758651917Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 17 00:21:57.977166 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f74a98398f9fcd8aae21df7c01ce69a70c98ba48616d37b88d20c62add83196-rootfs.mount: Deactivated successfully. 
Apr 17 00:21:59.548967 kubelet[3317]: E0417 00:21:59.548509 3317 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bvsrn" podUID="6e4c9621-343d-439f-bfb3-71c69fe08c37" Apr 17 00:22:01.338204 containerd[2002]: time="2026-04-17T00:22:01.338140945Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:22:01.340886 containerd[2002]: time="2026-04-17T00:22:01.340838170Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 17 00:22:01.344998 containerd[2002]: time="2026-04-17T00:22:01.344919091Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:22:01.356173 containerd[2002]: time="2026-04-17T00:22:01.355923224Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:22:01.356930 containerd[2002]: time="2026-04-17T00:22:01.356890529Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.598008018s" Apr 17 00:22:01.357961 containerd[2002]: time="2026-04-17T00:22:01.357019651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference 
\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 17 00:22:01.366837 containerd[2002]: time="2026-04-17T00:22:01.366784155Z" level=info msg="CreateContainer within sandbox \"ee2fb0586caec9cc2e40ce7560e0675e2084427fb0d4e98f4c8309e4ac33dc1b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 17 00:22:01.390753 containerd[2002]: time="2026-04-17T00:22:01.389890150Z" level=info msg="Container f186d72d4e9a6c711164c40320841273c6636f083410f6fec00e7b6998a00b16: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:22:01.410273 containerd[2002]: time="2026-04-17T00:22:01.410224175Z" level=info msg="CreateContainer within sandbox \"ee2fb0586caec9cc2e40ce7560e0675e2084427fb0d4e98f4c8309e4ac33dc1b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f186d72d4e9a6c711164c40320841273c6636f083410f6fec00e7b6998a00b16\"" Apr 17 00:22:01.411767 containerd[2002]: time="2026-04-17T00:22:01.411087584Z" level=info msg="StartContainer for \"f186d72d4e9a6c711164c40320841273c6636f083410f6fec00e7b6998a00b16\"" Apr 17 00:22:01.413048 containerd[2002]: time="2026-04-17T00:22:01.413011968Z" level=info msg="connecting to shim f186d72d4e9a6c711164c40320841273c6636f083410f6fec00e7b6998a00b16" address="unix:///run/containerd/s/402c2c3f003f3daa43ec3ef863515ba94779008fc9cef4390dc7df93fbfdf17a" protocol=ttrpc version=3 Apr 17 00:22:01.444577 systemd[1]: Started cri-containerd-f186d72d4e9a6c711164c40320841273c6636f083410f6fec00e7b6998a00b16.scope - libcontainer container f186d72d4e9a6c711164c40320841273c6636f083410f6fec00e7b6998a00b16. 
Apr 17 00:22:01.533781 containerd[2002]: time="2026-04-17T00:22:01.533714671Z" level=info msg="StartContainer for \"f186d72d4e9a6c711164c40320841273c6636f083410f6fec00e7b6998a00b16\" returns successfully" Apr 17 00:22:01.548695 kubelet[3317]: E0417 00:22:01.548583 3317 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bvsrn" podUID="6e4c9621-343d-439f-bfb3-71c69fe08c37" Apr 17 00:22:02.694514 systemd[1]: cri-containerd-f186d72d4e9a6c711164c40320841273c6636f083410f6fec00e7b6998a00b16.scope: Deactivated successfully. Apr 17 00:22:02.694880 systemd[1]: cri-containerd-f186d72d4e9a6c711164c40320841273c6636f083410f6fec00e7b6998a00b16.scope: Consumed 695ms CPU time, 177.9M memory peak, 4.2M read from disk, 177M written to disk. Apr 17 00:22:02.818914 containerd[2002]: time="2026-04-17T00:22:02.818614061Z" level=info msg="received container exit event container_id:\"f186d72d4e9a6c711164c40320841273c6636f083410f6fec00e7b6998a00b16\" id:\"f186d72d4e9a6c711164c40320841273c6636f083410f6fec00e7b6998a00b16\" pid:4300 exited_at:{seconds:1776385322 nanos:818004408}" Apr 17 00:22:02.868578 kubelet[3317]: I0417 00:22:02.868473 3317 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Apr 17 00:22:02.872164 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f186d72d4e9a6c711164c40320841273c6636f083410f6fec00e7b6998a00b16-rootfs.mount: Deactivated successfully. Apr 17 00:22:02.992689 systemd[1]: Created slice kubepods-burstable-podbcf739fd_30d6_4de6_aa6a_a1d7e5ed1cfc.slice - libcontainer container kubepods-burstable-podbcf739fd_30d6_4de6_aa6a_a1d7e5ed1cfc.slice. 
Apr 17 00:22:03.011241 kubelet[3317]: E0417 00:22:03.011193 3317 reflector.go:205] "Failed to watch" err="failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ip-172-31-17-163\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-17-163' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"calico-apiserver-certs\"" type="*v1.Secret"
Apr 17 00:22:03.011575 kubelet[3317]: E0417 00:22:03.011532 3317 reflector.go:205] "Failed to watch" err="failed to list *v1.Secret: secrets \"goldmane-key-pair\" is forbidden: User \"system:node:ip-172-31-17-163\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-17-163' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"goldmane-key-pair\"" type="*v1.Secret"
Apr 17 00:22:03.011804 kubelet[3317]: E0417 00:22:03.011611 3317 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"goldmane-ca-bundle\" is forbidden: User \"system:node:ip-172-31-17-163\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-17-163' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"goldmane-ca-bundle\"" type="*v1.ConfigMap"
Apr 17 00:22:03.011804 kubelet[3317]: E0417 00:22:03.011698 3317 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"goldmane\" is forbidden: User \"system:node:ip-172-31-17-163\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-17-163' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"goldmane\"" type="*v1.ConfigMap"
Apr 17 00:22:03.013258 systemd[1]: Created slice kubepods-burstable-podfaf227a2_c41a_476d_ac2e_763e2502ebdb.slice - libcontainer container kubepods-burstable-podfaf227a2_c41a_476d_ac2e_763e2502ebdb.slice.
Apr 17 00:22:03.027208 systemd[1]: Created slice kubepods-besteffort-pod5c67ec73_7d3f_4924_974e_10ac71826e12.slice - libcontainer container kubepods-besteffort-pod5c67ec73_7d3f_4924_974e_10ac71826e12.slice.
Apr 17 00:22:03.037854 systemd[1]: Created slice kubepods-besteffort-pod4c4701d9_9047_4448_a009_ce8fbc675f90.slice - libcontainer container kubepods-besteffort-pod4c4701d9_9047_4448_a009_ce8fbc675f90.slice.
Apr 17 00:22:03.052575 systemd[1]: Created slice kubepods-besteffort-pod2f68e2cd_6388_416b_9cb5_5cf309947192.slice - libcontainer container kubepods-besteffort-pod2f68e2cd_6388_416b_9cb5_5cf309947192.slice.
Apr 17 00:22:03.063903 systemd[1]: Created slice kubepods-besteffort-podcc519b59_dfc2_4b7e_ba52_f6ef50a332cb.slice - libcontainer container kubepods-besteffort-podcc519b59_dfc2_4b7e_ba52_f6ef50a332cb.slice.
Apr 17 00:22:03.076237 systemd[1]: Created slice kubepods-besteffort-poded174bd2_a458_4cc7_9616_028f48dff565.slice - libcontainer container kubepods-besteffort-poded174bd2_a458_4cc7_9616_028f48dff565.slice.
Apr 17 00:22:03.079295 kubelet[3317]: I0417 00:22:03.079252 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/faf227a2-c41a-476d-ac2e-763e2502ebdb-config-volume\") pod \"coredns-66bc5c9577-krz7c\" (UID: \"faf227a2-c41a-476d-ac2e-763e2502ebdb\") " pod="kube-system/coredns-66bc5c9577-krz7c"
Apr 17 00:22:03.079295 kubelet[3317]: I0417 00:22:03.079291 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ed174bd2-a458-4cc7-9616-028f48dff565-nginx-config\") pod \"whisker-6d4d67954c-fwtv2\" (UID: \"ed174bd2-a458-4cc7-9616-028f48dff565\") " pod="calico-system/whisker-6d4d67954c-fwtv2"
Apr 17 00:22:03.079479 kubelet[3317]: I0417 00:22:03.079327 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed174bd2-a458-4cc7-9616-028f48dff565-whisker-ca-bundle\") pod \"whisker-6d4d67954c-fwtv2\" (UID: \"ed174bd2-a458-4cc7-9616-028f48dff565\") " pod="calico-system/whisker-6d4d67954c-fwtv2"
Apr 17 00:22:03.079479 kubelet[3317]: I0417 00:22:03.079352 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lsxv\" (UniqueName: \"kubernetes.io/projected/2f68e2cd-6388-416b-9cb5-5cf309947192-kube-api-access-4lsxv\") pod \"calico-apiserver-878d7484f-md97m\" (UID: \"2f68e2cd-6388-416b-9cb5-5cf309947192\") " pod="calico-system/calico-apiserver-878d7484f-md97m"
Apr 17 00:22:03.079479 kubelet[3317]: I0417 00:22:03.079381 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bcf739fd-30d6-4de6-aa6a-a1d7e5ed1cfc-config-volume\") pod \"coredns-66bc5c9577-dr688\" (UID: \"bcf739fd-30d6-4de6-aa6a-a1d7e5ed1cfc\") " pod="kube-system/coredns-66bc5c9577-dr688"
Apr 17 00:22:03.079479 kubelet[3317]: I0417 00:22:03.079406 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc519b59-dfc2-4b7e-ba52-f6ef50a332cb-config\") pod \"goldmane-cccfbd5cf-8djnz\" (UID: \"cc519b59-dfc2-4b7e-ba52-f6ef50a332cb\") " pod="calico-system/goldmane-cccfbd5cf-8djnz"
Apr 17 00:22:03.079479 kubelet[3317]: I0417 00:22:03.079426 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxpmm\" (UniqueName: \"kubernetes.io/projected/cc519b59-dfc2-4b7e-ba52-f6ef50a332cb-kube-api-access-zxpmm\") pod \"goldmane-cccfbd5cf-8djnz\" (UID: \"cc519b59-dfc2-4b7e-ba52-f6ef50a332cb\") " pod="calico-system/goldmane-cccfbd5cf-8djnz"
Apr 17 00:22:03.079707 kubelet[3317]: I0417 00:22:03.079451 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ed174bd2-a458-4cc7-9616-028f48dff565-whisker-backend-key-pair\") pod \"whisker-6d4d67954c-fwtv2\" (UID: \"ed174bd2-a458-4cc7-9616-028f48dff565\") " pod="calico-system/whisker-6d4d67954c-fwtv2"
Apr 17 00:22:03.079707 kubelet[3317]: I0417 00:22:03.079478 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2f68e2cd-6388-416b-9cb5-5cf309947192-calico-apiserver-certs\") pod \"calico-apiserver-878d7484f-md97m\" (UID: \"2f68e2cd-6388-416b-9cb5-5cf309947192\") " pod="calico-system/calico-apiserver-878d7484f-md97m"
Apr 17 00:22:03.079707 kubelet[3317]: I0417 00:22:03.079505 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hcj8\" (UniqueName: \"kubernetes.io/projected/4c4701d9-9047-4448-a009-ce8fbc675f90-kube-api-access-5hcj8\") pod \"calico-apiserver-878d7484f-ngljs\" (UID: \"4c4701d9-9047-4448-a009-ce8fbc675f90\") " pod="calico-system/calico-apiserver-878d7484f-ngljs"
Apr 17 00:22:03.079707 kubelet[3317]: I0417 00:22:03.079535 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzpmz\" (UniqueName: \"kubernetes.io/projected/bcf739fd-30d6-4de6-aa6a-a1d7e5ed1cfc-kube-api-access-hzpmz\") pod \"coredns-66bc5c9577-dr688\" (UID: \"bcf739fd-30d6-4de6-aa6a-a1d7e5ed1cfc\") " pod="kube-system/coredns-66bc5c9577-dr688"
Apr 17 00:22:03.079707 kubelet[3317]: I0417 00:22:03.079560 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvhhc\" (UniqueName: \"kubernetes.io/projected/faf227a2-c41a-476d-ac2e-763e2502ebdb-kube-api-access-bvhhc\") pod \"coredns-66bc5c9577-krz7c\" (UID: \"faf227a2-c41a-476d-ac2e-763e2502ebdb\") " pod="kube-system/coredns-66bc5c9577-krz7c"
Apr 17 00:22:03.080121 kubelet[3317]: I0417 00:22:03.079583 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/cc519b59-dfc2-4b7e-ba52-f6ef50a332cb-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-8djnz\" (UID: \"cc519b59-dfc2-4b7e-ba52-f6ef50a332cb\") " pod="calico-system/goldmane-cccfbd5cf-8djnz"
Apr 17 00:22:03.080121 kubelet[3317]: I0417 00:22:03.079607 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4c4701d9-9047-4448-a009-ce8fbc675f90-calico-apiserver-certs\") pod \"calico-apiserver-878d7484f-ngljs\" (UID: \"4c4701d9-9047-4448-a009-ce8fbc675f90\") " pod="calico-system/calico-apiserver-878d7484f-ngljs"
Apr 17 00:22:03.080121 kubelet[3317]: I0417 00:22:03.079637 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc519b59-dfc2-4b7e-ba52-f6ef50a332cb-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-8djnz\" (UID: \"cc519b59-dfc2-4b7e-ba52-f6ef50a332cb\") " pod="calico-system/goldmane-cccfbd5cf-8djnz"
Apr 17 00:22:03.080121 kubelet[3317]: I0417 00:22:03.079665 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpjdh\" (UniqueName: \"kubernetes.io/projected/ed174bd2-a458-4cc7-9616-028f48dff565-kube-api-access-kpjdh\") pod \"whisker-6d4d67954c-fwtv2\" (UID: \"ed174bd2-a458-4cc7-9616-028f48dff565\") " pod="calico-system/whisker-6d4d67954c-fwtv2"
Apr 17 00:22:03.080121 kubelet[3317]: I0417 00:22:03.079691 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c67ec73-7d3f-4924-974e-10ac71826e12-tigera-ca-bundle\") pod \"calico-kube-controllers-7cdb595876-j56j2\" (UID: \"5c67ec73-7d3f-4924-974e-10ac71826e12\") " pod="calico-system/calico-kube-controllers-7cdb595876-j56j2"
Apr 17 00:22:03.081959 kubelet[3317]: I0417 00:22:03.079716 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txxmd\" (UniqueName: \"kubernetes.io/projected/5c67ec73-7d3f-4924-974e-10ac71826e12-kube-api-access-txxmd\") pod \"calico-kube-controllers-7cdb595876-j56j2\" (UID: \"5c67ec73-7d3f-4924-974e-10ac71826e12\") " pod="calico-system/calico-kube-controllers-7cdb595876-j56j2"
Apr 17 00:22:03.318432 containerd[2002]: time="2026-04-17T00:22:03.317254565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dr688,Uid:bcf739fd-30d6-4de6-aa6a-a1d7e5ed1cfc,Namespace:kube-system,Attempt:0,}"
Apr 17 00:22:03.322893 containerd[2002]: time="2026-04-17T00:22:03.322655401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-krz7c,Uid:faf227a2-c41a-476d-ac2e-763e2502ebdb,Namespace:kube-system,Attempt:0,}"
Apr 17 00:22:03.348472 containerd[2002]: time="2026-04-17T00:22:03.348428458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cdb595876-j56j2,Uid:5c67ec73-7d3f-4924-974e-10ac71826e12,Namespace:calico-system,Attempt:0,}"
Apr 17 00:22:03.385145 containerd[2002]: time="2026-04-17T00:22:03.385017310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d4d67954c-fwtv2,Uid:ed174bd2-a458-4cc7-9616-028f48dff565,Namespace:calico-system,Attempt:0,}"
Apr 17 00:22:03.556212 systemd[1]: Created slice kubepods-besteffort-pod6e4c9621_343d_439f_bfb3_71c69fe08c37.slice - libcontainer container kubepods-besteffort-pod6e4c9621_343d_439f_bfb3_71c69fe08c37.slice.
Apr 17 00:22:03.567219 containerd[2002]: time="2026-04-17T00:22:03.567177249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bvsrn,Uid:6e4c9621-343d-439f-bfb3-71c69fe08c37,Namespace:calico-system,Attempt:0,}"
Apr 17 00:22:03.831857 containerd[2002]: time="2026-04-17T00:22:03.831343236Z" level=info msg="CreateContainer within sandbox \"ee2fb0586caec9cc2e40ce7560e0675e2084427fb0d4e98f4c8309e4ac33dc1b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Apr 17 00:22:03.893324 containerd[2002]: time="2026-04-17T00:22:03.892866277Z" level=info msg="Container e25a1d4d3ae79ca2e0082433323e41983d72adcd9fb4a5c1a614516885d0773f: CDI devices from CRI Config.CDIDevices: []"
Apr 17 00:22:03.915885 containerd[2002]: time="2026-04-17T00:22:03.915831932Z" level=error msg="Failed to destroy network for sandbox \"34ff430dfc699b81b1f0c11fbfe35ed2d0e5f41aadb6f351acd907f29b0ad083\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 00:22:03.923604 systemd[1]: run-netns-cni\x2d099da7b0\x2d2681\x2d5628\x2de533\x2d4d5a47077b6d.mount: Deactivated successfully.
Apr 17 00:22:03.925165 containerd[2002]: time="2026-04-17T00:22:03.924885946Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d4d67954c-fwtv2,Uid:ed174bd2-a458-4cc7-9616-028f48dff565,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"34ff430dfc699b81b1f0c11fbfe35ed2d0e5f41aadb6f351acd907f29b0ad083\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 00:22:03.933716 containerd[2002]: time="2026-04-17T00:22:03.933664153Z" level=info msg="CreateContainer within sandbox \"ee2fb0586caec9cc2e40ce7560e0675e2084427fb0d4e98f4c8309e4ac33dc1b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e25a1d4d3ae79ca2e0082433323e41983d72adcd9fb4a5c1a614516885d0773f\""
Apr 17 00:22:03.946491 kubelet[3317]: E0417 00:22:03.945684 3317 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34ff430dfc699b81b1f0c11fbfe35ed2d0e5f41aadb6f351acd907f29b0ad083\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 00:22:03.946491 kubelet[3317]: E0417 00:22:03.945785 3317 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34ff430dfc699b81b1f0c11fbfe35ed2d0e5f41aadb6f351acd907f29b0ad083\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d4d67954c-fwtv2"
Apr 17 00:22:03.946491 kubelet[3317]: E0417 00:22:03.945814 3317 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34ff430dfc699b81b1f0c11fbfe35ed2d0e5f41aadb6f351acd907f29b0ad083\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d4d67954c-fwtv2"
Apr 17 00:22:03.947055 kubelet[3317]: E0417 00:22:03.945882 3317 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6d4d67954c-fwtv2_calico-system(ed174bd2-a458-4cc7-9616-028f48dff565)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6d4d67954c-fwtv2_calico-system(ed174bd2-a458-4cc7-9616-028f48dff565)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"34ff430dfc699b81b1f0c11fbfe35ed2d0e5f41aadb6f351acd907f29b0ad083\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6d4d67954c-fwtv2" podUID="ed174bd2-a458-4cc7-9616-028f48dff565"
Apr 17 00:22:03.948742 containerd[2002]: time="2026-04-17T00:22:03.948484044Z" level=info msg="StartContainer for \"e25a1d4d3ae79ca2e0082433323e41983d72adcd9fb4a5c1a614516885d0773f\""
Apr 17 00:22:03.953213 containerd[2002]: time="2026-04-17T00:22:03.953164165Z" level=info msg="connecting to shim e25a1d4d3ae79ca2e0082433323e41983d72adcd9fb4a5c1a614516885d0773f" address="unix:///run/containerd/s/402c2c3f003f3daa43ec3ef863515ba94779008fc9cef4390dc7df93fbfdf17a" protocol=ttrpc version=3
Apr 17 00:22:03.976746 containerd[2002]: time="2026-04-17T00:22:03.976683940Z" level=error msg="Failed to destroy network for sandbox \"6056b53cdde2b90d052fb61800f15955af34228e447aeaed4134a944df638f86\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 00:22:03.980749 containerd[2002]: time="2026-04-17T00:22:03.980250431Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cdb595876-j56j2,Uid:5c67ec73-7d3f-4924-974e-10ac71826e12,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6056b53cdde2b90d052fb61800f15955af34228e447aeaed4134a944df638f86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 00:22:03.983041 containerd[2002]: time="2026-04-17T00:22:03.983002484Z" level=error msg="Failed to destroy network for sandbox \"587cf8ab60a630a85acbcf41fe93deb01c95a2b773076b29581427a14e9b8a2f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 00:22:03.983451 kubelet[3317]: E0417 00:22:03.983403 3317 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6056b53cdde2b90d052fb61800f15955af34228e447aeaed4134a944df638f86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 00:22:03.983548 kubelet[3317]: E0417 00:22:03.983495 3317 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6056b53cdde2b90d052fb61800f15955af34228e447aeaed4134a944df638f86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cdb595876-j56j2"
Apr 17 00:22:03.983548 kubelet[3317]: E0417 00:22:03.983522 3317 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6056b53cdde2b90d052fb61800f15955af34228e447aeaed4134a944df638f86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cdb595876-j56j2"
Apr 17 00:22:03.983819 kubelet[3317]: E0417 00:22:03.983645 3317 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7cdb595876-j56j2_calico-system(5c67ec73-7d3f-4924-974e-10ac71826e12)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7cdb595876-j56j2_calico-system(5c67ec73-7d3f-4924-974e-10ac71826e12)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6056b53cdde2b90d052fb61800f15955af34228e447aeaed4134a944df638f86\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7cdb595876-j56j2" podUID="5c67ec73-7d3f-4924-974e-10ac71826e12"
Apr 17 00:22:03.986941 containerd[2002]: time="2026-04-17T00:22:03.986887180Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-krz7c,Uid:faf227a2-c41a-476d-ac2e-763e2502ebdb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"587cf8ab60a630a85acbcf41fe93deb01c95a2b773076b29581427a14e9b8a2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 00:22:03.987510 kubelet[3317]: E0417 00:22:03.987305 3317 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"587cf8ab60a630a85acbcf41fe93deb01c95a2b773076b29581427a14e9b8a2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 00:22:03.987510 kubelet[3317]: E0417 00:22:03.987363 3317 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"587cf8ab60a630a85acbcf41fe93deb01c95a2b773076b29581427a14e9b8a2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-krz7c"
Apr 17 00:22:03.987510 kubelet[3317]: E0417 00:22:03.987391 3317 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"587cf8ab60a630a85acbcf41fe93deb01c95a2b773076b29581427a14e9b8a2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-krz7c"
Apr 17 00:22:03.987710 kubelet[3317]: E0417 00:22:03.987463 3317 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-krz7c_kube-system(faf227a2-c41a-476d-ac2e-763e2502ebdb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-krz7c_kube-system(faf227a2-c41a-476d-ac2e-763e2502ebdb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"587cf8ab60a630a85acbcf41fe93deb01c95a2b773076b29581427a14e9b8a2f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-krz7c" podUID="faf227a2-c41a-476d-ac2e-763e2502ebdb"
Apr 17 00:22:04.013717 containerd[2002]: time="2026-04-17T00:22:04.013668162Z" level=error msg="Failed to destroy network for sandbox \"e4e5834ccc730f6f76913200ae4fb2f0d503d6f4e8dc8cdb330b3c5b6e0d8657\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 00:22:04.015657 containerd[2002]: time="2026-04-17T00:22:04.015572362Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bvsrn,Uid:6e4c9621-343d-439f-bfb3-71c69fe08c37,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4e5834ccc730f6f76913200ae4fb2f0d503d6f4e8dc8cdb330b3c5b6e0d8657\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 00:22:04.016081 systemd[1]: Started cri-containerd-e25a1d4d3ae79ca2e0082433323e41983d72adcd9fb4a5c1a614516885d0773f.scope - libcontainer container e25a1d4d3ae79ca2e0082433323e41983d72adcd9fb4a5c1a614516885d0773f.
Apr 17 00:22:04.021430 kubelet[3317]: E0417 00:22:04.021368 3317 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4e5834ccc730f6f76913200ae4fb2f0d503d6f4e8dc8cdb330b3c5b6e0d8657\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 00:22:04.021565 kubelet[3317]: E0417 00:22:04.021439 3317 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4e5834ccc730f6f76913200ae4fb2f0d503d6f4e8dc8cdb330b3c5b6e0d8657\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bvsrn"
Apr 17 00:22:04.021565 kubelet[3317]: E0417 00:22:04.021467 3317 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4e5834ccc730f6f76913200ae4fb2f0d503d6f4e8dc8cdb330b3c5b6e0d8657\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bvsrn"
Apr 17 00:22:04.021565 kubelet[3317]: E0417 00:22:04.021532 3317 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bvsrn_calico-system(6e4c9621-343d-439f-bfb3-71c69fe08c37)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bvsrn_calico-system(6e4c9621-343d-439f-bfb3-71c69fe08c37)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4e5834ccc730f6f76913200ae4fb2f0d503d6f4e8dc8cdb330b3c5b6e0d8657\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bvsrn" podUID="6e4c9621-343d-439f-bfb3-71c69fe08c37"
Apr 17 00:22:04.032116 containerd[2002]: time="2026-04-17T00:22:04.032064039Z" level=error msg="Failed to destroy network for sandbox \"f465da5515f0b092bc75f7c949a127f2cfb45d460d42b4d6ed1ec286c16dd122\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 00:22:04.033687 containerd[2002]: time="2026-04-17T00:22:04.033638651Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dr688,Uid:bcf739fd-30d6-4de6-aa6a-a1d7e5ed1cfc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f465da5515f0b092bc75f7c949a127f2cfb45d460d42b4d6ed1ec286c16dd122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 00:22:04.034665 kubelet[3317]: E0417 00:22:04.033981 3317 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f465da5515f0b092bc75f7c949a127f2cfb45d460d42b4d6ed1ec286c16dd122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 00:22:04.034665 kubelet[3317]: E0417 00:22:04.034046 3317 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f465da5515f0b092bc75f7c949a127f2cfb45d460d42b4d6ed1ec286c16dd122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-dr688"
Apr 17 00:22:04.034665 kubelet[3317]: E0417 00:22:04.034077 3317 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f465da5515f0b092bc75f7c949a127f2cfb45d460d42b4d6ed1ec286c16dd122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-dr688"
Apr 17 00:22:04.035558 kubelet[3317]: E0417 00:22:04.034141 3317 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-dr688_kube-system(bcf739fd-30d6-4de6-aa6a-a1d7e5ed1cfc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-dr688_kube-system(bcf739fd-30d6-4de6-aa6a-a1d7e5ed1cfc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f465da5515f0b092bc75f7c949a127f2cfb45d460d42b4d6ed1ec286c16dd122\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-dr688" podUID="bcf739fd-30d6-4de6-aa6a-a1d7e5ed1cfc"
Apr 17 00:22:04.121533 containerd[2002]: time="2026-04-17T00:22:04.121127888Z" level=info msg="StartContainer for \"e25a1d4d3ae79ca2e0082433323e41983d72adcd9fb4a5c1a614516885d0773f\" returns successfully"
Apr 17 00:22:04.184248 kubelet[3317]: E0417 00:22:04.184210 3317 configmap.go:193] Couldn't get configMap calico-system/goldmane: failed to sync configmap cache: timed out waiting for the condition
Apr 17 00:22:04.184575 kubelet[3317]: E0417 00:22:04.184543 3317 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cc519b59-dfc2-4b7e-ba52-f6ef50a332cb-config podName:cc519b59-dfc2-4b7e-ba52-f6ef50a332cb nodeName:}" failed. No retries permitted until 2026-04-17 00:22:04.684504 +0000 UTC m=+48.317190822 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/cc519b59-dfc2-4b7e-ba52-f6ef50a332cb-config") pod "goldmane-cccfbd5cf-8djnz" (UID: "cc519b59-dfc2-4b7e-ba52-f6ef50a332cb") : failed to sync configmap cache: timed out waiting for the condition
Apr 17 00:22:04.185941 kubelet[3317]: E0417 00:22:04.185859 3317 secret.go:189] Couldn't get secret calico-system/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition
Apr 17 00:22:04.186221 kubelet[3317]: E0417 00:22:04.186206 3317 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f68e2cd-6388-416b-9cb5-5cf309947192-calico-apiserver-certs podName:2f68e2cd-6388-416b-9cb5-5cf309947192 nodeName:}" failed. No retries permitted until 2026-04-17 00:22:04.68605838 +0000 UTC m=+48.318745225 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/2f68e2cd-6388-416b-9cb5-5cf309947192-calico-apiserver-certs") pod "calico-apiserver-878d7484f-md97m" (UID: "2f68e2cd-6388-416b-9cb5-5cf309947192") : failed to sync secret cache: timed out waiting for the condition
Apr 17 00:22:04.186552 kubelet[3317]: E0417 00:22:04.186392 3317 secret.go:189] Couldn't get secret calico-system/goldmane-key-pair: failed to sync secret cache: timed out waiting for the condition
Apr 17 00:22:04.186616 kubelet[3317]: E0417 00:22:04.186598 3317 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc519b59-dfc2-4b7e-ba52-f6ef50a332cb-goldmane-key-pair podName:cc519b59-dfc2-4b7e-ba52-f6ef50a332cb nodeName:}" failed. No retries permitted until 2026-04-17 00:22:04.686581516 +0000 UTC m=+48.319268350 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "goldmane-key-pair" (UniqueName: "kubernetes.io/secret/cc519b59-dfc2-4b7e-ba52-f6ef50a332cb-goldmane-key-pair") pod "goldmane-cccfbd5cf-8djnz" (UID: "cc519b59-dfc2-4b7e-ba52-f6ef50a332cb") : failed to sync secret cache: timed out waiting for the condition
Apr 17 00:22:04.187638 kubelet[3317]: E0417 00:22:04.187508 3317 secret.go:189] Couldn't get secret calico-system/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition
Apr 17 00:22:04.187638 kubelet[3317]: E0417 00:22:04.187577 3317 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c4701d9-9047-4448-a009-ce8fbc675f90-calico-apiserver-certs podName:4c4701d9-9047-4448-a009-ce8fbc675f90 nodeName:}" failed. No retries permitted until 2026-04-17 00:22:04.687562916 +0000 UTC m=+48.320249739 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/4c4701d9-9047-4448-a009-ce8fbc675f90-calico-apiserver-certs") pod "calico-apiserver-878d7484f-ngljs" (UID: "4c4701d9-9047-4448-a009-ce8fbc675f90") : failed to sync secret cache: timed out waiting for the condition
Apr 17 00:22:04.187638 kubelet[3317]: E0417 00:22:04.187511 3317 configmap.go:193] Couldn't get configMap calico-system/goldmane-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Apr 17 00:22:04.187638 kubelet[3317]: E0417 00:22:04.187621 3317 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cc519b59-dfc2-4b7e-ba52-f6ef50a332cb-goldmane-ca-bundle podName:cc519b59-dfc2-4b7e-ba52-f6ef50a332cb nodeName:}" failed. No retries permitted until 2026-04-17 00:22:04.687608681 +0000 UTC m=+48.320295506 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "goldmane-ca-bundle" (UniqueName: "kubernetes.io/configmap/cc519b59-dfc2-4b7e-ba52-f6ef50a332cb-goldmane-ca-bundle") pod "goldmane-cccfbd5cf-8djnz" (UID: "cc519b59-dfc2-4b7e-ba52-f6ef50a332cb") : failed to sync configmap cache: timed out waiting for the condition
Apr 17 00:22:04.844863 kubelet[3317]: I0417 00:22:04.844441 3317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-dbqlh" podStartSLOduration=4.642911671 podStartE2EDuration="25.84441713s" podCreationTimestamp="2026-04-17 00:21:39 +0000 UTC" firstStartedPulling="2026-04-17 00:21:40.158502121 +0000 UTC m=+23.791188944" lastFinishedPulling="2026-04-17 00:22:01.360007572 +0000 UTC m=+44.992694403" observedRunningTime="2026-04-17 00:22:04.842700031 +0000 UTC m=+48.475386877" watchObservedRunningTime="2026-04-17 00:22:04.84441713 +0000 UTC m=+48.477103974"
Apr 17 00:22:04.851788 containerd[2002]: time="2026-04-17T00:22:04.851704629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-878d7484f-ngljs,Uid:4c4701d9-9047-4448-a009-ce8fbc675f90,Namespace:calico-system,Attempt:0,}"
Apr 17 00:22:04.866442 containerd[2002]: time="2026-04-17T00:22:04.866351321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-878d7484f-md97m,Uid:2f68e2cd-6388-416b-9cb5-5cf309947192,Namespace:calico-system,Attempt:0,}"
Apr 17 00:22:04.888838 containerd[2002]: time="2026-04-17T00:22:04.888652037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-8djnz,Uid:cc519b59-dfc2-4b7e-ba52-f6ef50a332cb,Namespace:calico-system,Attempt:0,}"
Apr 17 00:22:04.892412 systemd[1]: run-netns-cni\x2d4600beab\x2dc3cb\x2ded2d\x2d9df7\x2d446ee2359521.mount: Deactivated successfully.
Apr 17 00:22:04.892555 systemd[1]: run-netns-cni\x2d3792fd3a\x2dd953\x2d064b\x2de75d\x2d97f587d9c412.mount: Deactivated successfully.
Apr 17 00:22:04.892633 systemd[1]: run-netns-cni\x2d35ff2102\x2d6d5e\x2dc8bd\x2d1787\x2d15a961620da9.mount: Deactivated successfully. Apr 17 00:22:04.892709 systemd[1]: run-netns-cni\x2d12a7b159\x2d59e6\x2d3ce3\x2d1f79\x2dbbdea17b7542.mount: Deactivated successfully. Apr 17 00:22:05.161289 containerd[2002]: time="2026-04-17T00:22:05.158244262Z" level=error msg="Failed to destroy network for sandbox \"e93024f1aa4147e2775c5b192b87d8b3ad7c1408ce7b5d7bcc883e3d1d1b3dc3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 00:22:05.165045 containerd[2002]: time="2026-04-17T00:22:05.164837774Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-8djnz,Uid:cc519b59-dfc2-4b7e-ba52-f6ef50a332cb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e93024f1aa4147e2775c5b192b87d8b3ad7c1408ce7b5d7bcc883e3d1d1b3dc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 00:22:05.165916 kubelet[3317]: E0417 00:22:05.165142 3317 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e93024f1aa4147e2775c5b192b87d8b3ad7c1408ce7b5d7bcc883e3d1d1b3dc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 00:22:05.165916 kubelet[3317]: E0417 00:22:05.165204 3317 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e93024f1aa4147e2775c5b192b87d8b3ad7c1408ce7b5d7bcc883e3d1d1b3dc3\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-8djnz" Apr 17 00:22:05.165916 kubelet[3317]: E0417 00:22:05.165236 3317 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e93024f1aa4147e2775c5b192b87d8b3ad7c1408ce7b5d7bcc883e3d1d1b3dc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-8djnz" Apr 17 00:22:05.165633 systemd[1]: run-netns-cni\x2d073d130a\x2d44b4\x2ded9c\x2d648a\x2d7545e77ebf2e.mount: Deactivated successfully. Apr 17 00:22:05.167513 kubelet[3317]: E0417 00:22:05.165307 3317 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-8djnz_calico-system(cc519b59-dfc2-4b7e-ba52-f6ef50a332cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-8djnz_calico-system(cc519b59-dfc2-4b7e-ba52-f6ef50a332cb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e93024f1aa4147e2775c5b192b87d8b3ad7c1408ce7b5d7bcc883e3d1d1b3dc3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-8djnz" podUID="cc519b59-dfc2-4b7e-ba52-f6ef50a332cb" Apr 17 00:22:05.221997 containerd[2002]: time="2026-04-17T00:22:05.221698108Z" level=error msg="Failed to destroy network for sandbox \"4b461b4767ec2111cc8ad483c99b6ca1ac44785402a53a84c0518add1955b191\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Apr 17 00:22:05.227267 containerd[2002]: time="2026-04-17T00:22:05.227221251Z" level=error msg="Failed to destroy network for sandbox \"d3a1c85f0a8af99ed7f3cdc22061ff00cc544660b1f4d0822d9083e4c004bf30\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 00:22:05.231596 systemd[1]: run-netns-cni\x2d95cdf5d7\x2d2fe9\x2db8b2\x2d806b\x2d4546828ab9b4.mount: Deactivated successfully. Apr 17 00:22:05.234242 containerd[2002]: time="2026-04-17T00:22:05.233321031Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-878d7484f-md97m,Uid:2f68e2cd-6388-416b-9cb5-5cf309947192,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b461b4767ec2111cc8ad483c99b6ca1ac44785402a53a84c0518add1955b191\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 00:22:05.234488 kubelet[3317]: E0417 00:22:05.233579 3317 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b461b4767ec2111cc8ad483c99b6ca1ac44785402a53a84c0518add1955b191\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 00:22:05.234488 kubelet[3317]: E0417 00:22:05.233644 3317 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b461b4767ec2111cc8ad483c99b6ca1ac44785402a53a84c0518add1955b191\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/calico-apiserver-878d7484f-md97m" Apr 17 00:22:05.234488 kubelet[3317]: E0417 00:22:05.233670 3317 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b461b4767ec2111cc8ad483c99b6ca1ac44785402a53a84c0518add1955b191\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-878d7484f-md97m" Apr 17 00:22:05.234913 kubelet[3317]: E0417 00:22:05.234551 3317 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-878d7484f-md97m_calico-system(2f68e2cd-6388-416b-9cb5-5cf309947192)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-878d7484f-md97m_calico-system(2f68e2cd-6388-416b-9cb5-5cf309947192)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b461b4767ec2111cc8ad483c99b6ca1ac44785402a53a84c0518add1955b191\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-878d7484f-md97m" podUID="2f68e2cd-6388-416b-9cb5-5cf309947192" Apr 17 00:22:05.235556 containerd[2002]: time="2026-04-17T00:22:05.235060810Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-878d7484f-ngljs,Uid:4c4701d9-9047-4448-a009-ce8fbc675f90,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3a1c85f0a8af99ed7f3cdc22061ff00cc544660b1f4d0822d9083e4c004bf30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 
17 00:22:05.236486 kubelet[3317]: E0417 00:22:05.236207 3317 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3a1c85f0a8af99ed7f3cdc22061ff00cc544660b1f4d0822d9083e4c004bf30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 00:22:05.236653 kubelet[3317]: E0417 00:22:05.236378 3317 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3a1c85f0a8af99ed7f3cdc22061ff00cc544660b1f4d0822d9083e4c004bf30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-878d7484f-ngljs" Apr 17 00:22:05.236985 kubelet[3317]: E0417 00:22:05.236637 3317 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3a1c85f0a8af99ed7f3cdc22061ff00cc544660b1f4d0822d9083e4c004bf30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-878d7484f-ngljs" Apr 17 00:22:05.237237 kubelet[3317]: E0417 00:22:05.237182 3317 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-878d7484f-ngljs_calico-system(4c4701d9-9047-4448-a009-ce8fbc675f90)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-878d7484f-ngljs_calico-system(4c4701d9-9047-4448-a009-ce8fbc675f90)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d3a1c85f0a8af99ed7f3cdc22061ff00cc544660b1f4d0822d9083e4c004bf30\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-878d7484f-ngljs" podUID="4c4701d9-9047-4448-a009-ce8fbc675f90" Apr 17 00:22:05.306521 kubelet[3317]: I0417 00:22:05.305910 3317 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ed174bd2-a458-4cc7-9616-028f48dff565-whisker-backend-key-pair\") pod \"ed174bd2-a458-4cc7-9616-028f48dff565\" (UID: \"ed174bd2-a458-4cc7-9616-028f48dff565\") " Apr 17 00:22:05.306521 kubelet[3317]: I0417 00:22:05.305968 3317 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ed174bd2-a458-4cc7-9616-028f48dff565-nginx-config\") pod \"ed174bd2-a458-4cc7-9616-028f48dff565\" (UID: \"ed174bd2-a458-4cc7-9616-028f48dff565\") " Apr 17 00:22:05.306521 kubelet[3317]: I0417 00:22:05.306002 3317 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed174bd2-a458-4cc7-9616-028f48dff565-whisker-ca-bundle\") pod \"ed174bd2-a458-4cc7-9616-028f48dff565\" (UID: \"ed174bd2-a458-4cc7-9616-028f48dff565\") " Apr 17 00:22:05.306521 kubelet[3317]: I0417 00:22:05.306057 3317 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpjdh\" (UniqueName: \"kubernetes.io/projected/ed174bd2-a458-4cc7-9616-028f48dff565-kube-api-access-kpjdh\") pod \"ed174bd2-a458-4cc7-9616-028f48dff565\" (UID: \"ed174bd2-a458-4cc7-9616-028f48dff565\") " Apr 17 00:22:05.307271 kubelet[3317]: I0417 00:22:05.307219 3317 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed174bd2-a458-4cc7-9616-028f48dff565-nginx-config" (OuterVolumeSpecName: "nginx-config") pod 
"ed174bd2-a458-4cc7-9616-028f48dff565" (UID: "ed174bd2-a458-4cc7-9616-028f48dff565"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 00:22:05.310850 kubelet[3317]: I0417 00:22:05.310796 3317 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed174bd2-a458-4cc7-9616-028f48dff565-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ed174bd2-a458-4cc7-9616-028f48dff565" (UID: "ed174bd2-a458-4cc7-9616-028f48dff565"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 00:22:05.314082 kubelet[3317]: I0417 00:22:05.314043 3317 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed174bd2-a458-4cc7-9616-028f48dff565-kube-api-access-kpjdh" (OuterVolumeSpecName: "kube-api-access-kpjdh") pod "ed174bd2-a458-4cc7-9616-028f48dff565" (UID: "ed174bd2-a458-4cc7-9616-028f48dff565"). InnerVolumeSpecName "kube-api-access-kpjdh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 00:22:05.315324 kubelet[3317]: I0417 00:22:05.315287 3317 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed174bd2-a458-4cc7-9616-028f48dff565-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ed174bd2-a458-4cc7-9616-028f48dff565" (UID: "ed174bd2-a458-4cc7-9616-028f48dff565"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 00:22:05.407187 kubelet[3317]: I0417 00:22:05.407144 3317 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kpjdh\" (UniqueName: \"kubernetes.io/projected/ed174bd2-a458-4cc7-9616-028f48dff565-kube-api-access-kpjdh\") on node \"ip-172-31-17-163\" DevicePath \"\"" Apr 17 00:22:05.407187 kubelet[3317]: I0417 00:22:05.407177 3317 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ed174bd2-a458-4cc7-9616-028f48dff565-whisker-backend-key-pair\") on node \"ip-172-31-17-163\" DevicePath \"\"" Apr 17 00:22:05.407187 kubelet[3317]: I0417 00:22:05.407190 3317 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ed174bd2-a458-4cc7-9616-028f48dff565-nginx-config\") on node \"ip-172-31-17-163\" DevicePath \"\"" Apr 17 00:22:05.407408 kubelet[3317]: I0417 00:22:05.407202 3317 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed174bd2-a458-4cc7-9616-028f48dff565-whisker-ca-bundle\") on node \"ip-172-31-17-163\" DevicePath \"\"" Apr 17 00:22:05.822919 systemd[1]: Removed slice kubepods-besteffort-poded174bd2_a458_4cc7_9616_028f48dff565.slice - libcontainer container kubepods-besteffort-poded174bd2_a458_4cc7_9616_028f48dff565.slice. Apr 17 00:22:05.868274 systemd[1]: run-netns-cni\x2df8ab072a\x2d35b8\x2d7d69\x2de5cd\x2d1ae94b9fb4ce.mount: Deactivated successfully. Apr 17 00:22:05.868922 systemd[1]: var-lib-kubelet-pods-ed174bd2\x2da458\x2d4cc7\x2d9616\x2d028f48dff565-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 17 00:22:05.869184 systemd[1]: var-lib-kubelet-pods-ed174bd2\x2da458\x2d4cc7\x2d9616\x2d028f48dff565-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkpjdh.mount: Deactivated successfully. 
Apr 17 00:22:05.962770 systemd[1]: Created slice kubepods-besteffort-pode4984b37_e015_4b7e_89e1_4d74cc1f9a4a.slice - libcontainer container kubepods-besteffort-pode4984b37_e015_4b7e_89e1_4d74cc1f9a4a.slice. Apr 17 00:22:06.011011 kubelet[3317]: I0417 00:22:06.010959 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e4984b37-e015-4b7e-89e1-4d74cc1f9a4a-whisker-backend-key-pair\") pod \"whisker-6fc5d7f7dd-mtglg\" (UID: \"e4984b37-e015-4b7e-89e1-4d74cc1f9a4a\") " pod="calico-system/whisker-6fc5d7f7dd-mtglg" Apr 17 00:22:06.011179 kubelet[3317]: I0417 00:22:06.011028 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4984b37-e015-4b7e-89e1-4d74cc1f9a4a-whisker-ca-bundle\") pod \"whisker-6fc5d7f7dd-mtglg\" (UID: \"e4984b37-e015-4b7e-89e1-4d74cc1f9a4a\") " pod="calico-system/whisker-6fc5d7f7dd-mtglg" Apr 17 00:22:06.011179 kubelet[3317]: I0417 00:22:06.011056 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfb9z\" (UniqueName: \"kubernetes.io/projected/e4984b37-e015-4b7e-89e1-4d74cc1f9a4a-kube-api-access-jfb9z\") pod \"whisker-6fc5d7f7dd-mtglg\" (UID: \"e4984b37-e015-4b7e-89e1-4d74cc1f9a4a\") " pod="calico-system/whisker-6fc5d7f7dd-mtglg" Apr 17 00:22:06.011179 kubelet[3317]: I0417 00:22:06.011083 3317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/e4984b37-e015-4b7e-89e1-4d74cc1f9a4a-nginx-config\") pod \"whisker-6fc5d7f7dd-mtglg\" (UID: \"e4984b37-e015-4b7e-89e1-4d74cc1f9a4a\") " pod="calico-system/whisker-6fc5d7f7dd-mtglg" Apr 17 00:22:06.271281 containerd[2002]: time="2026-04-17T00:22:06.271237764Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-6fc5d7f7dd-mtglg,Uid:e4984b37-e015-4b7e-89e1-4d74cc1f9a4a,Namespace:calico-system,Attempt:0,}" Apr 17 00:22:06.522549 systemd-networkd[1780]: calib8a87fd4719: Link UP Apr 17 00:22:06.523439 systemd-networkd[1780]: calib8a87fd4719: Gained carrier Apr 17 00:22:06.543282 (udev-worker)[4701]: Network interface NamePolicy= disabled on kernel command line. Apr 17 00:22:06.567395 kubelet[3317]: I0417 00:22:06.567350 3317 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed174bd2-a458-4cc7-9616-028f48dff565" path="/var/lib/kubelet/pods/ed174bd2-a458-4cc7-9616-028f48dff565/volumes" Apr 17 00:22:06.573759 containerd[2002]: 2026-04-17 00:22:06.303 [ERROR][4672] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 00:22:06.573759 containerd[2002]: 2026-04-17 00:22:06.374 [INFO][4672] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--163-k8s-whisker--6fc5d7f7dd--mtglg-eth0 whisker-6fc5d7f7dd- calico-system e4984b37-e015-4b7e-89e1-4d74cc1f9a4a 950 0 2026-04-17 00:22:05 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6fc5d7f7dd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-17-163 whisker-6fc5d7f7dd-mtglg eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calib8a87fd4719 [] [] }} ContainerID="99a09b89510ae63b2cecbe6effb3e1b2b0717bba2fe9f0d299948fb779fe5516" Namespace="calico-system" Pod="whisker-6fc5d7f7dd-mtglg" WorkloadEndpoint="ip--172--31--17--163-k8s-whisker--6fc5d7f7dd--mtglg-" Apr 17 00:22:06.573759 containerd[2002]: 2026-04-17 00:22:06.374 [INFO][4672] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="99a09b89510ae63b2cecbe6effb3e1b2b0717bba2fe9f0d299948fb779fe5516" Namespace="calico-system" Pod="whisker-6fc5d7f7dd-mtglg" WorkloadEndpoint="ip--172--31--17--163-k8s-whisker--6fc5d7f7dd--mtglg-eth0" Apr 17 00:22:06.573759 containerd[2002]: 2026-04-17 00:22:06.430 [INFO][4682] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="99a09b89510ae63b2cecbe6effb3e1b2b0717bba2fe9f0d299948fb779fe5516" HandleID="k8s-pod-network.99a09b89510ae63b2cecbe6effb3e1b2b0717bba2fe9f0d299948fb779fe5516" Workload="ip--172--31--17--163-k8s-whisker--6fc5d7f7dd--mtglg-eth0" Apr 17 00:22:06.574094 containerd[2002]: 2026-04-17 00:22:06.445 [INFO][4682] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="99a09b89510ae63b2cecbe6effb3e1b2b0717bba2fe9f0d299948fb779fe5516" HandleID="k8s-pod-network.99a09b89510ae63b2cecbe6effb3e1b2b0717bba2fe9f0d299948fb779fe5516" Workload="ip--172--31--17--163-k8s-whisker--6fc5d7f7dd--mtglg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277dd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-163", "pod":"whisker-6fc5d7f7dd-mtglg", "timestamp":"2026-04-17 00:22:06.430579725 +0000 UTC"}, Hostname:"ip-172-31-17-163", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003b5600)} Apr 17 00:22:06.574094 containerd[2002]: 2026-04-17 00:22:06.445 [INFO][4682] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 00:22:06.574094 containerd[2002]: 2026-04-17 00:22:06.446 [INFO][4682] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 00:22:06.574094 containerd[2002]: 2026-04-17 00:22:06.446 [INFO][4682] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-163' Apr 17 00:22:06.574094 containerd[2002]: 2026-04-17 00:22:06.448 [INFO][4682] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.99a09b89510ae63b2cecbe6effb3e1b2b0717bba2fe9f0d299948fb779fe5516" host="ip-172-31-17-163" Apr 17 00:22:06.574094 containerd[2002]: 2026-04-17 00:22:06.454 [INFO][4682] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-17-163" Apr 17 00:22:06.574094 containerd[2002]: 2026-04-17 00:22:06.459 [INFO][4682] ipam/ipam.go 526: Trying affinity for 192.168.122.128/26 host="ip-172-31-17-163" Apr 17 00:22:06.574094 containerd[2002]: 2026-04-17 00:22:06.462 [INFO][4682] ipam/ipam.go 160: Attempting to load block cidr=192.168.122.128/26 host="ip-172-31-17-163" Apr 17 00:22:06.574094 containerd[2002]: 2026-04-17 00:22:06.464 [INFO][4682] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.122.128/26 host="ip-172-31-17-163" Apr 17 00:22:06.574484 containerd[2002]: 2026-04-17 00:22:06.464 [INFO][4682] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.122.128/26 handle="k8s-pod-network.99a09b89510ae63b2cecbe6effb3e1b2b0717bba2fe9f0d299948fb779fe5516" host="ip-172-31-17-163" Apr 17 00:22:06.574484 containerd[2002]: 2026-04-17 00:22:06.466 [INFO][4682] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.99a09b89510ae63b2cecbe6effb3e1b2b0717bba2fe9f0d299948fb779fe5516 Apr 17 00:22:06.574484 containerd[2002]: 2026-04-17 00:22:06.471 [INFO][4682] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.122.128/26 handle="k8s-pod-network.99a09b89510ae63b2cecbe6effb3e1b2b0717bba2fe9f0d299948fb779fe5516" host="ip-172-31-17-163" Apr 17 00:22:06.574484 containerd[2002]: 2026-04-17 00:22:06.479 [INFO][4682] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.122.129/26] 
block=192.168.122.128/26 handle="k8s-pod-network.99a09b89510ae63b2cecbe6effb3e1b2b0717bba2fe9f0d299948fb779fe5516" host="ip-172-31-17-163" Apr 17 00:22:06.574484 containerd[2002]: 2026-04-17 00:22:06.479 [INFO][4682] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.122.129/26] handle="k8s-pod-network.99a09b89510ae63b2cecbe6effb3e1b2b0717bba2fe9f0d299948fb779fe5516" host="ip-172-31-17-163" Apr 17 00:22:06.574484 containerd[2002]: 2026-04-17 00:22:06.480 [INFO][4682] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 00:22:06.574484 containerd[2002]: 2026-04-17 00:22:06.480 [INFO][4682] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.122.129/26] IPv6=[] ContainerID="99a09b89510ae63b2cecbe6effb3e1b2b0717bba2fe9f0d299948fb779fe5516" HandleID="k8s-pod-network.99a09b89510ae63b2cecbe6effb3e1b2b0717bba2fe9f0d299948fb779fe5516" Workload="ip--172--31--17--163-k8s-whisker--6fc5d7f7dd--mtglg-eth0" Apr 17 00:22:06.575833 containerd[2002]: 2026-04-17 00:22:06.483 [INFO][4672] cni-plugin/k8s.go 418: Populated endpoint ContainerID="99a09b89510ae63b2cecbe6effb3e1b2b0717bba2fe9f0d299948fb779fe5516" Namespace="calico-system" Pod="whisker-6fc5d7f7dd-mtglg" WorkloadEndpoint="ip--172--31--17--163-k8s-whisker--6fc5d7f7dd--mtglg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--163-k8s-whisker--6fc5d7f7dd--mtglg-eth0", GenerateName:"whisker-6fc5d7f7dd-", Namespace:"calico-system", SelfLink:"", UID:"e4984b37-e015-4b7e-89e1-4d74cc1f9a4a", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 0, 22, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6fc5d7f7dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-163", ContainerID:"", Pod:"whisker-6fc5d7f7dd-mtglg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.122.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib8a87fd4719", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 00:22:06.575833 containerd[2002]: 2026-04-17 00:22:06.484 [INFO][4672] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.129/32] ContainerID="99a09b89510ae63b2cecbe6effb3e1b2b0717bba2fe9f0d299948fb779fe5516" Namespace="calico-system" Pod="whisker-6fc5d7f7dd-mtglg" WorkloadEndpoint="ip--172--31--17--163-k8s-whisker--6fc5d7f7dd--mtglg-eth0" Apr 17 00:22:06.576030 containerd[2002]: 2026-04-17 00:22:06.484 [INFO][4672] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib8a87fd4719 ContainerID="99a09b89510ae63b2cecbe6effb3e1b2b0717bba2fe9f0d299948fb779fe5516" Namespace="calico-system" Pod="whisker-6fc5d7f7dd-mtglg" WorkloadEndpoint="ip--172--31--17--163-k8s-whisker--6fc5d7f7dd--mtglg-eth0" Apr 17 00:22:06.576030 containerd[2002]: 2026-04-17 00:22:06.524 [INFO][4672] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="99a09b89510ae63b2cecbe6effb3e1b2b0717bba2fe9f0d299948fb779fe5516" Namespace="calico-system" Pod="whisker-6fc5d7f7dd-mtglg" WorkloadEndpoint="ip--172--31--17--163-k8s-whisker--6fc5d7f7dd--mtglg-eth0" Apr 17 00:22:06.576132 containerd[2002]: 2026-04-17 00:22:06.527 [INFO][4672] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="99a09b89510ae63b2cecbe6effb3e1b2b0717bba2fe9f0d299948fb779fe5516" Namespace="calico-system" Pod="whisker-6fc5d7f7dd-mtglg" WorkloadEndpoint="ip--172--31--17--163-k8s-whisker--6fc5d7f7dd--mtglg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--163-k8s-whisker--6fc5d7f7dd--mtglg-eth0", GenerateName:"whisker-6fc5d7f7dd-", Namespace:"calico-system", SelfLink:"", UID:"e4984b37-e015-4b7e-89e1-4d74cc1f9a4a", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 0, 22, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6fc5d7f7dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-163", ContainerID:"99a09b89510ae63b2cecbe6effb3e1b2b0717bba2fe9f0d299948fb779fe5516", Pod:"whisker-6fc5d7f7dd-mtglg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.122.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib8a87fd4719", MAC:"2e:71:53:db:db:f4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 00:22:06.576238 containerd[2002]: 2026-04-17 00:22:06.554 [INFO][4672] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="99a09b89510ae63b2cecbe6effb3e1b2b0717bba2fe9f0d299948fb779fe5516" Namespace="calico-system" Pod="whisker-6fc5d7f7dd-mtglg" 
WorkloadEndpoint="ip--172--31--17--163-k8s-whisker--6fc5d7f7dd--mtglg-eth0" Apr 17 00:22:06.848821 containerd[2002]: time="2026-04-17T00:22:06.847303759Z" level=info msg="connecting to shim 99a09b89510ae63b2cecbe6effb3e1b2b0717bba2fe9f0d299948fb779fe5516" address="unix:///run/containerd/s/95bfd89e368319c2602e3d59c4a13461e8f76055cc7f2a587c984f3a091908f0" namespace=k8s.io protocol=ttrpc version=3 Apr 17 00:22:06.951638 systemd[1]: Started cri-containerd-99a09b89510ae63b2cecbe6effb3e1b2b0717bba2fe9f0d299948fb779fe5516.scope - libcontainer container 99a09b89510ae63b2cecbe6effb3e1b2b0717bba2fe9f0d299948fb779fe5516. Apr 17 00:22:07.137253 containerd[2002]: time="2026-04-17T00:22:07.136842120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fc5d7f7dd-mtglg,Uid:e4984b37-e015-4b7e-89e1-4d74cc1f9a4a,Namespace:calico-system,Attempt:0,} returns sandbox id \"99a09b89510ae63b2cecbe6effb3e1b2b0717bba2fe9f0d299948fb779fe5516\"" Apr 17 00:22:07.145211 containerd[2002]: time="2026-04-17T00:22:07.145173678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 17 00:22:08.128560 systemd-networkd[1780]: calib8a87fd4719: Gained IPv6LL Apr 17 00:22:08.660532 systemd-networkd[1780]: vxlan.calico: Link UP Apr 17 00:22:08.661556 systemd-networkd[1780]: vxlan.calico: Gained carrier Apr 17 00:22:08.718964 (udev-worker)[4700]: Network interface NamePolicy= disabled on kernel command line. Apr 17 00:22:08.722270 (udev-worker)[4902]: Network interface NamePolicy= disabled on kernel command line. 
Apr 17 00:22:09.505034 containerd[2002]: time="2026-04-17T00:22:09.504983451Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:22:09.505618 containerd[2002]: time="2026-04-17T00:22:09.505535846Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 17 00:22:09.507568 containerd[2002]: time="2026-04-17T00:22:09.506764971Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:22:09.511804 containerd[2002]: time="2026-04-17T00:22:09.511760096Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:22:09.512538 containerd[2002]: time="2026-04-17T00:22:09.512504219Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 2.367275527s" Apr 17 00:22:09.512665 containerd[2002]: time="2026-04-17T00:22:09.512646160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 17 00:22:09.537405 containerd[2002]: time="2026-04-17T00:22:09.537351639Z" level=info msg="CreateContainer within sandbox \"99a09b89510ae63b2cecbe6effb3e1b2b0717bba2fe9f0d299948fb779fe5516\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 17 00:22:09.556105 containerd[2002]: time="2026-04-17T00:22:09.555969580Z" level=info 
msg="Container 68be531d309687b3bc4d534d188196b732df63c88062bb28c3bbda1e323f3fe6: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:22:09.565219 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4222976762.mount: Deactivated successfully. Apr 17 00:22:09.592923 containerd[2002]: time="2026-04-17T00:22:09.592876055Z" level=info msg="CreateContainer within sandbox \"99a09b89510ae63b2cecbe6effb3e1b2b0717bba2fe9f0d299948fb779fe5516\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"68be531d309687b3bc4d534d188196b732df63c88062bb28c3bbda1e323f3fe6\"" Apr 17 00:22:09.593634 containerd[2002]: time="2026-04-17T00:22:09.593598240Z" level=info msg="StartContainer for \"68be531d309687b3bc4d534d188196b732df63c88062bb28c3bbda1e323f3fe6\"" Apr 17 00:22:09.605827 containerd[2002]: time="2026-04-17T00:22:09.605745900Z" level=info msg="connecting to shim 68be531d309687b3bc4d534d188196b732df63c88062bb28c3bbda1e323f3fe6" address="unix:///run/containerd/s/95bfd89e368319c2602e3d59c4a13461e8f76055cc7f2a587c984f3a091908f0" protocol=ttrpc version=3 Apr 17 00:22:09.659924 systemd[1]: Started cri-containerd-68be531d309687b3bc4d534d188196b732df63c88062bb28c3bbda1e323f3fe6.scope - libcontainer container 68be531d309687b3bc4d534d188196b732df63c88062bb28c3bbda1e323f3fe6. Apr 17 00:22:09.735855 containerd[2002]: time="2026-04-17T00:22:09.735651272Z" level=info msg="StartContainer for \"68be531d309687b3bc4d534d188196b732df63c88062bb28c3bbda1e323f3fe6\" returns successfully" Apr 17 00:22:09.742673 containerd[2002]: time="2026-04-17T00:22:09.742626008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 17 00:22:10.483363 systemd-networkd[1780]: vxlan.calico: Gained IPv6LL Apr 17 00:22:11.647120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2716931147.mount: Deactivated successfully. 
Apr 17 00:22:11.673462 containerd[2002]: time="2026-04-17T00:22:11.673409873Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:22:11.674917 containerd[2002]: time="2026-04-17T00:22:11.674851216Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 17 00:22:11.676605 containerd[2002]: time="2026-04-17T00:22:11.676547101Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:22:11.680179 containerd[2002]: time="2026-04-17T00:22:11.679385775Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:22:11.680179 containerd[2002]: time="2026-04-17T00:22:11.680049634Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.936908902s" Apr 17 00:22:11.680179 containerd[2002]: time="2026-04-17T00:22:11.680083735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 17 00:22:11.685404 containerd[2002]: time="2026-04-17T00:22:11.685277718Z" level=info msg="CreateContainer within sandbox \"99a09b89510ae63b2cecbe6effb3e1b2b0717bba2fe9f0d299948fb779fe5516\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 17 00:22:11.696763 
containerd[2002]: time="2026-04-17T00:22:11.695939386Z" level=info msg="Container c1d7bf6717359358ea3782d3afedcdb3a2e65970f1bae9e2a68353bd3c8d467a: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:22:11.704964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3314850042.mount: Deactivated successfully. Apr 17 00:22:11.711294 containerd[2002]: time="2026-04-17T00:22:11.711253965Z" level=info msg="CreateContainer within sandbox \"99a09b89510ae63b2cecbe6effb3e1b2b0717bba2fe9f0d299948fb779fe5516\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"c1d7bf6717359358ea3782d3afedcdb3a2e65970f1bae9e2a68353bd3c8d467a\"" Apr 17 00:22:11.711941 containerd[2002]: time="2026-04-17T00:22:11.711910104Z" level=info msg="StartContainer for \"c1d7bf6717359358ea3782d3afedcdb3a2e65970f1bae9e2a68353bd3c8d467a\"" Apr 17 00:22:11.713162 containerd[2002]: time="2026-04-17T00:22:11.713129871Z" level=info msg="connecting to shim c1d7bf6717359358ea3782d3afedcdb3a2e65970f1bae9e2a68353bd3c8d467a" address="unix:///run/containerd/s/95bfd89e368319c2602e3d59c4a13461e8f76055cc7f2a587c984f3a091908f0" protocol=ttrpc version=3 Apr 17 00:22:11.744948 systemd[1]: Started cri-containerd-c1d7bf6717359358ea3782d3afedcdb3a2e65970f1bae9e2a68353bd3c8d467a.scope - libcontainer container c1d7bf6717359358ea3782d3afedcdb3a2e65970f1bae9e2a68353bd3c8d467a. 
Apr 17 00:22:11.801901 containerd[2002]: time="2026-04-17T00:22:11.801851728Z" level=info msg="StartContainer for \"c1d7bf6717359358ea3782d3afedcdb3a2e65970f1bae9e2a68353bd3c8d467a\" returns successfully" Apr 17 00:22:12.510540 kubelet[3317]: I0417 00:22:12.507300 3317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6fc5d7f7dd-mtglg" podStartSLOduration=2.959466184 podStartE2EDuration="7.496583349s" podCreationTimestamp="2026-04-17 00:22:05 +0000 UTC" firstStartedPulling="2026-04-17 00:22:07.144052994 +0000 UTC m=+50.776739832" lastFinishedPulling="2026-04-17 00:22:11.681170175 +0000 UTC m=+55.313856997" observedRunningTime="2026-04-17 00:22:12.487026057 +0000 UTC m=+56.119712911" watchObservedRunningTime="2026-04-17 00:22:12.496583349 +0000 UTC m=+56.129270193" Apr 17 00:22:13.406912 ntpd[2136]: Listen normally on 6 vxlan.calico 192.168.122.128:123 Apr 17 00:22:13.406982 ntpd[2136]: Listen normally on 7 calib8a87fd4719 [fe80::ecee:eeff:feee:eeee%4]:123 Apr 17 00:22:13.409090 ntpd[2136]: 17 Apr 00:22:13 ntpd[2136]: Listen normally on 6 vxlan.calico 192.168.122.128:123 Apr 17 00:22:13.409090 ntpd[2136]: 17 Apr 00:22:13 ntpd[2136]: Listen normally on 7 calib8a87fd4719 [fe80::ecee:eeff:feee:eeee%4]:123 Apr 17 00:22:13.409090 ntpd[2136]: 17 Apr 00:22:13 ntpd[2136]: Listen normally on 8 vxlan.calico [fe80::6423:f4ff:fef8:723e%5]:123 Apr 17 00:22:13.407014 ntpd[2136]: Listen normally on 8 vxlan.calico [fe80::6423:f4ff:fef8:723e%5]:123 Apr 17 00:22:15.554642 containerd[2002]: time="2026-04-17T00:22:15.554588136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dr688,Uid:bcf739fd-30d6-4de6-aa6a-a1d7e5ed1cfc,Namespace:kube-system,Attempt:0,}" Apr 17 00:22:15.952106 systemd-networkd[1780]: calib3d96c58243: Link UP Apr 17 00:22:15.955014 systemd-networkd[1780]: calib3d96c58243: Gained carrier Apr 17 00:22:15.965173 (udev-worker)[5077]: Network interface NamePolicy= disabled on kernel command line. 
Apr 17 00:22:15.990847 containerd[2002]: 2026-04-17 00:22:15.693 [INFO][5059] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--163-k8s-coredns--66bc5c9577--dr688-eth0 coredns-66bc5c9577- kube-system bcf739fd-30d6-4de6-aa6a-a1d7e5ed1cfc 880 0 2026-04-17 00:21:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-17-163 coredns-66bc5c9577-dr688 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib3d96c58243 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="abb7790e0ae13f503d1a967d4ef7bb9762e9625e5fb72480e9410bc2d1f95ae2" Namespace="kube-system" Pod="coredns-66bc5c9577-dr688" WorkloadEndpoint="ip--172--31--17--163-k8s-coredns--66bc5c9577--dr688-" Apr 17 00:22:15.990847 containerd[2002]: 2026-04-17 00:22:15.694 [INFO][5059] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="abb7790e0ae13f503d1a967d4ef7bb9762e9625e5fb72480e9410bc2d1f95ae2" Namespace="kube-system" Pod="coredns-66bc5c9577-dr688" WorkloadEndpoint="ip--172--31--17--163-k8s-coredns--66bc5c9577--dr688-eth0" Apr 17 00:22:15.990847 containerd[2002]: 2026-04-17 00:22:15.897 [INFO][5070] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="abb7790e0ae13f503d1a967d4ef7bb9762e9625e5fb72480e9410bc2d1f95ae2" HandleID="k8s-pod-network.abb7790e0ae13f503d1a967d4ef7bb9762e9625e5fb72480e9410bc2d1f95ae2" Workload="ip--172--31--17--163-k8s-coredns--66bc5c9577--dr688-eth0" Apr 17 00:22:15.991186 containerd[2002]: 2026-04-17 00:22:15.911 [INFO][5070] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="abb7790e0ae13f503d1a967d4ef7bb9762e9625e5fb72480e9410bc2d1f95ae2" 
HandleID="k8s-pod-network.abb7790e0ae13f503d1a967d4ef7bb9762e9625e5fb72480e9410bc2d1f95ae2" Workload="ip--172--31--17--163-k8s-coredns--66bc5c9577--dr688-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005ec350), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-17-163", "pod":"coredns-66bc5c9577-dr688", "timestamp":"2026-04-17 00:22:15.897989269 +0000 UTC"}, Hostname:"ip-172-31-17-163", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004ff600)} Apr 17 00:22:15.991186 containerd[2002]: 2026-04-17 00:22:15.912 [INFO][5070] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 00:22:15.991186 containerd[2002]: 2026-04-17 00:22:15.912 [INFO][5070] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 00:22:15.991186 containerd[2002]: 2026-04-17 00:22:15.912 [INFO][5070] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-163' Apr 17 00:22:15.991186 containerd[2002]: 2026-04-17 00:22:15.916 [INFO][5070] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.abb7790e0ae13f503d1a967d4ef7bb9762e9625e5fb72480e9410bc2d1f95ae2" host="ip-172-31-17-163" Apr 17 00:22:15.991186 containerd[2002]: 2026-04-17 00:22:15.923 [INFO][5070] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-17-163" Apr 17 00:22:15.991186 containerd[2002]: 2026-04-17 00:22:15.928 [INFO][5070] ipam/ipam.go 526: Trying affinity for 192.168.122.128/26 host="ip-172-31-17-163" Apr 17 00:22:15.991186 containerd[2002]: 2026-04-17 00:22:15.930 [INFO][5070] ipam/ipam.go 160: Attempting to load block cidr=192.168.122.128/26 host="ip-172-31-17-163" Apr 17 00:22:15.991186 containerd[2002]: 2026-04-17 00:22:15.932 [INFO][5070] ipam/ipam.go 237: Affinity is confirmed and block has been loaded 
cidr=192.168.122.128/26 host="ip-172-31-17-163" Apr 17 00:22:15.992098 containerd[2002]: 2026-04-17 00:22:15.932 [INFO][5070] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.122.128/26 handle="k8s-pod-network.abb7790e0ae13f503d1a967d4ef7bb9762e9625e5fb72480e9410bc2d1f95ae2" host="ip-172-31-17-163" Apr 17 00:22:15.992098 containerd[2002]: 2026-04-17 00:22:15.934 [INFO][5070] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.abb7790e0ae13f503d1a967d4ef7bb9762e9625e5fb72480e9410bc2d1f95ae2 Apr 17 00:22:15.992098 containerd[2002]: 2026-04-17 00:22:15.938 [INFO][5070] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.122.128/26 handle="k8s-pod-network.abb7790e0ae13f503d1a967d4ef7bb9762e9625e5fb72480e9410bc2d1f95ae2" host="ip-172-31-17-163" Apr 17 00:22:15.992098 containerd[2002]: 2026-04-17 00:22:15.946 [INFO][5070] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.122.130/26] block=192.168.122.128/26 handle="k8s-pod-network.abb7790e0ae13f503d1a967d4ef7bb9762e9625e5fb72480e9410bc2d1f95ae2" host="ip-172-31-17-163" Apr 17 00:22:15.992098 containerd[2002]: 2026-04-17 00:22:15.946 [INFO][5070] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.122.130/26] handle="k8s-pod-network.abb7790e0ae13f503d1a967d4ef7bb9762e9625e5fb72480e9410bc2d1f95ae2" host="ip-172-31-17-163" Apr 17 00:22:15.992098 containerd[2002]: 2026-04-17 00:22:15.946 [INFO][5070] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 00:22:15.992098 containerd[2002]: 2026-04-17 00:22:15.946 [INFO][5070] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.122.130/26] IPv6=[] ContainerID="abb7790e0ae13f503d1a967d4ef7bb9762e9625e5fb72480e9410bc2d1f95ae2" HandleID="k8s-pod-network.abb7790e0ae13f503d1a967d4ef7bb9762e9625e5fb72480e9410bc2d1f95ae2" Workload="ip--172--31--17--163-k8s-coredns--66bc5c9577--dr688-eth0" Apr 17 00:22:15.993049 containerd[2002]: 2026-04-17 00:22:15.949 [INFO][5059] cni-plugin/k8s.go 418: Populated endpoint ContainerID="abb7790e0ae13f503d1a967d4ef7bb9762e9625e5fb72480e9410bc2d1f95ae2" Namespace="kube-system" Pod="coredns-66bc5c9577-dr688" WorkloadEndpoint="ip--172--31--17--163-k8s-coredns--66bc5c9577--dr688-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--163-k8s-coredns--66bc5c9577--dr688-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"bcf739fd-30d6-4de6-aa6a-a1d7e5ed1cfc", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 0, 21, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-163", ContainerID:"", Pod:"coredns-66bc5c9577-dr688", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib3d96c58243", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 00:22:15.993049 containerd[2002]: 2026-04-17 00:22:15.949 [INFO][5059] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.130/32] ContainerID="abb7790e0ae13f503d1a967d4ef7bb9762e9625e5fb72480e9410bc2d1f95ae2" Namespace="kube-system" Pod="coredns-66bc5c9577-dr688" WorkloadEndpoint="ip--172--31--17--163-k8s-coredns--66bc5c9577--dr688-eth0" Apr 17 00:22:15.993049 containerd[2002]: 2026-04-17 00:22:15.949 [INFO][5059] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib3d96c58243 ContainerID="abb7790e0ae13f503d1a967d4ef7bb9762e9625e5fb72480e9410bc2d1f95ae2" Namespace="kube-system" Pod="coredns-66bc5c9577-dr688" WorkloadEndpoint="ip--172--31--17--163-k8s-coredns--66bc5c9577--dr688-eth0" Apr 17 00:22:15.993049 containerd[2002]: 2026-04-17 00:22:15.953 [INFO][5059] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="abb7790e0ae13f503d1a967d4ef7bb9762e9625e5fb72480e9410bc2d1f95ae2" Namespace="kube-system" Pod="coredns-66bc5c9577-dr688" WorkloadEndpoint="ip--172--31--17--163-k8s-coredns--66bc5c9577--dr688-eth0" Apr 17 00:22:15.993049 containerd[2002]: 2026-04-17 00:22:15.955 [INFO][5059] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="abb7790e0ae13f503d1a967d4ef7bb9762e9625e5fb72480e9410bc2d1f95ae2" Namespace="kube-system" Pod="coredns-66bc5c9577-dr688" WorkloadEndpoint="ip--172--31--17--163-k8s-coredns--66bc5c9577--dr688-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--163-k8s-coredns--66bc5c9577--dr688-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"bcf739fd-30d6-4de6-aa6a-a1d7e5ed1cfc", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 0, 21, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-163", ContainerID:"abb7790e0ae13f503d1a967d4ef7bb9762e9625e5fb72480e9410bc2d1f95ae2", Pod:"coredns-66bc5c9577-dr688", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib3d96c58243", MAC:"26:5e:1d:16:70:79", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 00:22:15.993049 containerd[2002]: 2026-04-17 00:22:15.979 [INFO][5059] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="abb7790e0ae13f503d1a967d4ef7bb9762e9625e5fb72480e9410bc2d1f95ae2" Namespace="kube-system" Pod="coredns-66bc5c9577-dr688" WorkloadEndpoint="ip--172--31--17--163-k8s-coredns--66bc5c9577--dr688-eth0" Apr 17 00:22:16.086058 containerd[2002]: time="2026-04-17T00:22:16.084952954Z" level=info msg="connecting to shim abb7790e0ae13f503d1a967d4ef7bb9762e9625e5fb72480e9410bc2d1f95ae2" address="unix:///run/containerd/s/bd78832647f4cb1f9d34361083fa1173a641e68c754c40be1cc5e1f2db3f76f3" namespace=k8s.io protocol=ttrpc version=3 Apr 17 00:22:16.128004 systemd[1]: Started cri-containerd-abb7790e0ae13f503d1a967d4ef7bb9762e9625e5fb72480e9410bc2d1f95ae2.scope - libcontainer container abb7790e0ae13f503d1a967d4ef7bb9762e9625e5fb72480e9410bc2d1f95ae2. 
Apr 17 00:22:16.185121 containerd[2002]: time="2026-04-17T00:22:16.185017246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dr688,Uid:bcf739fd-30d6-4de6-aa6a-a1d7e5ed1cfc,Namespace:kube-system,Attempt:0,} returns sandbox id \"abb7790e0ae13f503d1a967d4ef7bb9762e9625e5fb72480e9410bc2d1f95ae2\"" Apr 17 00:22:16.192391 containerd[2002]: time="2026-04-17T00:22:16.192123043Z" level=info msg="CreateContainer within sandbox \"abb7790e0ae13f503d1a967d4ef7bb9762e9625e5fb72480e9410bc2d1f95ae2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 00:22:16.227336 containerd[2002]: time="2026-04-17T00:22:16.227229798Z" level=info msg="Container 10426745ef8d8bf119f6916fe1999365cf00806df4c242753d22e0f7dcfb87e8: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:22:16.228568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1506666319.mount: Deactivated successfully. Apr 17 00:22:16.237475 containerd[2002]: time="2026-04-17T00:22:16.237413222Z" level=info msg="CreateContainer within sandbox \"abb7790e0ae13f503d1a967d4ef7bb9762e9625e5fb72480e9410bc2d1f95ae2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"10426745ef8d8bf119f6916fe1999365cf00806df4c242753d22e0f7dcfb87e8\"" Apr 17 00:22:16.238260 containerd[2002]: time="2026-04-17T00:22:16.238224984Z" level=info msg="StartContainer for \"10426745ef8d8bf119f6916fe1999365cf00806df4c242753d22e0f7dcfb87e8\"" Apr 17 00:22:16.239283 containerd[2002]: time="2026-04-17T00:22:16.239243125Z" level=info msg="connecting to shim 10426745ef8d8bf119f6916fe1999365cf00806df4c242753d22e0f7dcfb87e8" address="unix:///run/containerd/s/bd78832647f4cb1f9d34361083fa1173a641e68c754c40be1cc5e1f2db3f76f3" protocol=ttrpc version=3 Apr 17 00:22:16.258937 systemd[1]: Started cri-containerd-10426745ef8d8bf119f6916fe1999365cf00806df4c242753d22e0f7dcfb87e8.scope - libcontainer container 10426745ef8d8bf119f6916fe1999365cf00806df4c242753d22e0f7dcfb87e8. 
Apr 17 00:22:16.295709 containerd[2002]: time="2026-04-17T00:22:16.295673783Z" level=info msg="StartContainer for \"10426745ef8d8bf119f6916fe1999365cf00806df4c242753d22e0f7dcfb87e8\" returns successfully" Apr 17 00:22:16.555211 containerd[2002]: time="2026-04-17T00:22:16.555114029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cdb595876-j56j2,Uid:5c67ec73-7d3f-4924-974e-10ac71826e12,Namespace:calico-system,Attempt:0,}" Apr 17 00:22:16.735221 systemd-networkd[1780]: cali716abefe93e: Link UP Apr 17 00:22:16.736197 systemd-networkd[1780]: cali716abefe93e: Gained carrier Apr 17 00:22:16.756247 kubelet[3317]: I0417 00:22:16.756173 3317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-dr688" podStartSLOduration=54.756149693 podStartE2EDuration="54.756149693s" podCreationTimestamp="2026-04-17 00:21:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 00:22:16.483783678 +0000 UTC m=+60.116470532" watchObservedRunningTime="2026-04-17 00:22:16.756149693 +0000 UTC m=+60.388836527" Apr 17 00:22:16.765822 containerd[2002]: 2026-04-17 00:22:16.623 [INFO][5178] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--163-k8s-calico--kube--controllers--7cdb595876--j56j2-eth0 calico-kube-controllers-7cdb595876- calico-system 5c67ec73-7d3f-4924-974e-10ac71826e12 889 0 2026-04-17 00:21:39 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7cdb595876 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-17-163 calico-kube-controllers-7cdb595876-j56j2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali716abefe93e [] 
[] }} ContainerID="4829a1ad5a9afe1c05afde46d31655f05f4f0777d8c5b92a05ba28e049e51f6f" Namespace="calico-system" Pod="calico-kube-controllers-7cdb595876-j56j2" WorkloadEndpoint="ip--172--31--17--163-k8s-calico--kube--controllers--7cdb595876--j56j2-" Apr 17 00:22:16.765822 containerd[2002]: 2026-04-17 00:22:16.624 [INFO][5178] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4829a1ad5a9afe1c05afde46d31655f05f4f0777d8c5b92a05ba28e049e51f6f" Namespace="calico-system" Pod="calico-kube-controllers-7cdb595876-j56j2" WorkloadEndpoint="ip--172--31--17--163-k8s-calico--kube--controllers--7cdb595876--j56j2-eth0" Apr 17 00:22:16.765822 containerd[2002]: 2026-04-17 00:22:16.680 [INFO][5192] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4829a1ad5a9afe1c05afde46d31655f05f4f0777d8c5b92a05ba28e049e51f6f" HandleID="k8s-pod-network.4829a1ad5a9afe1c05afde46d31655f05f4f0777d8c5b92a05ba28e049e51f6f" Workload="ip--172--31--17--163-k8s-calico--kube--controllers--7cdb595876--j56j2-eth0" Apr 17 00:22:16.765822 containerd[2002]: 2026-04-17 00:22:16.688 [INFO][5192] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4829a1ad5a9afe1c05afde46d31655f05f4f0777d8c5b92a05ba28e049e51f6f" HandleID="k8s-pod-network.4829a1ad5a9afe1c05afde46d31655f05f4f0777d8c5b92a05ba28e049e51f6f" Workload="ip--172--31--17--163-k8s-calico--kube--controllers--7cdb595876--j56j2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fdf70), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-163", "pod":"calico-kube-controllers-7cdb595876-j56j2", "timestamp":"2026-04-17 00:22:16.680124281 +0000 UTC"}, Hostname:"ip-172-31-17-163", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003c6000)} Apr 17 00:22:16.765822 containerd[2002]: 
2026-04-17 00:22:16.688 [INFO][5192] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 00:22:16.765822 containerd[2002]: 2026-04-17 00:22:16.688 [INFO][5192] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 00:22:16.765822 containerd[2002]: 2026-04-17 00:22:16.688 [INFO][5192] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-163' Apr 17 00:22:16.765822 containerd[2002]: 2026-04-17 00:22:16.691 [INFO][5192] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4829a1ad5a9afe1c05afde46d31655f05f4f0777d8c5b92a05ba28e049e51f6f" host="ip-172-31-17-163" Apr 17 00:22:16.765822 containerd[2002]: 2026-04-17 00:22:16.696 [INFO][5192] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-17-163" Apr 17 00:22:16.765822 containerd[2002]: 2026-04-17 00:22:16.702 [INFO][5192] ipam/ipam.go 526: Trying affinity for 192.168.122.128/26 host="ip-172-31-17-163" Apr 17 00:22:16.765822 containerd[2002]: 2026-04-17 00:22:16.704 [INFO][5192] ipam/ipam.go 160: Attempting to load block cidr=192.168.122.128/26 host="ip-172-31-17-163" Apr 17 00:22:16.765822 containerd[2002]: 2026-04-17 00:22:16.706 [INFO][5192] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.122.128/26 host="ip-172-31-17-163" Apr 17 00:22:16.765822 containerd[2002]: 2026-04-17 00:22:16.706 [INFO][5192] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.122.128/26 handle="k8s-pod-network.4829a1ad5a9afe1c05afde46d31655f05f4f0777d8c5b92a05ba28e049e51f6f" host="ip-172-31-17-163" Apr 17 00:22:16.765822 containerd[2002]: 2026-04-17 00:22:16.708 [INFO][5192] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4829a1ad5a9afe1c05afde46d31655f05f4f0777d8c5b92a05ba28e049e51f6f Apr 17 00:22:16.765822 containerd[2002]: 2026-04-17 00:22:16.714 [INFO][5192] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.122.128/26 
handle="k8s-pod-network.4829a1ad5a9afe1c05afde46d31655f05f4f0777d8c5b92a05ba28e049e51f6f" host="ip-172-31-17-163" Apr 17 00:22:16.765822 containerd[2002]: 2026-04-17 00:22:16.723 [INFO][5192] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.122.131/26] block=192.168.122.128/26 handle="k8s-pod-network.4829a1ad5a9afe1c05afde46d31655f05f4f0777d8c5b92a05ba28e049e51f6f" host="ip-172-31-17-163" Apr 17 00:22:16.765822 containerd[2002]: 2026-04-17 00:22:16.723 [INFO][5192] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.122.131/26] handle="k8s-pod-network.4829a1ad5a9afe1c05afde46d31655f05f4f0777d8c5b92a05ba28e049e51f6f" host="ip-172-31-17-163" Apr 17 00:22:16.765822 containerd[2002]: 2026-04-17 00:22:16.723 [INFO][5192] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 00:22:16.765822 containerd[2002]: 2026-04-17 00:22:16.724 [INFO][5192] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.122.131/26] IPv6=[] ContainerID="4829a1ad5a9afe1c05afde46d31655f05f4f0777d8c5b92a05ba28e049e51f6f" HandleID="k8s-pod-network.4829a1ad5a9afe1c05afde46d31655f05f4f0777d8c5b92a05ba28e049e51f6f" Workload="ip--172--31--17--163-k8s-calico--kube--controllers--7cdb595876--j56j2-eth0" Apr 17 00:22:16.768970 containerd[2002]: 2026-04-17 00:22:16.727 [INFO][5178] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4829a1ad5a9afe1c05afde46d31655f05f4f0777d8c5b92a05ba28e049e51f6f" Namespace="calico-system" Pod="calico-kube-controllers-7cdb595876-j56j2" WorkloadEndpoint="ip--172--31--17--163-k8s-calico--kube--controllers--7cdb595876--j56j2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--163-k8s-calico--kube--controllers--7cdb595876--j56j2-eth0", GenerateName:"calico-kube-controllers-7cdb595876-", Namespace:"calico-system", SelfLink:"", UID:"5c67ec73-7d3f-4924-974e-10ac71826e12", ResourceVersion:"889", Generation:0, 
CreationTimestamp:time.Date(2026, time.April, 17, 0, 21, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cdb595876", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-163", ContainerID:"", Pod:"calico-kube-controllers-7cdb595876-j56j2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.122.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali716abefe93e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 00:22:16.768970 containerd[2002]: 2026-04-17 00:22:16.727 [INFO][5178] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.131/32] ContainerID="4829a1ad5a9afe1c05afde46d31655f05f4f0777d8c5b92a05ba28e049e51f6f" Namespace="calico-system" Pod="calico-kube-controllers-7cdb595876-j56j2" WorkloadEndpoint="ip--172--31--17--163-k8s-calico--kube--controllers--7cdb595876--j56j2-eth0" Apr 17 00:22:16.768970 containerd[2002]: 2026-04-17 00:22:16.727 [INFO][5178] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali716abefe93e ContainerID="4829a1ad5a9afe1c05afde46d31655f05f4f0777d8c5b92a05ba28e049e51f6f" Namespace="calico-system" Pod="calico-kube-controllers-7cdb595876-j56j2" WorkloadEndpoint="ip--172--31--17--163-k8s-calico--kube--controllers--7cdb595876--j56j2-eth0" Apr 17 00:22:16.768970 containerd[2002]: 2026-04-17 
00:22:16.736 [INFO][5178] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4829a1ad5a9afe1c05afde46d31655f05f4f0777d8c5b92a05ba28e049e51f6f" Namespace="calico-system" Pod="calico-kube-controllers-7cdb595876-j56j2" WorkloadEndpoint="ip--172--31--17--163-k8s-calico--kube--controllers--7cdb595876--j56j2-eth0" Apr 17 00:22:16.768970 containerd[2002]: 2026-04-17 00:22:16.738 [INFO][5178] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4829a1ad5a9afe1c05afde46d31655f05f4f0777d8c5b92a05ba28e049e51f6f" Namespace="calico-system" Pod="calico-kube-controllers-7cdb595876-j56j2" WorkloadEndpoint="ip--172--31--17--163-k8s-calico--kube--controllers--7cdb595876--j56j2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--163-k8s-calico--kube--controllers--7cdb595876--j56j2-eth0", GenerateName:"calico-kube-controllers-7cdb595876-", Namespace:"calico-system", SelfLink:"", UID:"5c67ec73-7d3f-4924-974e-10ac71826e12", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 0, 21, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cdb595876", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-163", ContainerID:"4829a1ad5a9afe1c05afde46d31655f05f4f0777d8c5b92a05ba28e049e51f6f", Pod:"calico-kube-controllers-7cdb595876-j56j2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.122.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali716abefe93e", MAC:"5e:d1:97:b1:03:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 00:22:16.768970 containerd[2002]: 2026-04-17 00:22:16.759 [INFO][5178] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4829a1ad5a9afe1c05afde46d31655f05f4f0777d8c5b92a05ba28e049e51f6f" Namespace="calico-system" Pod="calico-kube-controllers-7cdb595876-j56j2" WorkloadEndpoint="ip--172--31--17--163-k8s-calico--kube--controllers--7cdb595876--j56j2-eth0" Apr 17 00:22:16.812030 containerd[2002]: time="2026-04-17T00:22:16.810701395Z" level=info msg="connecting to shim 4829a1ad5a9afe1c05afde46d31655f05f4f0777d8c5b92a05ba28e049e51f6f" address="unix:///run/containerd/s/243b2b66c032d0ee2c7a44dfa7ebe9ccb82f1966175739e686b05a7c6f6c16c2" namespace=k8s.io protocol=ttrpc version=3 Apr 17 00:22:16.872143 systemd[1]: Started cri-containerd-4829a1ad5a9afe1c05afde46d31655f05f4f0777d8c5b92a05ba28e049e51f6f.scope - libcontainer container 4829a1ad5a9afe1c05afde46d31655f05f4f0777d8c5b92a05ba28e049e51f6f. 
Apr 17 00:22:16.990933 containerd[2002]: time="2026-04-17T00:22:16.986716166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cdb595876-j56j2,Uid:5c67ec73-7d3f-4924-974e-10ac71826e12,Namespace:calico-system,Attempt:0,} returns sandbox id \"4829a1ad5a9afe1c05afde46d31655f05f4f0777d8c5b92a05ba28e049e51f6f\"" Apr 17 00:22:16.995669 containerd[2002]: time="2026-04-17T00:22:16.995603399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 17 00:22:17.010966 systemd-networkd[1780]: calib3d96c58243: Gained IPv6LL Apr 17 00:22:17.553395 containerd[2002]: time="2026-04-17T00:22:17.553350875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-8djnz,Uid:cc519b59-dfc2-4b7e-ba52-f6ef50a332cb,Namespace:calico-system,Attempt:0,}" Apr 17 00:22:17.555910 containerd[2002]: time="2026-04-17T00:22:17.555867337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bvsrn,Uid:6e4c9621-343d-439f-bfb3-71c69fe08c37,Namespace:calico-system,Attempt:0,}" Apr 17 00:22:17.758182 systemd-networkd[1780]: calia2b05eed4c8: Link UP Apr 17 00:22:17.759703 systemd-networkd[1780]: calia2b05eed4c8: Gained carrier Apr 17 00:22:17.779706 containerd[2002]: 2026-04-17 00:22:17.658 [INFO][5280] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--163-k8s-goldmane--cccfbd5cf--8djnz-eth0 goldmane-cccfbd5cf- calico-system cc519b59-dfc2-4b7e-ba52-f6ef50a332cb 888 0 2026-04-17 00:21:38 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-17-163 goldmane-cccfbd5cf-8djnz eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia2b05eed4c8 [] [] }} ContainerID="e05ccaf8aa1dd6df65680f8788bdc6b79296896c81fdfc04706846c0a53aedc0" 
Namespace="calico-system" Pod="goldmane-cccfbd5cf-8djnz" WorkloadEndpoint="ip--172--31--17--163-k8s-goldmane--cccfbd5cf--8djnz-" Apr 17 00:22:17.779706 containerd[2002]: 2026-04-17 00:22:17.658 [INFO][5280] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e05ccaf8aa1dd6df65680f8788bdc6b79296896c81fdfc04706846c0a53aedc0" Namespace="calico-system" Pod="goldmane-cccfbd5cf-8djnz" WorkloadEndpoint="ip--172--31--17--163-k8s-goldmane--cccfbd5cf--8djnz-eth0" Apr 17 00:22:17.779706 containerd[2002]: 2026-04-17 00:22:17.703 [INFO][5306] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e05ccaf8aa1dd6df65680f8788bdc6b79296896c81fdfc04706846c0a53aedc0" HandleID="k8s-pod-network.e05ccaf8aa1dd6df65680f8788bdc6b79296896c81fdfc04706846c0a53aedc0" Workload="ip--172--31--17--163-k8s-goldmane--cccfbd5cf--8djnz-eth0" Apr 17 00:22:17.779706 containerd[2002]: 2026-04-17 00:22:17.715 [INFO][5306] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e05ccaf8aa1dd6df65680f8788bdc6b79296896c81fdfc04706846c0a53aedc0" HandleID="k8s-pod-network.e05ccaf8aa1dd6df65680f8788bdc6b79296896c81fdfc04706846c0a53aedc0" Workload="ip--172--31--17--163-k8s-goldmane--cccfbd5cf--8djnz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277470), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-163", "pod":"goldmane-cccfbd5cf-8djnz", "timestamp":"2026-04-17 00:22:17.703795662 +0000 UTC"}, Hostname:"ip-172-31-17-163", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001e31e0)} Apr 17 00:22:17.779706 containerd[2002]: 2026-04-17 00:22:17.715 [INFO][5306] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 17 00:22:17.779706 containerd[2002]: 2026-04-17 00:22:17.716 [INFO][5306] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 00:22:17.779706 containerd[2002]: 2026-04-17 00:22:17.716 [INFO][5306] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-163' Apr 17 00:22:17.779706 containerd[2002]: 2026-04-17 00:22:17.719 [INFO][5306] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e05ccaf8aa1dd6df65680f8788bdc6b79296896c81fdfc04706846c0a53aedc0" host="ip-172-31-17-163" Apr 17 00:22:17.779706 containerd[2002]: 2026-04-17 00:22:17.724 [INFO][5306] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-17-163" Apr 17 00:22:17.779706 containerd[2002]: 2026-04-17 00:22:17.728 [INFO][5306] ipam/ipam.go 526: Trying affinity for 192.168.122.128/26 host="ip-172-31-17-163" Apr 17 00:22:17.779706 containerd[2002]: 2026-04-17 00:22:17.731 [INFO][5306] ipam/ipam.go 160: Attempting to load block cidr=192.168.122.128/26 host="ip-172-31-17-163" Apr 17 00:22:17.779706 containerd[2002]: 2026-04-17 00:22:17.733 [INFO][5306] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.122.128/26 host="ip-172-31-17-163" Apr 17 00:22:17.779706 containerd[2002]: 2026-04-17 00:22:17.733 [INFO][5306] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.122.128/26 handle="k8s-pod-network.e05ccaf8aa1dd6df65680f8788bdc6b79296896c81fdfc04706846c0a53aedc0" host="ip-172-31-17-163" Apr 17 00:22:17.779706 containerd[2002]: 2026-04-17 00:22:17.735 [INFO][5306] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e05ccaf8aa1dd6df65680f8788bdc6b79296896c81fdfc04706846c0a53aedc0 Apr 17 00:22:17.779706 containerd[2002]: 2026-04-17 00:22:17.742 [INFO][5306] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.122.128/26 handle="k8s-pod-network.e05ccaf8aa1dd6df65680f8788bdc6b79296896c81fdfc04706846c0a53aedc0" host="ip-172-31-17-163" Apr 17 00:22:17.779706 
containerd[2002]: 2026-04-17 00:22:17.751 [INFO][5306] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.122.132/26] block=192.168.122.128/26 handle="k8s-pod-network.e05ccaf8aa1dd6df65680f8788bdc6b79296896c81fdfc04706846c0a53aedc0" host="ip-172-31-17-163" Apr 17 00:22:17.779706 containerd[2002]: 2026-04-17 00:22:17.751 [INFO][5306] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.122.132/26] handle="k8s-pod-network.e05ccaf8aa1dd6df65680f8788bdc6b79296896c81fdfc04706846c0a53aedc0" host="ip-172-31-17-163" Apr 17 00:22:17.779706 containerd[2002]: 2026-04-17 00:22:17.751 [INFO][5306] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 00:22:17.779706 containerd[2002]: 2026-04-17 00:22:17.751 [INFO][5306] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.122.132/26] IPv6=[] ContainerID="e05ccaf8aa1dd6df65680f8788bdc6b79296896c81fdfc04706846c0a53aedc0" HandleID="k8s-pod-network.e05ccaf8aa1dd6df65680f8788bdc6b79296896c81fdfc04706846c0a53aedc0" Workload="ip--172--31--17--163-k8s-goldmane--cccfbd5cf--8djnz-eth0" Apr 17 00:22:17.782137 containerd[2002]: 2026-04-17 00:22:17.753 [INFO][5280] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e05ccaf8aa1dd6df65680f8788bdc6b79296896c81fdfc04706846c0a53aedc0" Namespace="calico-system" Pod="goldmane-cccfbd5cf-8djnz" WorkloadEndpoint="ip--172--31--17--163-k8s-goldmane--cccfbd5cf--8djnz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--163-k8s-goldmane--cccfbd5cf--8djnz-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"cc519b59-dfc2-4b7e-ba52-f6ef50a332cb", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 0, 21, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", 
"pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-163", ContainerID:"", Pod:"goldmane-cccfbd5cf-8djnz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.122.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia2b05eed4c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 00:22:17.782137 containerd[2002]: 2026-04-17 00:22:17.754 [INFO][5280] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.132/32] ContainerID="e05ccaf8aa1dd6df65680f8788bdc6b79296896c81fdfc04706846c0a53aedc0" Namespace="calico-system" Pod="goldmane-cccfbd5cf-8djnz" WorkloadEndpoint="ip--172--31--17--163-k8s-goldmane--cccfbd5cf--8djnz-eth0" Apr 17 00:22:17.782137 containerd[2002]: 2026-04-17 00:22:17.754 [INFO][5280] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia2b05eed4c8 ContainerID="e05ccaf8aa1dd6df65680f8788bdc6b79296896c81fdfc04706846c0a53aedc0" Namespace="calico-system" Pod="goldmane-cccfbd5cf-8djnz" WorkloadEndpoint="ip--172--31--17--163-k8s-goldmane--cccfbd5cf--8djnz-eth0" Apr 17 00:22:17.782137 containerd[2002]: 2026-04-17 00:22:17.760 [INFO][5280] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e05ccaf8aa1dd6df65680f8788bdc6b79296896c81fdfc04706846c0a53aedc0" Namespace="calico-system" Pod="goldmane-cccfbd5cf-8djnz" WorkloadEndpoint="ip--172--31--17--163-k8s-goldmane--cccfbd5cf--8djnz-eth0" Apr 17 00:22:17.782137 containerd[2002]: 2026-04-17 00:22:17.761 [INFO][5280] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e05ccaf8aa1dd6df65680f8788bdc6b79296896c81fdfc04706846c0a53aedc0" Namespace="calico-system" Pod="goldmane-cccfbd5cf-8djnz" WorkloadEndpoint="ip--172--31--17--163-k8s-goldmane--cccfbd5cf--8djnz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--163-k8s-goldmane--cccfbd5cf--8djnz-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"cc519b59-dfc2-4b7e-ba52-f6ef50a332cb", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 0, 21, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-163", ContainerID:"e05ccaf8aa1dd6df65680f8788bdc6b79296896c81fdfc04706846c0a53aedc0", Pod:"goldmane-cccfbd5cf-8djnz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.122.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia2b05eed4c8", MAC:"1a:26:9e:2d:8a:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 00:22:17.782137 containerd[2002]: 2026-04-17 00:22:17.775 [INFO][5280] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="e05ccaf8aa1dd6df65680f8788bdc6b79296896c81fdfc04706846c0a53aedc0" Namespace="calico-system" Pod="goldmane-cccfbd5cf-8djnz" WorkloadEndpoint="ip--172--31--17--163-k8s-goldmane--cccfbd5cf--8djnz-eth0" Apr 17 00:22:17.832277 containerd[2002]: time="2026-04-17T00:22:17.832000378Z" level=info msg="connecting to shim e05ccaf8aa1dd6df65680f8788bdc6b79296896c81fdfc04706846c0a53aedc0" address="unix:///run/containerd/s/c10378c2b363fd7e36a764dfd8aa1ce133f7780d2dc39f6384b9be1440fc6c24" namespace=k8s.io protocol=ttrpc version=3 Apr 17 00:22:17.897006 systemd[1]: Started cri-containerd-e05ccaf8aa1dd6df65680f8788bdc6b79296896c81fdfc04706846c0a53aedc0.scope - libcontainer container e05ccaf8aa1dd6df65680f8788bdc6b79296896c81fdfc04706846c0a53aedc0. Apr 17 00:22:17.966106 systemd-networkd[1780]: calic9897323c4f: Link UP Apr 17 00:22:17.968131 systemd-networkd[1780]: calic9897323c4f: Gained carrier Apr 17 00:22:18.027895 containerd[2002]: time="2026-04-17T00:22:18.027846394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-8djnz,Uid:cc519b59-dfc2-4b7e-ba52-f6ef50a332cb,Namespace:calico-system,Attempt:0,} returns sandbox id \"e05ccaf8aa1dd6df65680f8788bdc6b79296896c81fdfc04706846c0a53aedc0\"" Apr 17 00:22:18.035198 containerd[2002]: 2026-04-17 00:22:17.662 [INFO][5281] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--163-k8s-csi--node--driver--bvsrn-eth0 csi-node-driver- calico-system 6e4c9621-343d-439f-bfb3-71c69fe08c37 737 0 2026-04-17 00:21:39 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-17-163 csi-node-driver-bvsrn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] 
calic9897323c4f [] [] }} ContainerID="669910a5449f71f8dafae17dfa7045ad962041f7980134b311df131dd6c1d9d2" Namespace="calico-system" Pod="csi-node-driver-bvsrn" WorkloadEndpoint="ip--172--31--17--163-k8s-csi--node--driver--bvsrn-" Apr 17 00:22:18.035198 containerd[2002]: 2026-04-17 00:22:17.663 [INFO][5281] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="669910a5449f71f8dafae17dfa7045ad962041f7980134b311df131dd6c1d9d2" Namespace="calico-system" Pod="csi-node-driver-bvsrn" WorkloadEndpoint="ip--172--31--17--163-k8s-csi--node--driver--bvsrn-eth0" Apr 17 00:22:18.035198 containerd[2002]: 2026-04-17 00:22:17.703 [INFO][5311] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="669910a5449f71f8dafae17dfa7045ad962041f7980134b311df131dd6c1d9d2" HandleID="k8s-pod-network.669910a5449f71f8dafae17dfa7045ad962041f7980134b311df131dd6c1d9d2" Workload="ip--172--31--17--163-k8s-csi--node--driver--bvsrn-eth0" Apr 17 00:22:18.035198 containerd[2002]: 2026-04-17 00:22:17.718 [INFO][5311] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="669910a5449f71f8dafae17dfa7045ad962041f7980134b311df131dd6c1d9d2" HandleID="k8s-pod-network.669910a5449f71f8dafae17dfa7045ad962041f7980134b311df131dd6c1d9d2" Workload="ip--172--31--17--163-k8s-csi--node--driver--bvsrn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e7e80), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-163", "pod":"csi-node-driver-bvsrn", "timestamp":"2026-04-17 00:22:17.703796432 +0000 UTC"}, Hostname:"ip-172-31-17-163", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001686e0)} Apr 17 00:22:18.035198 containerd[2002]: 2026-04-17 00:22:17.719 [INFO][5311] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 17 00:22:18.035198 containerd[2002]: 2026-04-17 00:22:17.751 [INFO][5311] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 00:22:18.035198 containerd[2002]: 2026-04-17 00:22:17.751 [INFO][5311] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-163' Apr 17 00:22:18.035198 containerd[2002]: 2026-04-17 00:22:17.830 [INFO][5311] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.669910a5449f71f8dafae17dfa7045ad962041f7980134b311df131dd6c1d9d2" host="ip-172-31-17-163" Apr 17 00:22:18.035198 containerd[2002]: 2026-04-17 00:22:17.866 [INFO][5311] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-17-163" Apr 17 00:22:18.035198 containerd[2002]: 2026-04-17 00:22:17.885 [INFO][5311] ipam/ipam.go 526: Trying affinity for 192.168.122.128/26 host="ip-172-31-17-163" Apr 17 00:22:18.035198 containerd[2002]: 2026-04-17 00:22:17.895 [INFO][5311] ipam/ipam.go 160: Attempting to load block cidr=192.168.122.128/26 host="ip-172-31-17-163" Apr 17 00:22:18.035198 containerd[2002]: 2026-04-17 00:22:17.912 [INFO][5311] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.122.128/26 host="ip-172-31-17-163" Apr 17 00:22:18.035198 containerd[2002]: 2026-04-17 00:22:17.913 [INFO][5311] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.122.128/26 handle="k8s-pod-network.669910a5449f71f8dafae17dfa7045ad962041f7980134b311df131dd6c1d9d2" host="ip-172-31-17-163" Apr 17 00:22:18.035198 containerd[2002]: 2026-04-17 00:22:17.918 [INFO][5311] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.669910a5449f71f8dafae17dfa7045ad962041f7980134b311df131dd6c1d9d2 Apr 17 00:22:18.035198 containerd[2002]: 2026-04-17 00:22:17.932 [INFO][5311] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.122.128/26 handle="k8s-pod-network.669910a5449f71f8dafae17dfa7045ad962041f7980134b311df131dd6c1d9d2" host="ip-172-31-17-163" Apr 17 00:22:18.035198 
containerd[2002]: 2026-04-17 00:22:17.954 [INFO][5311] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.122.133/26] block=192.168.122.128/26 handle="k8s-pod-network.669910a5449f71f8dafae17dfa7045ad962041f7980134b311df131dd6c1d9d2" host="ip-172-31-17-163" Apr 17 00:22:18.035198 containerd[2002]: 2026-04-17 00:22:17.954 [INFO][5311] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.122.133/26] handle="k8s-pod-network.669910a5449f71f8dafae17dfa7045ad962041f7980134b311df131dd6c1d9d2" host="ip-172-31-17-163" Apr 17 00:22:18.035198 containerd[2002]: 2026-04-17 00:22:17.954 [INFO][5311] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 00:22:18.035198 containerd[2002]: 2026-04-17 00:22:17.955 [INFO][5311] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.122.133/26] IPv6=[] ContainerID="669910a5449f71f8dafae17dfa7045ad962041f7980134b311df131dd6c1d9d2" HandleID="k8s-pod-network.669910a5449f71f8dafae17dfa7045ad962041f7980134b311df131dd6c1d9d2" Workload="ip--172--31--17--163-k8s-csi--node--driver--bvsrn-eth0" Apr 17 00:22:18.037863 containerd[2002]: 2026-04-17 00:22:17.959 [INFO][5281] cni-plugin/k8s.go 418: Populated endpoint ContainerID="669910a5449f71f8dafae17dfa7045ad962041f7980134b311df131dd6c1d9d2" Namespace="calico-system" Pod="csi-node-driver-bvsrn" WorkloadEndpoint="ip--172--31--17--163-k8s-csi--node--driver--bvsrn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--163-k8s-csi--node--driver--bvsrn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6e4c9621-343d-439f-bfb3-71c69fe08c37", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 0, 21, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", 
"k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-163", ContainerID:"", Pod:"csi-node-driver-bvsrn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.122.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic9897323c4f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 00:22:18.037863 containerd[2002]: 2026-04-17 00:22:17.959 [INFO][5281] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.133/32] ContainerID="669910a5449f71f8dafae17dfa7045ad962041f7980134b311df131dd6c1d9d2" Namespace="calico-system" Pod="csi-node-driver-bvsrn" WorkloadEndpoint="ip--172--31--17--163-k8s-csi--node--driver--bvsrn-eth0" Apr 17 00:22:18.037863 containerd[2002]: 2026-04-17 00:22:17.959 [INFO][5281] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic9897323c4f ContainerID="669910a5449f71f8dafae17dfa7045ad962041f7980134b311df131dd6c1d9d2" Namespace="calico-system" Pod="csi-node-driver-bvsrn" WorkloadEndpoint="ip--172--31--17--163-k8s-csi--node--driver--bvsrn-eth0" Apr 17 00:22:18.037863 containerd[2002]: 2026-04-17 00:22:17.968 [INFO][5281] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="669910a5449f71f8dafae17dfa7045ad962041f7980134b311df131dd6c1d9d2" Namespace="calico-system" Pod="csi-node-driver-bvsrn" WorkloadEndpoint="ip--172--31--17--163-k8s-csi--node--driver--bvsrn-eth0" Apr 17 00:22:18.037863 
containerd[2002]: 2026-04-17 00:22:17.971 [INFO][5281] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="669910a5449f71f8dafae17dfa7045ad962041f7980134b311df131dd6c1d9d2" Namespace="calico-system" Pod="csi-node-driver-bvsrn" WorkloadEndpoint="ip--172--31--17--163-k8s-csi--node--driver--bvsrn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--163-k8s-csi--node--driver--bvsrn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6e4c9621-343d-439f-bfb3-71c69fe08c37", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 0, 21, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-163", ContainerID:"669910a5449f71f8dafae17dfa7045ad962041f7980134b311df131dd6c1d9d2", Pod:"csi-node-driver-bvsrn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.122.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic9897323c4f", MAC:"72:d3:27:08:13:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 00:22:18.037863 containerd[2002]: 2026-04-17 00:22:18.031 
[INFO][5281] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="669910a5449f71f8dafae17dfa7045ad962041f7980134b311df131dd6c1d9d2" Namespace="calico-system" Pod="csi-node-driver-bvsrn" WorkloadEndpoint="ip--172--31--17--163-k8s-csi--node--driver--bvsrn-eth0" Apr 17 00:22:18.082229 containerd[2002]: time="2026-04-17T00:22:18.082066747Z" level=info msg="connecting to shim 669910a5449f71f8dafae17dfa7045ad962041f7980134b311df131dd6c1d9d2" address="unix:///run/containerd/s/0b6b83aee491faa1b4194fb149bf8e0ff8152aacd58863a04d4e9759cd60ae60" namespace=k8s.io protocol=ttrpc version=3 Apr 17 00:22:18.126003 systemd[1]: Started cri-containerd-669910a5449f71f8dafae17dfa7045ad962041f7980134b311df131dd6c1d9d2.scope - libcontainer container 669910a5449f71f8dafae17dfa7045ad962041f7980134b311df131dd6c1d9d2. Apr 17 00:22:18.164006 systemd-networkd[1780]: cali716abefe93e: Gained IPv6LL Apr 17 00:22:18.248511 containerd[2002]: time="2026-04-17T00:22:18.248460139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bvsrn,Uid:6e4c9621-343d-439f-bfb3-71c69fe08c37,Namespace:calico-system,Attempt:0,} returns sandbox id \"669910a5449f71f8dafae17dfa7045ad962041f7980134b311df131dd6c1d9d2\"" Apr 17 00:22:18.552471 containerd[2002]: time="2026-04-17T00:22:18.552380787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-878d7484f-ngljs,Uid:4c4701d9-9047-4448-a009-ce8fbc675f90,Namespace:calico-system,Attempt:0,}" Apr 17 00:22:18.701638 systemd[1]: Started sshd@7-172.31.17.163:22-50.85.169.122:52586.service - OpenSSH per-connection server daemon (50.85.169.122:52586). 
Apr 17 00:22:18.833485 systemd-networkd[1780]: calid9892f0336d: Link UP Apr 17 00:22:18.833795 systemd-networkd[1780]: calid9892f0336d: Gained carrier Apr 17 00:22:18.864466 containerd[2002]: 2026-04-17 00:22:18.634 [INFO][5451] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--163-k8s-calico--apiserver--878d7484f--ngljs-eth0 calico-apiserver-878d7484f- calico-system 4c4701d9-9047-4448-a009-ce8fbc675f90 890 0 2026-04-17 00:21:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:878d7484f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-17-163 calico-apiserver-878d7484f-ngljs eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calid9892f0336d [] [] }} ContainerID="87179a3d9ba67b38fdfeadc043393497a3841ae31864861629d24f646bbb00f3" Namespace="calico-system" Pod="calico-apiserver-878d7484f-ngljs" WorkloadEndpoint="ip--172--31--17--163-k8s-calico--apiserver--878d7484f--ngljs-" Apr 17 00:22:18.864466 containerd[2002]: 2026-04-17 00:22:18.634 [INFO][5451] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="87179a3d9ba67b38fdfeadc043393497a3841ae31864861629d24f646bbb00f3" Namespace="calico-system" Pod="calico-apiserver-878d7484f-ngljs" WorkloadEndpoint="ip--172--31--17--163-k8s-calico--apiserver--878d7484f--ngljs-eth0" Apr 17 00:22:18.864466 containerd[2002]: 2026-04-17 00:22:18.725 [INFO][5465] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="87179a3d9ba67b38fdfeadc043393497a3841ae31864861629d24f646bbb00f3" HandleID="k8s-pod-network.87179a3d9ba67b38fdfeadc043393497a3841ae31864861629d24f646bbb00f3" Workload="ip--172--31--17--163-k8s-calico--apiserver--878d7484f--ngljs-eth0" Apr 17 00:22:18.864466 containerd[2002]: 2026-04-17 00:22:18.740 [INFO][5465] 
ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="87179a3d9ba67b38fdfeadc043393497a3841ae31864861629d24f646bbb00f3" HandleID="k8s-pod-network.87179a3d9ba67b38fdfeadc043393497a3841ae31864861629d24f646bbb00f3" Workload="ip--172--31--17--163-k8s-calico--apiserver--878d7484f--ngljs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027a130), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-163", "pod":"calico-apiserver-878d7484f-ngljs", "timestamp":"2026-04-17 00:22:18.725030651 +0000 UTC"}, Hostname:"ip-172-31-17-163", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003dd760)} Apr 17 00:22:18.864466 containerd[2002]: 2026-04-17 00:22:18.740 [INFO][5465] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 00:22:18.864466 containerd[2002]: 2026-04-17 00:22:18.740 [INFO][5465] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 00:22:18.864466 containerd[2002]: 2026-04-17 00:22:18.740 [INFO][5465] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-163' Apr 17 00:22:18.864466 containerd[2002]: 2026-04-17 00:22:18.745 [INFO][5465] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.87179a3d9ba67b38fdfeadc043393497a3841ae31864861629d24f646bbb00f3" host="ip-172-31-17-163" Apr 17 00:22:18.864466 containerd[2002]: 2026-04-17 00:22:18.759 [INFO][5465] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-17-163" Apr 17 00:22:18.864466 containerd[2002]: 2026-04-17 00:22:18.767 [INFO][5465] ipam/ipam.go 526: Trying affinity for 192.168.122.128/26 host="ip-172-31-17-163" Apr 17 00:22:18.864466 containerd[2002]: 2026-04-17 00:22:18.771 [INFO][5465] ipam/ipam.go 160: Attempting to load block cidr=192.168.122.128/26 host="ip-172-31-17-163" Apr 17 00:22:18.864466 containerd[2002]: 2026-04-17 00:22:18.775 [INFO][5465] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.122.128/26 host="ip-172-31-17-163" Apr 17 00:22:18.864466 containerd[2002]: 2026-04-17 00:22:18.775 [INFO][5465] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.122.128/26 handle="k8s-pod-network.87179a3d9ba67b38fdfeadc043393497a3841ae31864861629d24f646bbb00f3" host="ip-172-31-17-163" Apr 17 00:22:18.864466 containerd[2002]: 2026-04-17 00:22:18.779 [INFO][5465] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.87179a3d9ba67b38fdfeadc043393497a3841ae31864861629d24f646bbb00f3 Apr 17 00:22:18.864466 containerd[2002]: 2026-04-17 00:22:18.796 [INFO][5465] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.122.128/26 handle="k8s-pod-network.87179a3d9ba67b38fdfeadc043393497a3841ae31864861629d24f646bbb00f3" host="ip-172-31-17-163" Apr 17 00:22:18.864466 containerd[2002]: 2026-04-17 00:22:18.818 [INFO][5465] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.122.134/26] 
block=192.168.122.128/26 handle="k8s-pod-network.87179a3d9ba67b38fdfeadc043393497a3841ae31864861629d24f646bbb00f3" host="ip-172-31-17-163" Apr 17 00:22:18.864466 containerd[2002]: 2026-04-17 00:22:18.818 [INFO][5465] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.122.134/26] handle="k8s-pod-network.87179a3d9ba67b38fdfeadc043393497a3841ae31864861629d24f646bbb00f3" host="ip-172-31-17-163" Apr 17 00:22:18.864466 containerd[2002]: 2026-04-17 00:22:18.818 [INFO][5465] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 00:22:18.864466 containerd[2002]: 2026-04-17 00:22:18.819 [INFO][5465] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.122.134/26] IPv6=[] ContainerID="87179a3d9ba67b38fdfeadc043393497a3841ae31864861629d24f646bbb00f3" HandleID="k8s-pod-network.87179a3d9ba67b38fdfeadc043393497a3841ae31864861629d24f646bbb00f3" Workload="ip--172--31--17--163-k8s-calico--apiserver--878d7484f--ngljs-eth0" Apr 17 00:22:18.865792 containerd[2002]: 2026-04-17 00:22:18.822 [INFO][5451] cni-plugin/k8s.go 418: Populated endpoint ContainerID="87179a3d9ba67b38fdfeadc043393497a3841ae31864861629d24f646bbb00f3" Namespace="calico-system" Pod="calico-apiserver-878d7484f-ngljs" WorkloadEndpoint="ip--172--31--17--163-k8s-calico--apiserver--878d7484f--ngljs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--163-k8s-calico--apiserver--878d7484f--ngljs-eth0", GenerateName:"calico-apiserver-878d7484f-", Namespace:"calico-system", SelfLink:"", UID:"4c4701d9-9047-4448-a009-ce8fbc675f90", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 0, 21, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"878d7484f", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-163", ContainerID:"", Pod:"calico-apiserver-878d7484f-ngljs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid9892f0336d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 00:22:18.865792 containerd[2002]: 2026-04-17 00:22:18.823 [INFO][5451] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.134/32] ContainerID="87179a3d9ba67b38fdfeadc043393497a3841ae31864861629d24f646bbb00f3" Namespace="calico-system" Pod="calico-apiserver-878d7484f-ngljs" WorkloadEndpoint="ip--172--31--17--163-k8s-calico--apiserver--878d7484f--ngljs-eth0" Apr 17 00:22:18.865792 containerd[2002]: 2026-04-17 00:22:18.823 [INFO][5451] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid9892f0336d ContainerID="87179a3d9ba67b38fdfeadc043393497a3841ae31864861629d24f646bbb00f3" Namespace="calico-system" Pod="calico-apiserver-878d7484f-ngljs" WorkloadEndpoint="ip--172--31--17--163-k8s-calico--apiserver--878d7484f--ngljs-eth0" Apr 17 00:22:18.865792 containerd[2002]: 2026-04-17 00:22:18.836 [INFO][5451] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="87179a3d9ba67b38fdfeadc043393497a3841ae31864861629d24f646bbb00f3" Namespace="calico-system" Pod="calico-apiserver-878d7484f-ngljs" WorkloadEndpoint="ip--172--31--17--163-k8s-calico--apiserver--878d7484f--ngljs-eth0" Apr 17 00:22:18.865792 containerd[2002]: 
2026-04-17 00:22:18.839 [INFO][5451] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="87179a3d9ba67b38fdfeadc043393497a3841ae31864861629d24f646bbb00f3" Namespace="calico-system" Pod="calico-apiserver-878d7484f-ngljs" WorkloadEndpoint="ip--172--31--17--163-k8s-calico--apiserver--878d7484f--ngljs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--163-k8s-calico--apiserver--878d7484f--ngljs-eth0", GenerateName:"calico-apiserver-878d7484f-", Namespace:"calico-system", SelfLink:"", UID:"4c4701d9-9047-4448-a009-ce8fbc675f90", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 0, 21, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"878d7484f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-163", ContainerID:"87179a3d9ba67b38fdfeadc043393497a3841ae31864861629d24f646bbb00f3", Pod:"calico-apiserver-878d7484f-ngljs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid9892f0336d", MAC:"1e:ab:ae:a6:8a:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 00:22:18.865792 containerd[2002]: 2026-04-17 00:22:18.859 
[INFO][5451] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="87179a3d9ba67b38fdfeadc043393497a3841ae31864861629d24f646bbb00f3" Namespace="calico-system" Pod="calico-apiserver-878d7484f-ngljs" WorkloadEndpoint="ip--172--31--17--163-k8s-calico--apiserver--878d7484f--ngljs-eth0" Apr 17 00:22:18.959480 containerd[2002]: time="2026-04-17T00:22:18.958822571Z" level=info msg="connecting to shim 87179a3d9ba67b38fdfeadc043393497a3841ae31864861629d24f646bbb00f3" address="unix:///run/containerd/s/17e860799abec9471801df772c3972d246a5ca3f3d490d650c091e85457a88df" namespace=k8s.io protocol=ttrpc version=3 Apr 17 00:22:19.018143 systemd[1]: Started cri-containerd-87179a3d9ba67b38fdfeadc043393497a3841ae31864861629d24f646bbb00f3.scope - libcontainer container 87179a3d9ba67b38fdfeadc043393497a3841ae31864861629d24f646bbb00f3. Apr 17 00:22:19.153173 containerd[2002]: time="2026-04-17T00:22:19.152701922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-878d7484f-ngljs,Uid:4c4701d9-9047-4448-a009-ce8fbc675f90,Namespace:calico-system,Attempt:0,} returns sandbox id \"87179a3d9ba67b38fdfeadc043393497a3841ae31864861629d24f646bbb00f3\"" Apr 17 00:22:19.559689 containerd[2002]: time="2026-04-17T00:22:19.559646680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-krz7c,Uid:faf227a2-c41a-476d-ac2e-763e2502ebdb,Namespace:kube-system,Attempt:0,}" Apr 17 00:22:19.573122 systemd-networkd[1780]: calia2b05eed4c8: Gained IPv6LL Apr 17 00:22:19.592464 containerd[2002]: time="2026-04-17T00:22:19.587678008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-878d7484f-md97m,Uid:2f68e2cd-6388-416b-9cb5-5cf309947192,Namespace:calico-system,Attempt:0,}" Apr 17 00:22:19.738624 sshd[5470]: Accepted publickey for core from 50.85.169.122 port 52586 ssh2: RSA SHA256:Wn1bWdRXva+ZTDpuZ5i38vIIX/QMobuurL9Av6c2ILM Apr 17 00:22:19.744302 sshd-session[5470]: pam_unix(sshd:session): session opened for user 
core(uid=500) by core(uid=0) Apr 17 00:22:19.758065 systemd-logind[1959]: New session 8 of user core. Apr 17 00:22:19.763923 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 17 00:22:19.891469 systemd-networkd[1780]: calic9897323c4f: Gained IPv6LL Apr 17 00:22:19.960325 systemd-networkd[1780]: caliccddcf93129: Link UP Apr 17 00:22:19.962888 systemd-networkd[1780]: caliccddcf93129: Gained carrier Apr 17 00:22:19.999768 containerd[2002]: 2026-04-17 00:22:19.712 [INFO][5557] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--163-k8s-coredns--66bc5c9577--krz7c-eth0 coredns-66bc5c9577- kube-system faf227a2-c41a-476d-ac2e-763e2502ebdb 887 0 2026-04-17 00:21:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-17-163 coredns-66bc5c9577-krz7c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliccddcf93129 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="0f88213eb604b5d8d13d4274563ea1dc34a80d1a61769333fa960d1084e6bab3" Namespace="kube-system" Pod="coredns-66bc5c9577-krz7c" WorkloadEndpoint="ip--172--31--17--163-k8s-coredns--66bc5c9577--krz7c-" Apr 17 00:22:19.999768 containerd[2002]: 2026-04-17 00:22:19.712 [INFO][5557] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0f88213eb604b5d8d13d4274563ea1dc34a80d1a61769333fa960d1084e6bab3" Namespace="kube-system" Pod="coredns-66bc5c9577-krz7c" WorkloadEndpoint="ip--172--31--17--163-k8s-coredns--66bc5c9577--krz7c-eth0" Apr 17 00:22:19.999768 containerd[2002]: 2026-04-17 00:22:19.789 [INFO][5585] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0f88213eb604b5d8d13d4274563ea1dc34a80d1a61769333fa960d1084e6bab3" 
HandleID="k8s-pod-network.0f88213eb604b5d8d13d4274563ea1dc34a80d1a61769333fa960d1084e6bab3" Workload="ip--172--31--17--163-k8s-coredns--66bc5c9577--krz7c-eth0" Apr 17 00:22:19.999768 containerd[2002]: 2026-04-17 00:22:19.820 [INFO][5585] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0f88213eb604b5d8d13d4274563ea1dc34a80d1a61769333fa960d1084e6bab3" HandleID="k8s-pod-network.0f88213eb604b5d8d13d4274563ea1dc34a80d1a61769333fa960d1084e6bab3" Workload="ip--172--31--17--163-k8s-coredns--66bc5c9577--krz7c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fbe80), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-17-163", "pod":"coredns-66bc5c9577-krz7c", "timestamp":"2026-04-17 00:22:19.789250085 +0000 UTC"}, Hostname:"ip-172-31-17-163", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000194000)} Apr 17 00:22:19.999768 containerd[2002]: 2026-04-17 00:22:19.820 [INFO][5585] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 00:22:19.999768 containerd[2002]: 2026-04-17 00:22:19.821 [INFO][5585] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 00:22:19.999768 containerd[2002]: 2026-04-17 00:22:19.821 [INFO][5585] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-163' Apr 17 00:22:19.999768 containerd[2002]: 2026-04-17 00:22:19.835 [INFO][5585] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0f88213eb604b5d8d13d4274563ea1dc34a80d1a61769333fa960d1084e6bab3" host="ip-172-31-17-163" Apr 17 00:22:19.999768 containerd[2002]: 2026-04-17 00:22:19.864 [INFO][5585] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-17-163" Apr 17 00:22:19.999768 containerd[2002]: 2026-04-17 00:22:19.885 [INFO][5585] ipam/ipam.go 526: Trying affinity for 192.168.122.128/26 host="ip-172-31-17-163" Apr 17 00:22:19.999768 containerd[2002]: 2026-04-17 00:22:19.892 [INFO][5585] ipam/ipam.go 160: Attempting to load block cidr=192.168.122.128/26 host="ip-172-31-17-163" Apr 17 00:22:19.999768 containerd[2002]: 2026-04-17 00:22:19.900 [INFO][5585] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.122.128/26 host="ip-172-31-17-163" Apr 17 00:22:19.999768 containerd[2002]: 2026-04-17 00:22:19.900 [INFO][5585] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.122.128/26 handle="k8s-pod-network.0f88213eb604b5d8d13d4274563ea1dc34a80d1a61769333fa960d1084e6bab3" host="ip-172-31-17-163" Apr 17 00:22:19.999768 containerd[2002]: 2026-04-17 00:22:19.908 [INFO][5585] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0f88213eb604b5d8d13d4274563ea1dc34a80d1a61769333fa960d1084e6bab3 Apr 17 00:22:19.999768 containerd[2002]: 2026-04-17 00:22:19.923 [INFO][5585] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.122.128/26 handle="k8s-pod-network.0f88213eb604b5d8d13d4274563ea1dc34a80d1a61769333fa960d1084e6bab3" host="ip-172-31-17-163" Apr 17 00:22:19.999768 containerd[2002]: 2026-04-17 00:22:19.941 [INFO][5585] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.122.135/26] 
block=192.168.122.128/26 handle="k8s-pod-network.0f88213eb604b5d8d13d4274563ea1dc34a80d1a61769333fa960d1084e6bab3" host="ip-172-31-17-163" Apr 17 00:22:19.999768 containerd[2002]: 2026-04-17 00:22:19.942 [INFO][5585] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.122.135/26] handle="k8s-pod-network.0f88213eb604b5d8d13d4274563ea1dc34a80d1a61769333fa960d1084e6bab3" host="ip-172-31-17-163" Apr 17 00:22:19.999768 containerd[2002]: 2026-04-17 00:22:19.942 [INFO][5585] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 00:22:19.999768 containerd[2002]: 2026-04-17 00:22:19.942 [INFO][5585] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.122.135/26] IPv6=[] ContainerID="0f88213eb604b5d8d13d4274563ea1dc34a80d1a61769333fa960d1084e6bab3" HandleID="k8s-pod-network.0f88213eb604b5d8d13d4274563ea1dc34a80d1a61769333fa960d1084e6bab3" Workload="ip--172--31--17--163-k8s-coredns--66bc5c9577--krz7c-eth0" Apr 17 00:22:20.002696 containerd[2002]: 2026-04-17 00:22:19.948 [INFO][5557] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0f88213eb604b5d8d13d4274563ea1dc34a80d1a61769333fa960d1084e6bab3" Namespace="kube-system" Pod="coredns-66bc5c9577-krz7c" WorkloadEndpoint="ip--172--31--17--163-k8s-coredns--66bc5c9577--krz7c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--163-k8s-coredns--66bc5c9577--krz7c-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"faf227a2-c41a-476d-ac2e-763e2502ebdb", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 0, 21, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-163", ContainerID:"", Pod:"coredns-66bc5c9577-krz7c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliccddcf93129", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 00:22:20.002696 containerd[2002]: 2026-04-17 00:22:19.949 [INFO][5557] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.135/32] ContainerID="0f88213eb604b5d8d13d4274563ea1dc34a80d1a61769333fa960d1084e6bab3" Namespace="kube-system" Pod="coredns-66bc5c9577-krz7c" WorkloadEndpoint="ip--172--31--17--163-k8s-coredns--66bc5c9577--krz7c-eth0" Apr 17 00:22:20.002696 containerd[2002]: 2026-04-17 00:22:19.949 [INFO][5557] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliccddcf93129 ContainerID="0f88213eb604b5d8d13d4274563ea1dc34a80d1a61769333fa960d1084e6bab3" Namespace="kube-system" 
Pod="coredns-66bc5c9577-krz7c" WorkloadEndpoint="ip--172--31--17--163-k8s-coredns--66bc5c9577--krz7c-eth0" Apr 17 00:22:20.002696 containerd[2002]: 2026-04-17 00:22:19.959 [INFO][5557] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0f88213eb604b5d8d13d4274563ea1dc34a80d1a61769333fa960d1084e6bab3" Namespace="kube-system" Pod="coredns-66bc5c9577-krz7c" WorkloadEndpoint="ip--172--31--17--163-k8s-coredns--66bc5c9577--krz7c-eth0" Apr 17 00:22:20.002696 containerd[2002]: 2026-04-17 00:22:19.960 [INFO][5557] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0f88213eb604b5d8d13d4274563ea1dc34a80d1a61769333fa960d1084e6bab3" Namespace="kube-system" Pod="coredns-66bc5c9577-krz7c" WorkloadEndpoint="ip--172--31--17--163-k8s-coredns--66bc5c9577--krz7c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--163-k8s-coredns--66bc5c9577--krz7c-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"faf227a2-c41a-476d-ac2e-763e2502ebdb", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 0, 21, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-163", ContainerID:"0f88213eb604b5d8d13d4274563ea1dc34a80d1a61769333fa960d1084e6bab3", Pod:"coredns-66bc5c9577-krz7c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.135/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliccddcf93129", MAC:"6a:1e:cf:1e:eb:cc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 00:22:20.002696 containerd[2002]: 2026-04-17 00:22:19.995 [INFO][5557] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0f88213eb604b5d8d13d4274563ea1dc34a80d1a61769333fa960d1084e6bab3" Namespace="kube-system" Pod="coredns-66bc5c9577-krz7c" WorkloadEndpoint="ip--172--31--17--163-k8s-coredns--66bc5c9577--krz7c-eth0" Apr 17 00:22:20.121513 systemd-networkd[1780]: calib68cf9680bc: Link UP Apr 17 00:22:20.128445 systemd-networkd[1780]: calib68cf9680bc: Gained carrier Apr 17 00:22:20.179811 containerd[2002]: time="2026-04-17T00:22:20.177876012Z" level=info msg="connecting to shim 0f88213eb604b5d8d13d4274563ea1dc34a80d1a61769333fa960d1084e6bab3" address="unix:///run/containerd/s/4c0839527e76bcc3fb13f15a82efdb37e4e19a578d56da00b2b601a6382b4341" namespace=k8s.io protocol=ttrpc version=3 Apr 17 00:22:20.186687 containerd[2002]: 2026-04-17 00:22:19.785 [INFO][5570] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {ip--172--31--17--163-k8s-calico--apiserver--878d7484f--md97m-eth0 calico-apiserver-878d7484f- calico-system 2f68e2cd-6388-416b-9cb5-5cf309947192 883 0 2026-04-17 00:21:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:878d7484f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-17-163 calico-apiserver-878d7484f-md97m eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calib68cf9680bc [] [] }} ContainerID="3bee781eb479dd136cfeca19a48270ff5c8ca1c8ae91c7565dad9ca952fb1c41" Namespace="calico-system" Pod="calico-apiserver-878d7484f-md97m" WorkloadEndpoint="ip--172--31--17--163-k8s-calico--apiserver--878d7484f--md97m-" Apr 17 00:22:20.186687 containerd[2002]: 2026-04-17 00:22:19.785 [INFO][5570] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3bee781eb479dd136cfeca19a48270ff5c8ca1c8ae91c7565dad9ca952fb1c41" Namespace="calico-system" Pod="calico-apiserver-878d7484f-md97m" WorkloadEndpoint="ip--172--31--17--163-k8s-calico--apiserver--878d7484f--md97m-eth0" Apr 17 00:22:20.186687 containerd[2002]: 2026-04-17 00:22:19.865 [INFO][5595] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3bee781eb479dd136cfeca19a48270ff5c8ca1c8ae91c7565dad9ca952fb1c41" HandleID="k8s-pod-network.3bee781eb479dd136cfeca19a48270ff5c8ca1c8ae91c7565dad9ca952fb1c41" Workload="ip--172--31--17--163-k8s-calico--apiserver--878d7484f--md97m-eth0" Apr 17 00:22:20.186687 containerd[2002]: 2026-04-17 00:22:19.889 [INFO][5595] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3bee781eb479dd136cfeca19a48270ff5c8ca1c8ae91c7565dad9ca952fb1c41" HandleID="k8s-pod-network.3bee781eb479dd136cfeca19a48270ff5c8ca1c8ae91c7565dad9ca952fb1c41" Workload="ip--172--31--17--163-k8s-calico--apiserver--878d7484f--md97m-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fbdb0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-163", "pod":"calico-apiserver-878d7484f-md97m", "timestamp":"2026-04-17 00:22:19.865498614 +0000 UTC"}, Hostname:"ip-172-31-17-163", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004626e0)} Apr 17 00:22:20.186687 containerd[2002]: 2026-04-17 00:22:19.889 [INFO][5595] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 00:22:20.186687 containerd[2002]: 2026-04-17 00:22:19.943 [INFO][5595] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 00:22:20.186687 containerd[2002]: 2026-04-17 00:22:19.943 [INFO][5595] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-163' Apr 17 00:22:20.186687 containerd[2002]: 2026-04-17 00:22:19.953 [INFO][5595] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3bee781eb479dd136cfeca19a48270ff5c8ca1c8ae91c7565dad9ca952fb1c41" host="ip-172-31-17-163" Apr 17 00:22:20.186687 containerd[2002]: 2026-04-17 00:22:20.011 [INFO][5595] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-17-163" Apr 17 00:22:20.186687 containerd[2002]: 2026-04-17 00:22:20.038 [INFO][5595] ipam/ipam.go 526: Trying affinity for 192.168.122.128/26 host="ip-172-31-17-163" Apr 17 00:22:20.186687 containerd[2002]: 2026-04-17 00:22:20.047 [INFO][5595] ipam/ipam.go 160: Attempting to load block cidr=192.168.122.128/26 host="ip-172-31-17-163" Apr 17 00:22:20.186687 containerd[2002]: 2026-04-17 00:22:20.054 [INFO][5595] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.122.128/26 host="ip-172-31-17-163" Apr 17 00:22:20.186687 containerd[2002]: 2026-04-17 00:22:20.054 [INFO][5595] ipam/ipam.go 1245: Attempting 
to assign 1 addresses from block block=192.168.122.128/26 handle="k8s-pod-network.3bee781eb479dd136cfeca19a48270ff5c8ca1c8ae91c7565dad9ca952fb1c41" host="ip-172-31-17-163" Apr 17 00:22:20.186687 containerd[2002]: 2026-04-17 00:22:20.059 [INFO][5595] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3bee781eb479dd136cfeca19a48270ff5c8ca1c8ae91c7565dad9ca952fb1c41 Apr 17 00:22:20.186687 containerd[2002]: 2026-04-17 00:22:20.068 [INFO][5595] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.122.128/26 handle="k8s-pod-network.3bee781eb479dd136cfeca19a48270ff5c8ca1c8ae91c7565dad9ca952fb1c41" host="ip-172-31-17-163" Apr 17 00:22:20.186687 containerd[2002]: 2026-04-17 00:22:20.089 [INFO][5595] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.122.136/26] block=192.168.122.128/26 handle="k8s-pod-network.3bee781eb479dd136cfeca19a48270ff5c8ca1c8ae91c7565dad9ca952fb1c41" host="ip-172-31-17-163" Apr 17 00:22:20.186687 containerd[2002]: 2026-04-17 00:22:20.089 [INFO][5595] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.122.136/26] handle="k8s-pod-network.3bee781eb479dd136cfeca19a48270ff5c8ca1c8ae91c7565dad9ca952fb1c41" host="ip-172-31-17-163" Apr 17 00:22:20.186687 containerd[2002]: 2026-04-17 00:22:20.089 [INFO][5595] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 00:22:20.186687 containerd[2002]: 2026-04-17 00:22:20.089 [INFO][5595] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.122.136/26] IPv6=[] ContainerID="3bee781eb479dd136cfeca19a48270ff5c8ca1c8ae91c7565dad9ca952fb1c41" HandleID="k8s-pod-network.3bee781eb479dd136cfeca19a48270ff5c8ca1c8ae91c7565dad9ca952fb1c41" Workload="ip--172--31--17--163-k8s-calico--apiserver--878d7484f--md97m-eth0" Apr 17 00:22:20.189323 containerd[2002]: 2026-04-17 00:22:20.110 [INFO][5570] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3bee781eb479dd136cfeca19a48270ff5c8ca1c8ae91c7565dad9ca952fb1c41" Namespace="calico-system" Pod="calico-apiserver-878d7484f-md97m" WorkloadEndpoint="ip--172--31--17--163-k8s-calico--apiserver--878d7484f--md97m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--163-k8s-calico--apiserver--878d7484f--md97m-eth0", GenerateName:"calico-apiserver-878d7484f-", Namespace:"calico-system", SelfLink:"", UID:"2f68e2cd-6388-416b-9cb5-5cf309947192", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 0, 21, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"878d7484f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-163", ContainerID:"", Pod:"calico-apiserver-878d7484f-md97m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.136/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib68cf9680bc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 00:22:20.189323 containerd[2002]: 2026-04-17 00:22:20.115 [INFO][5570] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.136/32] ContainerID="3bee781eb479dd136cfeca19a48270ff5c8ca1c8ae91c7565dad9ca952fb1c41" Namespace="calico-system" Pod="calico-apiserver-878d7484f-md97m" WorkloadEndpoint="ip--172--31--17--163-k8s-calico--apiserver--878d7484f--md97m-eth0" Apr 17 00:22:20.189323 containerd[2002]: 2026-04-17 00:22:20.117 [INFO][5570] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib68cf9680bc ContainerID="3bee781eb479dd136cfeca19a48270ff5c8ca1c8ae91c7565dad9ca952fb1c41" Namespace="calico-system" Pod="calico-apiserver-878d7484f-md97m" WorkloadEndpoint="ip--172--31--17--163-k8s-calico--apiserver--878d7484f--md97m-eth0" Apr 17 00:22:20.189323 containerd[2002]: 2026-04-17 00:22:20.119 [INFO][5570] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3bee781eb479dd136cfeca19a48270ff5c8ca1c8ae91c7565dad9ca952fb1c41" Namespace="calico-system" Pod="calico-apiserver-878d7484f-md97m" WorkloadEndpoint="ip--172--31--17--163-k8s-calico--apiserver--878d7484f--md97m-eth0" Apr 17 00:22:20.189323 containerd[2002]: 2026-04-17 00:22:20.134 [INFO][5570] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3bee781eb479dd136cfeca19a48270ff5c8ca1c8ae91c7565dad9ca952fb1c41" Namespace="calico-system" Pod="calico-apiserver-878d7484f-md97m" WorkloadEndpoint="ip--172--31--17--163-k8s-calico--apiserver--878d7484f--md97m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--163-k8s-calico--apiserver--878d7484f--md97m-eth0", GenerateName:"calico-apiserver-878d7484f-", Namespace:"calico-system", SelfLink:"", UID:"2f68e2cd-6388-416b-9cb5-5cf309947192", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 0, 21, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"878d7484f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-163", ContainerID:"3bee781eb479dd136cfeca19a48270ff5c8ca1c8ae91c7565dad9ca952fb1c41", Pod:"calico-apiserver-878d7484f-md97m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib68cf9680bc", MAC:"62:95:35:fc:33:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 00:22:20.189323 containerd[2002]: 2026-04-17 00:22:20.161 [INFO][5570] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3bee781eb479dd136cfeca19a48270ff5c8ca1c8ae91c7565dad9ca952fb1c41" Namespace="calico-system" Pod="calico-apiserver-878d7484f-md97m" WorkloadEndpoint="ip--172--31--17--163-k8s-calico--apiserver--878d7484f--md97m-eth0" Apr 17 00:22:20.311361 containerd[2002]: time="2026-04-17T00:22:20.308702767Z" level=info msg="connecting to shim 
3bee781eb479dd136cfeca19a48270ff5c8ca1c8ae91c7565dad9ca952fb1c41" address="unix:///run/containerd/s/1d196768f4a320a6ac2d445e0fe478f5e89414ef65c3574098e7acdac477e459" namespace=k8s.io protocol=ttrpc version=3 Apr 17 00:22:20.339211 systemd-networkd[1780]: calid9892f0336d: Gained IPv6LL Apr 17 00:22:20.397180 systemd[1]: Started cri-containerd-0f88213eb604b5d8d13d4274563ea1dc34a80d1a61769333fa960d1084e6bab3.scope - libcontainer container 0f88213eb604b5d8d13d4274563ea1dc34a80d1a61769333fa960d1084e6bab3. Apr 17 00:22:20.444217 systemd[1]: Started cri-containerd-3bee781eb479dd136cfeca19a48270ff5c8ca1c8ae91c7565dad9ca952fb1c41.scope - libcontainer container 3bee781eb479dd136cfeca19a48270ff5c8ca1c8ae91c7565dad9ca952fb1c41. Apr 17 00:22:20.561115 containerd[2002]: time="2026-04-17T00:22:20.561072444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-krz7c,Uid:faf227a2-c41a-476d-ac2e-763e2502ebdb,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f88213eb604b5d8d13d4274563ea1dc34a80d1a61769333fa960d1084e6bab3\"" Apr 17 00:22:20.628468 containerd[2002]: time="2026-04-17T00:22:20.628322127Z" level=info msg="CreateContainer within sandbox \"0f88213eb604b5d8d13d4274563ea1dc34a80d1a61769333fa960d1084e6bab3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 00:22:20.652958 containerd[2002]: time="2026-04-17T00:22:20.652275520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-878d7484f-md97m,Uid:2f68e2cd-6388-416b-9cb5-5cf309947192,Namespace:calico-system,Attempt:0,} returns sandbox id \"3bee781eb479dd136cfeca19a48270ff5c8ca1c8ae91c7565dad9ca952fb1c41\"" Apr 17 00:22:20.675391 containerd[2002]: time="2026-04-17T00:22:20.675157663Z" level=info msg="Container be93357b67bbe8d637d5319f44f4a2cb292f7fa915e25ce5a1394569cc2e7913: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:22:20.691245 containerd[2002]: time="2026-04-17T00:22:20.690615426Z" level=info msg="CreateContainer within sandbox 
\"0f88213eb604b5d8d13d4274563ea1dc34a80d1a61769333fa960d1084e6bab3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"be93357b67bbe8d637d5319f44f4a2cb292f7fa915e25ce5a1394569cc2e7913\"" Apr 17 00:22:20.695034 containerd[2002]: time="2026-04-17T00:22:20.691497342Z" level=info msg="StartContainer for \"be93357b67bbe8d637d5319f44f4a2cb292f7fa915e25ce5a1394569cc2e7913\"" Apr 17 00:22:20.695034 containerd[2002]: time="2026-04-17T00:22:20.692623190Z" level=info msg="connecting to shim be93357b67bbe8d637d5319f44f4a2cb292f7fa915e25ce5a1394569cc2e7913" address="unix:///run/containerd/s/4c0839527e76bcc3fb13f15a82efdb37e4e19a578d56da00b2b601a6382b4341" protocol=ttrpc version=3 Apr 17 00:22:20.735197 systemd[1]: Started cri-containerd-be93357b67bbe8d637d5319f44f4a2cb292f7fa915e25ce5a1394569cc2e7913.scope - libcontainer container be93357b67bbe8d637d5319f44f4a2cb292f7fa915e25ce5a1394569cc2e7913. Apr 17 00:22:20.832833 containerd[2002]: time="2026-04-17T00:22:20.832789427Z" level=info msg="StartContainer for \"be93357b67bbe8d637d5319f44f4a2cb292f7fa915e25ce5a1394569cc2e7913\" returns successfully" Apr 17 00:22:21.428047 systemd-networkd[1780]: calib68cf9680bc: Gained IPv6LL Apr 17 00:22:21.456695 sshd[5590]: Connection closed by 50.85.169.122 port 52586 Apr 17 00:22:21.458965 sshd-session[5470]: pam_unix(sshd:session): session closed for user core Apr 17 00:22:21.470670 systemd-logind[1959]: Session 8 logged out. Waiting for processes to exit. Apr 17 00:22:21.471711 systemd[1]: sshd@7-172.31.17.163:22-50.85.169.122:52586.service: Deactivated successfully. Apr 17 00:22:21.474522 systemd[1]: session-8.scope: Deactivated successfully. Apr 17 00:22:21.476518 systemd-logind[1959]: Removed session 8. 
Apr 17 00:22:21.485502 containerd[2002]: time="2026-04-17T00:22:21.485451193Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:22:21.488611 containerd[2002]: time="2026-04-17T00:22:21.488567947Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 17 00:22:21.489840 containerd[2002]: time="2026-04-17T00:22:21.489803145Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:22:21.493369 containerd[2002]: time="2026-04-17T00:22:21.493304371Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:22:21.494643 containerd[2002]: time="2026-04-17T00:22:21.494126598Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 4.498476514s" Apr 17 00:22:21.494643 containerd[2002]: time="2026-04-17T00:22:21.494165495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 17 00:22:21.495565 containerd[2002]: time="2026-04-17T00:22:21.495421824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 17 00:22:21.561973 kubelet[3317]: I0417 00:22:21.561906 3317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/coredns-66bc5c9577-krz7c" podStartSLOduration=59.561882312 podStartE2EDuration="59.561882312s" podCreationTimestamp="2026-04-17 00:21:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 00:22:21.543986106 +0000 UTC m=+65.176672949" watchObservedRunningTime="2026-04-17 00:22:21.561882312 +0000 UTC m=+65.194569155" Apr 17 00:22:21.586816 containerd[2002]: time="2026-04-17T00:22:21.586753771Z" level=info msg="CreateContainer within sandbox \"4829a1ad5a9afe1c05afde46d31655f05f4f0777d8c5b92a05ba28e049e51f6f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 17 00:22:21.614848 containerd[2002]: time="2026-04-17T00:22:21.613971297Z" level=info msg="Container 8e5929266defcf40d8c18d10504d096c125ae55f94dff28382875fe865b107b2: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:22:21.642402 containerd[2002]: time="2026-04-17T00:22:21.642351414Z" level=info msg="CreateContainer within sandbox \"4829a1ad5a9afe1c05afde46d31655f05f4f0777d8c5b92a05ba28e049e51f6f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8e5929266defcf40d8c18d10504d096c125ae55f94dff28382875fe865b107b2\"" Apr 17 00:22:21.643993 containerd[2002]: time="2026-04-17T00:22:21.643959612Z" level=info msg="StartContainer for \"8e5929266defcf40d8c18d10504d096c125ae55f94dff28382875fe865b107b2\"" Apr 17 00:22:21.647233 containerd[2002]: time="2026-04-17T00:22:21.647200733Z" level=info msg="connecting to shim 8e5929266defcf40d8c18d10504d096c125ae55f94dff28382875fe865b107b2" address="unix:///run/containerd/s/243b2b66c032d0ee2c7a44dfa7ebe9ccb82f1966175739e686b05a7c6f6c16c2" protocol=ttrpc version=3 Apr 17 00:22:21.686953 systemd[1]: Started cri-containerd-8e5929266defcf40d8c18d10504d096c125ae55f94dff28382875fe865b107b2.scope - libcontainer container 8e5929266defcf40d8c18d10504d096c125ae55f94dff28382875fe865b107b2. 
Apr 17 00:22:21.747626 systemd-networkd[1780]: caliccddcf93129: Gained IPv6LL Apr 17 00:22:21.764552 containerd[2002]: time="2026-04-17T00:22:21.764500341Z" level=info msg="StartContainer for \"8e5929266defcf40d8c18d10504d096c125ae55f94dff28382875fe865b107b2\" returns successfully" Apr 17 00:22:23.724997 kubelet[3317]: I0417 00:22:23.724914 3317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7cdb595876-j56j2" podStartSLOduration=40.224995973 podStartE2EDuration="44.724865692s" podCreationTimestamp="2026-04-17 00:21:39 +0000 UTC" firstStartedPulling="2026-04-17 00:22:16.995274977 +0000 UTC m=+60.627961811" lastFinishedPulling="2026-04-17 00:22:21.495144693 +0000 UTC m=+65.127831530" observedRunningTime="2026-04-17 00:22:22.602235119 +0000 UTC m=+66.234921964" watchObservedRunningTime="2026-04-17 00:22:23.724865692 +0000 UTC m=+67.357552550" Apr 17 00:22:24.407888 ntpd[2136]: Listen normally on 9 calib3d96c58243 [fe80::ecee:eeff:feee:eeee%8]:123 Apr 17 00:22:24.409167 ntpd[2136]: 17 Apr 00:22:24 ntpd[2136]: Listen normally on 9 calib3d96c58243 [fe80::ecee:eeff:feee:eeee%8]:123 Apr 17 00:22:24.409167 ntpd[2136]: 17 Apr 00:22:24 ntpd[2136]: Listen normally on 10 cali716abefe93e [fe80::ecee:eeff:feee:eeee%9]:123 Apr 17 00:22:24.409167 ntpd[2136]: 17 Apr 00:22:24 ntpd[2136]: Listen normally on 11 calia2b05eed4c8 [fe80::ecee:eeff:feee:eeee%10]:123 Apr 17 00:22:24.409167 ntpd[2136]: 17 Apr 00:22:24 ntpd[2136]: Listen normally on 12 calic9897323c4f [fe80::ecee:eeff:feee:eeee%11]:123 Apr 17 00:22:24.409167 ntpd[2136]: 17 Apr 00:22:24 ntpd[2136]: Listen normally on 13 calid9892f0336d [fe80::ecee:eeff:feee:eeee%12]:123 Apr 17 00:22:24.409167 ntpd[2136]: 17 Apr 00:22:24 ntpd[2136]: Listen normally on 14 caliccddcf93129 [fe80::ecee:eeff:feee:eeee%13]:123 Apr 17 00:22:24.409167 ntpd[2136]: 17 Apr 00:22:24 ntpd[2136]: Listen normally on 15 calib68cf9680bc [fe80::ecee:eeff:feee:eeee%14]:123 Apr 17 00:22:24.407968 
ntpd[2136]: Listen normally on 10 cali716abefe93e [fe80::ecee:eeff:feee:eeee%9]:123 Apr 17 00:22:24.407995 ntpd[2136]: Listen normally on 11 calia2b05eed4c8 [fe80::ecee:eeff:feee:eeee%10]:123 Apr 17 00:22:24.408499 ntpd[2136]: Listen normally on 12 calic9897323c4f [fe80::ecee:eeff:feee:eeee%11]:123 Apr 17 00:22:24.408528 ntpd[2136]: Listen normally on 13 calid9892f0336d [fe80::ecee:eeff:feee:eeee%12]:123 Apr 17 00:22:24.408553 ntpd[2136]: Listen normally on 14 caliccddcf93129 [fe80::ecee:eeff:feee:eeee%13]:123 Apr 17 00:22:24.408584 ntpd[2136]: Listen normally on 15 calib68cf9680bc [fe80::ecee:eeff:feee:eeee%14]:123 Apr 17 00:22:26.100318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3008624694.mount: Deactivated successfully. Apr 17 00:22:26.639826 systemd[1]: Started sshd@8-172.31.17.163:22-50.85.169.122:46752.service - OpenSSH per-connection server daemon (50.85.169.122:46752). Apr 17 00:22:26.970094 containerd[2002]: time="2026-04-17T00:22:26.970036743Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:22:26.971304 containerd[2002]: time="2026-04-17T00:22:26.971251721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 17 00:22:26.973887 containerd[2002]: time="2026-04-17T00:22:26.973643601Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:22:26.983648 containerd[2002]: time="2026-04-17T00:22:26.983593158Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:22:26.984618 containerd[2002]: time="2026-04-17T00:22:26.984259583Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 5.488578266s" Apr 17 00:22:26.984618 containerd[2002]: time="2026-04-17T00:22:26.984296537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 17 00:22:27.023255 containerd[2002]: time="2026-04-17T00:22:27.022990105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 17 00:22:27.080097 containerd[2002]: time="2026-04-17T00:22:27.080054061Z" level=info msg="CreateContainer within sandbox \"e05ccaf8aa1dd6df65680f8788bdc6b79296896c81fdfc04706846c0a53aedc0\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 17 00:22:27.105955 containerd[2002]: time="2026-04-17T00:22:27.103806653Z" level=info msg="Container 2886f9e29951bd177bc7dd700f51eea593e12bc0b328e82fb9dd480d5d29063b: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:22:27.109359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3441984552.mount: Deactivated successfully. 
Apr 17 00:22:27.138280 containerd[2002]: time="2026-04-17T00:22:27.138104966Z" level=info msg="CreateContainer within sandbox \"e05ccaf8aa1dd6df65680f8788bdc6b79296896c81fdfc04706846c0a53aedc0\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"2886f9e29951bd177bc7dd700f51eea593e12bc0b328e82fb9dd480d5d29063b\"" Apr 17 00:22:27.152298 containerd[2002]: time="2026-04-17T00:22:27.152265927Z" level=info msg="StartContainer for \"2886f9e29951bd177bc7dd700f51eea593e12bc0b328e82fb9dd480d5d29063b\"" Apr 17 00:22:27.156745 containerd[2002]: time="2026-04-17T00:22:27.156675213Z" level=info msg="connecting to shim 2886f9e29951bd177bc7dd700f51eea593e12bc0b328e82fb9dd480d5d29063b" address="unix:///run/containerd/s/c10378c2b363fd7e36a764dfd8aa1ce133f7780d2dc39f6384b9be1440fc6c24" protocol=ttrpc version=3 Apr 17 00:22:27.303243 systemd[1]: Started cri-containerd-2886f9e29951bd177bc7dd700f51eea593e12bc0b328e82fb9dd480d5d29063b.scope - libcontainer container 2886f9e29951bd177bc7dd700f51eea593e12bc0b328e82fb9dd480d5d29063b. 
Apr 17 00:22:27.512075 containerd[2002]: time="2026-04-17T00:22:27.511867289Z" level=info msg="StartContainer for \"2886f9e29951bd177bc7dd700f51eea593e12bc0b328e82fb9dd480d5d29063b\" returns successfully" Apr 17 00:22:27.642074 kubelet[3317]: I0417 00:22:27.641918 3317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-8djnz" podStartSLOduration=40.660851222 podStartE2EDuration="49.641891702s" podCreationTimestamp="2026-04-17 00:21:38 +0000 UTC" firstStartedPulling="2026-04-17 00:22:18.030967852 +0000 UTC m=+61.663654673" lastFinishedPulling="2026-04-17 00:22:27.012008333 +0000 UTC m=+70.644695153" observedRunningTime="2026-04-17 00:22:27.641684084 +0000 UTC m=+71.274370927" watchObservedRunningTime="2026-04-17 00:22:27.641891702 +0000 UTC m=+71.274578540" Apr 17 00:22:27.689758 sshd[5871]: Accepted publickey for core from 50.85.169.122 port 46752 ssh2: RSA SHA256:Wn1bWdRXva+ZTDpuZ5i38vIIX/QMobuurL9Av6c2ILM Apr 17 00:22:27.692272 sshd-session[5871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 00:22:27.703346 systemd-logind[1959]: New session 9 of user core. Apr 17 00:22:27.709138 systemd[1]: Started session-9.scope - Session 9 of User core. 
Apr 17 00:22:29.232568 containerd[2002]: time="2026-04-17T00:22:29.230814850Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:22:29.234554 containerd[2002]: time="2026-04-17T00:22:29.234500602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 17 00:22:29.236494 containerd[2002]: time="2026-04-17T00:22:29.236453967Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:22:29.241424 containerd[2002]: time="2026-04-17T00:22:29.241377711Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:22:29.250994 containerd[2002]: time="2026-04-17T00:22:29.250775910Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 2.227539595s" Apr 17 00:22:29.250994 containerd[2002]: time="2026-04-17T00:22:29.250823060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 17 00:22:29.254778 containerd[2002]: time="2026-04-17T00:22:29.254257974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 00:22:29.298018 containerd[2002]: time="2026-04-17T00:22:29.297908915Z" level=info msg="CreateContainer within sandbox \"669910a5449f71f8dafae17dfa7045ad962041f7980134b311df131dd6c1d9d2\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 17 00:22:29.359568 containerd[2002]: time="2026-04-17T00:22:29.358180658Z" level=info msg="Container f7b192df03672476dac1dde0d20c2278e9826ce241af3b8cc0500a225c8d62de: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:22:29.409496 containerd[2002]: time="2026-04-17T00:22:29.409075478Z" level=info msg="CreateContainer within sandbox \"669910a5449f71f8dafae17dfa7045ad962041f7980134b311df131dd6c1d9d2\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f7b192df03672476dac1dde0d20c2278e9826ce241af3b8cc0500a225c8d62de\"" Apr 17 00:22:29.423215 containerd[2002]: time="2026-04-17T00:22:29.423137984Z" level=info msg="StartContainer for \"f7b192df03672476dac1dde0d20c2278e9826ce241af3b8cc0500a225c8d62de\"" Apr 17 00:22:29.425404 containerd[2002]: time="2026-04-17T00:22:29.425129454Z" level=info msg="connecting to shim f7b192df03672476dac1dde0d20c2278e9826ce241af3b8cc0500a225c8d62de" address="unix:///run/containerd/s/0b6b83aee491faa1b4194fb149bf8e0ff8152aacd58863a04d4e9759cd60ae60" protocol=ttrpc version=3 Apr 17 00:22:29.462060 systemd[1]: Started cri-containerd-f7b192df03672476dac1dde0d20c2278e9826ce241af3b8cc0500a225c8d62de.scope - libcontainer container f7b192df03672476dac1dde0d20c2278e9826ce241af3b8cc0500a225c8d62de. Apr 17 00:22:29.584980 sshd[5944]: Connection closed by 50.85.169.122 port 46752 Apr 17 00:22:29.587376 sshd-session[5871]: pam_unix(sshd:session): session closed for user core Apr 17 00:22:29.599641 systemd[1]: sshd@8-172.31.17.163:22-50.85.169.122:46752.service: Deactivated successfully. Apr 17 00:22:29.605363 systemd[1]: session-9.scope: Deactivated successfully. Apr 17 00:22:29.612943 systemd-logind[1959]: Session 9 logged out. Waiting for processes to exit. Apr 17 00:22:29.616685 systemd-logind[1959]: Removed session 9. 
Apr 17 00:22:29.618858 containerd[2002]: time="2026-04-17T00:22:29.618792406Z" level=info msg="StartContainer for \"f7b192df03672476dac1dde0d20c2278e9826ce241af3b8cc0500a225c8d62de\" returns successfully" Apr 17 00:22:31.881410 containerd[2002]: time="2026-04-17T00:22:31.881362267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:22:31.883127 containerd[2002]: time="2026-04-17T00:22:31.883088955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 17 00:22:31.885815 containerd[2002]: time="2026-04-17T00:22:31.885426436Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:22:31.890185 containerd[2002]: time="2026-04-17T00:22:31.889148890Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:22:31.893602 containerd[2002]: time="2026-04-17T00:22:31.891152958Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 2.636856418s" Apr 17 00:22:31.893602 containerd[2002]: time="2026-04-17T00:22:31.891190512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 17 00:22:31.893602 containerd[2002]: time="2026-04-17T00:22:31.893176200Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 00:22:31.900563 containerd[2002]: time="2026-04-17T00:22:31.899743085Z" level=info msg="CreateContainer within sandbox \"87179a3d9ba67b38fdfeadc043393497a3841ae31864861629d24f646bbb00f3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 00:22:31.908810 containerd[2002]: time="2026-04-17T00:22:31.908626846Z" level=info msg="Container fdaa2d3c61ba495122c592cd1a9e21563bd1843c873c45f2689f006e79295cef: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:22:31.939066 containerd[2002]: time="2026-04-17T00:22:31.939012354Z" level=info msg="CreateContainer within sandbox \"87179a3d9ba67b38fdfeadc043393497a3841ae31864861629d24f646bbb00f3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fdaa2d3c61ba495122c592cd1a9e21563bd1843c873c45f2689f006e79295cef\"" Apr 17 00:22:31.940069 containerd[2002]: time="2026-04-17T00:22:31.940038735Z" level=info msg="StartContainer for \"fdaa2d3c61ba495122c592cd1a9e21563bd1843c873c45f2689f006e79295cef\"" Apr 17 00:22:31.942653 containerd[2002]: time="2026-04-17T00:22:31.942156899Z" level=info msg="connecting to shim fdaa2d3c61ba495122c592cd1a9e21563bd1843c873c45f2689f006e79295cef" address="unix:///run/containerd/s/17e860799abec9471801df772c3972d246a5ca3f3d490d650c091e85457a88df" protocol=ttrpc version=3 Apr 17 00:22:31.974141 systemd[1]: Started cri-containerd-fdaa2d3c61ba495122c592cd1a9e21563bd1843c873c45f2689f006e79295cef.scope - libcontainer container fdaa2d3c61ba495122c592cd1a9e21563bd1843c873c45f2689f006e79295cef. 
Apr 17 00:22:32.043070 containerd[2002]: time="2026-04-17T00:22:32.043025974Z" level=info msg="StartContainer for \"fdaa2d3c61ba495122c592cd1a9e21563bd1843c873c45f2689f006e79295cef\" returns successfully" Apr 17 00:22:32.259029 containerd[2002]: time="2026-04-17T00:22:32.258975451Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:22:32.261791 containerd[2002]: time="2026-04-17T00:22:32.261749779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 17 00:22:32.264735 containerd[2002]: time="2026-04-17T00:22:32.264622598Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 371.413155ms" Apr 17 00:22:32.264735 containerd[2002]: time="2026-04-17T00:22:32.264675638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 17 00:22:32.267279 containerd[2002]: time="2026-04-17T00:22:32.266220670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 17 00:22:32.273467 containerd[2002]: time="2026-04-17T00:22:32.273432906Z" level=info msg="CreateContainer within sandbox \"3bee781eb479dd136cfeca19a48270ff5c8ca1c8ae91c7565dad9ca952fb1c41\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 00:22:32.296745 containerd[2002]: time="2026-04-17T00:22:32.293849718Z" level=info msg="Container a26ede869a09cedd72aa3d885ca1fd2742b24de51bcb1bdbc50619ba9d1700da: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:22:32.304639 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4286127411.mount: Deactivated successfully. Apr 17 00:22:32.320481 containerd[2002]: time="2026-04-17T00:22:32.320422965Z" level=info msg="CreateContainer within sandbox \"3bee781eb479dd136cfeca19a48270ff5c8ca1c8ae91c7565dad9ca952fb1c41\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a26ede869a09cedd72aa3d885ca1fd2742b24de51bcb1bdbc50619ba9d1700da\"" Apr 17 00:22:32.341520 containerd[2002]: time="2026-04-17T00:22:32.341463291Z" level=info msg="StartContainer for \"a26ede869a09cedd72aa3d885ca1fd2742b24de51bcb1bdbc50619ba9d1700da\"" Apr 17 00:22:32.349931 containerd[2002]: time="2026-04-17T00:22:32.349854761Z" level=info msg="connecting to shim a26ede869a09cedd72aa3d885ca1fd2742b24de51bcb1bdbc50619ba9d1700da" address="unix:///run/containerd/s/1d196768f4a320a6ac2d445e0fe478f5e89414ef65c3574098e7acdac477e459" protocol=ttrpc version=3 Apr 17 00:22:32.387954 systemd[1]: Started cri-containerd-a26ede869a09cedd72aa3d885ca1fd2742b24de51bcb1bdbc50619ba9d1700da.scope - libcontainer container a26ede869a09cedd72aa3d885ca1fd2742b24de51bcb1bdbc50619ba9d1700da. 
Apr 17 00:22:32.494949 containerd[2002]: time="2026-04-17T00:22:32.494779314Z" level=info msg="StartContainer for \"a26ede869a09cedd72aa3d885ca1fd2742b24de51bcb1bdbc50619ba9d1700da\" returns successfully" Apr 17 00:22:33.023369 kubelet[3317]: I0417 00:22:33.023288 3317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-878d7484f-ngljs" podStartSLOduration=42.286173999 podStartE2EDuration="55.023265045s" podCreationTimestamp="2026-04-17 00:21:38 +0000 UTC" firstStartedPulling="2026-04-17 00:22:19.154909893 +0000 UTC m=+62.787596722" lastFinishedPulling="2026-04-17 00:22:31.892000946 +0000 UTC m=+75.524687768" observedRunningTime="2026-04-17 00:22:33.019011242 +0000 UTC m=+76.651698089" watchObservedRunningTime="2026-04-17 00:22:33.023265045 +0000 UTC m=+76.655951889" Apr 17 00:22:34.258827 kubelet[3317]: I0417 00:22:34.258405 3317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-878d7484f-md97m" podStartSLOduration=44.648021737 podStartE2EDuration="56.25838294s" podCreationTimestamp="2026-04-17 00:21:38 +0000 UTC" firstStartedPulling="2026-04-17 00:22:20.655570201 +0000 UTC m=+64.288257031" lastFinishedPulling="2026-04-17 00:22:32.265931408 +0000 UTC m=+75.898618234" observedRunningTime="2026-04-17 00:22:33.090247991 +0000 UTC m=+76.722934836" watchObservedRunningTime="2026-04-17 00:22:34.25838294 +0000 UTC m=+77.891069783" Apr 17 00:22:34.873187 systemd[1]: Started sshd@9-172.31.17.163:22-50.85.169.122:44928.service - OpenSSH per-connection server daemon (50.85.169.122:44928). 
Apr 17 00:22:35.312960 containerd[2002]: time="2026-04-17T00:22:35.312907433Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:22:35.351222 containerd[2002]: time="2026-04-17T00:22:35.351071169Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 17 00:22:35.371084 containerd[2002]: time="2026-04-17T00:22:35.370917242Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:22:35.378795 containerd[2002]: time="2026-04-17T00:22:35.378743236Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:22:35.384657 containerd[2002]: time="2026-04-17T00:22:35.384610658Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 3.117374211s" Apr 17 00:22:35.384657 containerd[2002]: time="2026-04-17T00:22:35.384652425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 17 00:22:35.398961 containerd[2002]: time="2026-04-17T00:22:35.398897096Z" level=info msg="CreateContainer within sandbox \"669910a5449f71f8dafae17dfa7045ad962041f7980134b311df131dd6c1d9d2\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 17 00:22:35.450854 containerd[2002]: time="2026-04-17T00:22:35.443111435Z" level=info msg="Container 1718023eb7296bc80406a33c9bc7ad9ff9d21c74726bb89861a28081469342cb: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:22:35.466612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount116678721.mount: Deactivated successfully. Apr 17 00:22:35.503933 containerd[2002]: time="2026-04-17T00:22:35.503888020Z" level=info msg="CreateContainer within sandbox \"669910a5449f71f8dafae17dfa7045ad962041f7980134b311df131dd6c1d9d2\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1718023eb7296bc80406a33c9bc7ad9ff9d21c74726bb89861a28081469342cb\"" Apr 17 00:22:35.516367 containerd[2002]: time="2026-04-17T00:22:35.516320714Z" level=info msg="StartContainer for \"1718023eb7296bc80406a33c9bc7ad9ff9d21c74726bb89861a28081469342cb\"" Apr 17 00:22:35.518957 containerd[2002]: time="2026-04-17T00:22:35.518824878Z" level=info msg="connecting to shim 1718023eb7296bc80406a33c9bc7ad9ff9d21c74726bb89861a28081469342cb" address="unix:///run/containerd/s/0b6b83aee491faa1b4194fb149bf8e0ff8152aacd58863a04d4e9759cd60ae60" protocol=ttrpc version=3 Apr 17 00:22:35.591227 systemd[1]: Started cri-containerd-1718023eb7296bc80406a33c9bc7ad9ff9d21c74726bb89861a28081469342cb.scope - libcontainer container 1718023eb7296bc80406a33c9bc7ad9ff9d21c74726bb89861a28081469342cb. 
Apr 17 00:22:35.685679 containerd[2002]: time="2026-04-17T00:22:35.685623619Z" level=info msg="StartContainer for \"1718023eb7296bc80406a33c9bc7ad9ff9d21c74726bb89861a28081469342cb\" returns successfully" Apr 17 00:22:36.217168 kubelet[3317]: I0417 00:22:36.204142 3317 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 17 00:22:36.226077 kubelet[3317]: I0417 00:22:36.225928 3317 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 17 00:22:36.336429 sshd[6120]: Accepted publickey for core from 50.85.169.122 port 44928 ssh2: RSA SHA256:Wn1bWdRXva+ZTDpuZ5i38vIIX/QMobuurL9Av6c2ILM Apr 17 00:22:36.343837 sshd-session[6120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 00:22:36.357828 systemd-logind[1959]: New session 10 of user core. Apr 17 00:22:36.363604 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 17 00:22:38.067256 sshd[6179]: Connection closed by 50.85.169.122 port 44928 Apr 17 00:22:38.068104 sshd-session[6120]: pam_unix(sshd:session): session closed for user core Apr 17 00:22:38.076363 systemd-logind[1959]: Session 10 logged out. Waiting for processes to exit. Apr 17 00:22:38.076995 systemd[1]: sshd@9-172.31.17.163:22-50.85.169.122:44928.service: Deactivated successfully. Apr 17 00:22:38.079828 systemd[1]: session-10.scope: Deactivated successfully. Apr 17 00:22:38.082535 systemd-logind[1959]: Removed session 10. Apr 17 00:22:38.244344 systemd[1]: Started sshd@10-172.31.17.163:22-50.85.169.122:44940.service - OpenSSH per-connection server daemon (50.85.169.122:44940). 
Apr 17 00:22:39.178623 sshd[6213]: Accepted publickey for core from 50.85.169.122 port 44940 ssh2: RSA SHA256:Wn1bWdRXva+ZTDpuZ5i38vIIX/QMobuurL9Av6c2ILM Apr 17 00:22:39.180837 sshd-session[6213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 00:22:39.190593 systemd-logind[1959]: New session 11 of user core. Apr 17 00:22:39.199231 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 17 00:22:40.007757 sshd[6216]: Connection closed by 50.85.169.122 port 44940 Apr 17 00:22:40.010621 sshd-session[6213]: pam_unix(sshd:session): session closed for user core Apr 17 00:22:40.022931 systemd[1]: sshd@10-172.31.17.163:22-50.85.169.122:44940.service: Deactivated successfully. Apr 17 00:22:40.027908 systemd[1]: session-11.scope: Deactivated successfully. Apr 17 00:22:40.036239 systemd-logind[1959]: Session 11 logged out. Waiting for processes to exit. Apr 17 00:22:40.037801 systemd-logind[1959]: Removed session 11. Apr 17 00:22:40.191963 systemd[1]: Started sshd@11-172.31.17.163:22-50.85.169.122:54164.service - OpenSSH per-connection server daemon (50.85.169.122:54164). Apr 17 00:22:41.178798 sshd[6225]: Accepted publickey for core from 50.85.169.122 port 54164 ssh2: RSA SHA256:Wn1bWdRXva+ZTDpuZ5i38vIIX/QMobuurL9Av6c2ILM Apr 17 00:22:41.182981 sshd-session[6225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 00:22:41.188804 systemd-logind[1959]: New session 12 of user core. Apr 17 00:22:41.196980 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 17 00:22:41.965954 sshd[6235]: Connection closed by 50.85.169.122 port 54164 Apr 17 00:22:41.966783 sshd-session[6225]: pam_unix(sshd:session): session closed for user core Apr 17 00:22:41.971986 systemd[1]: sshd@11-172.31.17.163:22-50.85.169.122:54164.service: Deactivated successfully. Apr 17 00:22:41.974593 systemd[1]: session-12.scope: Deactivated successfully. 
Apr 17 00:22:41.975839 systemd-logind[1959]: Session 12 logged out. Waiting for processes to exit. Apr 17 00:22:41.977536 systemd-logind[1959]: Removed session 12. Apr 17 00:22:47.132166 systemd[1]: Started sshd@12-172.31.17.163:22-50.85.169.122:54180.service - OpenSSH per-connection server daemon (50.85.169.122:54180). Apr 17 00:22:48.058290 sshd[6270]: Accepted publickey for core from 50.85.169.122 port 54180 ssh2: RSA SHA256:Wn1bWdRXva+ZTDpuZ5i38vIIX/QMobuurL9Av6c2ILM Apr 17 00:22:48.060460 sshd-session[6270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 00:22:48.065797 systemd-logind[1959]: New session 13 of user core. Apr 17 00:22:48.072987 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 17 00:22:48.894935 sshd[6273]: Connection closed by 50.85.169.122 port 54180 Apr 17 00:22:48.897539 sshd-session[6270]: pam_unix(sshd:session): session closed for user core Apr 17 00:22:48.903192 systemd-logind[1959]: Session 13 logged out. Waiting for processes to exit. Apr 17 00:22:48.903954 systemd[1]: sshd@12-172.31.17.163:22-50.85.169.122:54180.service: Deactivated successfully. Apr 17 00:22:48.907015 systemd[1]: session-13.scope: Deactivated successfully. Apr 17 00:22:48.909390 systemd-logind[1959]: Removed session 13. Apr 17 00:22:49.082235 systemd[1]: Started sshd@13-172.31.17.163:22-50.85.169.122:54182.service - OpenSSH per-connection server daemon (50.85.169.122:54182). Apr 17 00:22:50.005356 sshd[6284]: Accepted publickey for core from 50.85.169.122 port 54182 ssh2: RSA SHA256:Wn1bWdRXva+ZTDpuZ5i38vIIX/QMobuurL9Av6c2ILM Apr 17 00:22:50.006845 sshd-session[6284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 00:22:50.013006 systemd-logind[1959]: New session 14 of user core. Apr 17 00:22:50.019350 systemd[1]: Started session-14.scope - Session 14 of User core. 
Apr 17 00:22:51.094346 sshd[6297]: Connection closed by 50.85.169.122 port 54182 Apr 17 00:22:51.098154 sshd-session[6284]: pam_unix(sshd:session): session closed for user core Apr 17 00:22:51.109245 systemd-logind[1959]: Session 14 logged out. Waiting for processes to exit. Apr 17 00:22:51.110090 systemd[1]: sshd@13-172.31.17.163:22-50.85.169.122:54182.service: Deactivated successfully. Apr 17 00:22:51.112589 systemd[1]: session-14.scope: Deactivated successfully. Apr 17 00:22:51.115583 systemd-logind[1959]: Removed session 14. Apr 17 00:22:51.270039 systemd[1]: Started sshd@14-172.31.17.163:22-50.85.169.122:42994.service - OpenSSH per-connection server daemon (50.85.169.122:42994). Apr 17 00:22:52.169019 sshd[6307]: Accepted publickey for core from 50.85.169.122 port 42994 ssh2: RSA SHA256:Wn1bWdRXva+ZTDpuZ5i38vIIX/QMobuurL9Av6c2ILM Apr 17 00:22:52.170941 sshd-session[6307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 00:22:52.177283 systemd-logind[1959]: New session 15 of user core. Apr 17 00:22:52.187033 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 17 00:22:53.729203 sshd[6310]: Connection closed by 50.85.169.122 port 42994 Apr 17 00:22:53.734713 sshd-session[6307]: pam_unix(sshd:session): session closed for user core Apr 17 00:22:53.750421 systemd[1]: sshd@14-172.31.17.163:22-50.85.169.122:42994.service: Deactivated successfully. Apr 17 00:22:53.750674 systemd-logind[1959]: Session 15 logged out. Waiting for processes to exit. Apr 17 00:22:53.753434 systemd[1]: session-15.scope: Deactivated successfully. Apr 17 00:22:53.756319 systemd-logind[1959]: Removed session 15. Apr 17 00:22:53.913011 systemd[1]: Started sshd@15-172.31.17.163:22-50.85.169.122:43000.service - OpenSSH per-connection server daemon (50.85.169.122:43000). 
Apr 17 00:22:54.854177 sshd[6356]: Accepted publickey for core from 50.85.169.122 port 43000 ssh2: RSA SHA256:Wn1bWdRXva+ZTDpuZ5i38vIIX/QMobuurL9Av6c2ILM Apr 17 00:22:54.857006 sshd-session[6356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 00:22:54.863010 systemd-logind[1959]: New session 16 of user core. Apr 17 00:22:54.867918 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 17 00:22:56.170001 sshd[6361]: Connection closed by 50.85.169.122 port 43000 Apr 17 00:22:56.171922 sshd-session[6356]: pam_unix(sshd:session): session closed for user core Apr 17 00:22:56.176792 systemd-logind[1959]: Session 16 logged out. Waiting for processes to exit. Apr 17 00:22:56.177798 systemd[1]: sshd@15-172.31.17.163:22-50.85.169.122:43000.service: Deactivated successfully. Apr 17 00:22:56.180703 systemd[1]: session-16.scope: Deactivated successfully. Apr 17 00:22:56.182818 systemd-logind[1959]: Removed session 16. Apr 17 00:22:56.350046 systemd[1]: Started sshd@16-172.31.17.163:22-50.85.169.122:43016.service - OpenSSH per-connection server daemon (50.85.169.122:43016). Apr 17 00:22:57.285328 sshd[6371]: Accepted publickey for core from 50.85.169.122 port 43016 ssh2: RSA SHA256:Wn1bWdRXva+ZTDpuZ5i38vIIX/QMobuurL9Av6c2ILM Apr 17 00:22:57.287634 sshd-session[6371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 00:22:57.292649 systemd-logind[1959]: New session 17 of user core. Apr 17 00:22:57.298965 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 17 00:22:57.928974 sshd[6374]: Connection closed by 50.85.169.122 port 43016 Apr 17 00:22:57.930946 sshd-session[6371]: pam_unix(sshd:session): session closed for user core Apr 17 00:22:57.935356 systemd[1]: sshd@16-172.31.17.163:22-50.85.169.122:43016.service: Deactivated successfully. Apr 17 00:22:57.938800 systemd[1]: session-17.scope: Deactivated successfully. 
Apr 17 00:22:57.940463 systemd-logind[1959]: Session 17 logged out. Waiting for processes to exit. Apr 17 00:22:57.942280 systemd-logind[1959]: Removed session 17. Apr 17 00:22:59.562364 kubelet[3317]: I0417 00:22:59.543390 3317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-bvsrn" podStartSLOduration=63.393078472 podStartE2EDuration="1m20.524588155s" podCreationTimestamp="2026-04-17 00:21:39 +0000 UTC" firstStartedPulling="2026-04-17 00:22:18.254031388 +0000 UTC m=+61.886718219" lastFinishedPulling="2026-04-17 00:22:35.385541076 +0000 UTC m=+79.018227902" observedRunningTime="2026-04-17 00:22:36.971908678 +0000 UTC m=+80.604595523" watchObservedRunningTime="2026-04-17 00:22:59.524588155 +0000 UTC m=+103.157274999" Apr 17 00:23:03.106866 systemd[1]: Started sshd@17-172.31.17.163:22-50.85.169.122:49112.service - OpenSSH per-connection server daemon (50.85.169.122:49112). Apr 17 00:23:04.100070 sshd[6419]: Accepted publickey for core from 50.85.169.122 port 49112 ssh2: RSA SHA256:Wn1bWdRXva+ZTDpuZ5i38vIIX/QMobuurL9Av6c2ILM Apr 17 00:23:04.103003 sshd-session[6419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 00:23:04.108603 systemd-logind[1959]: New session 18 of user core. Apr 17 00:23:04.117035 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 17 00:23:05.151034 sshd[6422]: Connection closed by 50.85.169.122 port 49112 Apr 17 00:23:05.151984 sshd-session[6419]: pam_unix(sshd:session): session closed for user core Apr 17 00:23:05.157099 systemd-logind[1959]: Session 18 logged out. Waiting for processes to exit. Apr 17 00:23:05.157933 systemd[1]: sshd@17-172.31.17.163:22-50.85.169.122:49112.service: Deactivated successfully. Apr 17 00:23:05.161561 systemd[1]: session-18.scope: Deactivated successfully. Apr 17 00:23:05.166703 systemd-logind[1959]: Removed session 18. 
Apr 17 00:23:10.330408 systemd[1]: Started sshd@18-172.31.17.163:22-50.85.169.122:46816.service - OpenSSH per-connection server daemon (50.85.169.122:46816). Apr 17 00:23:11.277438 sshd[6457]: Accepted publickey for core from 50.85.169.122 port 46816 ssh2: RSA SHA256:Wn1bWdRXva+ZTDpuZ5i38vIIX/QMobuurL9Av6c2ILM Apr 17 00:23:11.280745 sshd-session[6457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 00:23:11.292792 systemd-logind[1959]: New session 19 of user core. Apr 17 00:23:11.296240 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 17 00:23:12.665491 sshd[6460]: Connection closed by 50.85.169.122 port 46816 Apr 17 00:23:12.666407 sshd-session[6457]: pam_unix(sshd:session): session closed for user core Apr 17 00:23:12.681300 systemd[1]: sshd@18-172.31.17.163:22-50.85.169.122:46816.service: Deactivated successfully. Apr 17 00:23:12.686980 systemd[1]: session-19.scope: Deactivated successfully. Apr 17 00:23:12.691909 systemd-logind[1959]: Session 19 logged out. Waiting for processes to exit. Apr 17 00:23:12.695368 systemd-logind[1959]: Removed session 19. Apr 17 00:23:17.846317 systemd[1]: Started sshd@19-172.31.17.163:22-50.85.169.122:46818.service - OpenSSH per-connection server daemon (50.85.169.122:46818). Apr 17 00:23:18.776772 sshd[6474]: Accepted publickey for core from 50.85.169.122 port 46818 ssh2: RSA SHA256:Wn1bWdRXva+ZTDpuZ5i38vIIX/QMobuurL9Av6c2ILM Apr 17 00:23:18.777915 sshd-session[6474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 00:23:18.784378 systemd-logind[1959]: New session 20 of user core. Apr 17 00:23:18.789989 systemd[1]: Started session-20.scope - Session 20 of User core. 
Apr 17 00:23:19.486666 sshd[6477]: Connection closed by 50.85.169.122 port 46818 Apr 17 00:23:19.490141 sshd-session[6474]: pam_unix(sshd:session): session closed for user core Apr 17 00:23:19.498291 systemd[1]: sshd@19-172.31.17.163:22-50.85.169.122:46818.service: Deactivated successfully. Apr 17 00:23:19.503859 systemd[1]: session-20.scope: Deactivated successfully. Apr 17 00:23:19.506619 systemd-logind[1959]: Session 20 logged out. Waiting for processes to exit. Apr 17 00:23:19.510190 systemd-logind[1959]: Removed session 20. Apr 17 00:23:24.664068 systemd[1]: Started sshd@20-172.31.17.163:22-50.85.169.122:50992.service - OpenSSH per-connection server daemon (50.85.169.122:50992). Apr 17 00:23:25.586885 sshd[6513]: Accepted publickey for core from 50.85.169.122 port 50992 ssh2: RSA SHA256:Wn1bWdRXva+ZTDpuZ5i38vIIX/QMobuurL9Av6c2ILM Apr 17 00:23:25.589964 sshd-session[6513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 00:23:25.598531 systemd-logind[1959]: New session 21 of user core. Apr 17 00:23:25.601954 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 17 00:23:26.568234 sshd[6536]: Connection closed by 50.85.169.122 port 50992 Apr 17 00:23:26.569906 sshd-session[6513]: pam_unix(sshd:session): session closed for user core Apr 17 00:23:26.574198 systemd-logind[1959]: Session 21 logged out. Waiting for processes to exit. Apr 17 00:23:26.574935 systemd[1]: sshd@20-172.31.17.163:22-50.85.169.122:50992.service: Deactivated successfully. Apr 17 00:23:26.577812 systemd[1]: session-21.scope: Deactivated successfully. Apr 17 00:23:26.580153 systemd-logind[1959]: Removed session 21. Apr 17 00:23:41.334492 systemd[1]: cri-containerd-b5e02a33cf1acccb1609ade99595b070d5aa443cdf360464998d59897d7d1f43.scope: Deactivated successfully. 
Apr 17 00:23:41.336529 systemd[1]: cri-containerd-b5e02a33cf1acccb1609ade99595b070d5aa443cdf360464998d59897d7d1f43.scope: Consumed 10.420s CPU time, 127.1M memory peak, 70.1M read from disk. Apr 17 00:23:41.559758 containerd[2002]: time="2026-04-17T00:23:41.549606712Z" level=info msg="received container exit event container_id:\"b5e02a33cf1acccb1609ade99595b070d5aa443cdf360464998d59897d7d1f43\" id:\"b5e02a33cf1acccb1609ade99595b070d5aa443cdf360464998d59897d7d1f43\" pid:3842 exit_status:1 exited_at:{seconds:1776385421 nanos:428170943}" Apr 17 00:23:41.776120 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5e02a33cf1acccb1609ade99595b070d5aa443cdf360464998d59897d7d1f43-rootfs.mount: Deactivated successfully. Apr 17 00:23:41.987278 kubelet[3317]: I0417 00:23:41.987224 3317 scope.go:117] "RemoveContainer" containerID="b5e02a33cf1acccb1609ade99595b070d5aa443cdf360464998d59897d7d1f43" Apr 17 00:23:42.037689 containerd[2002]: time="2026-04-17T00:23:42.037484860Z" level=info msg="CreateContainer within sandbox \"e9d94679a1930eff6d7c89af66800a8b07e568f0122c55e50a2c87146aaee8c8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Apr 17 00:23:42.224745 containerd[2002]: time="2026-04-17T00:23:42.223775152Z" level=info msg="Container a38c0c6268656e1a84ffc7575819b56887bb4130ca50005b010014cea271858f: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:23:42.249510 containerd[2002]: time="2026-04-17T00:23:42.249347589Z" level=info msg="CreateContainer within sandbox \"e9d94679a1930eff6d7c89af66800a8b07e568f0122c55e50a2c87146aaee8c8\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"a38c0c6268656e1a84ffc7575819b56887bb4130ca50005b010014cea271858f\"" Apr 17 00:23:42.252605 containerd[2002]: time="2026-04-17T00:23:42.252562942Z" level=info msg="StartContainer for \"a38c0c6268656e1a84ffc7575819b56887bb4130ca50005b010014cea271858f\"" Apr 17 00:23:42.258082 containerd[2002]: time="2026-04-17T00:23:42.258029014Z" 
level=info msg="connecting to shim a38c0c6268656e1a84ffc7575819b56887bb4130ca50005b010014cea271858f" address="unix:///run/containerd/s/f982c24ea0bdaa46d0796efc40ada4e70ba41243fbc130284d1fe528039e4cc1" protocol=ttrpc version=3 Apr 17 00:23:42.336238 systemd[1]: Started cri-containerd-a38c0c6268656e1a84ffc7575819b56887bb4130ca50005b010014cea271858f.scope - libcontainer container a38c0c6268656e1a84ffc7575819b56887bb4130ca50005b010014cea271858f. Apr 17 00:23:42.392777 containerd[2002]: time="2026-04-17T00:23:42.392711897Z" level=info msg="StartContainer for \"a38c0c6268656e1a84ffc7575819b56887bb4130ca50005b010014cea271858f\" returns successfully" Apr 17 00:23:42.748192 systemd[1]: cri-containerd-c7df7105c20e1459238ea9aa38b3aae5426aafddba1d4cc9a7d2c4fb0a828d1f.scope: Deactivated successfully. Apr 17 00:23:42.749227 systemd[1]: cri-containerd-c7df7105c20e1459238ea9aa38b3aae5426aafddba1d4cc9a7d2c4fb0a828d1f.scope: Consumed 3.775s CPU time, 85.9M memory peak, 92.1M read from disk. Apr 17 00:23:42.752568 containerd[2002]: time="2026-04-17T00:23:42.752520487Z" level=info msg="received container exit event container_id:\"c7df7105c20e1459238ea9aa38b3aae5426aafddba1d4cc9a7d2c4fb0a828d1f\" id:\"c7df7105c20e1459238ea9aa38b3aae5426aafddba1d4cc9a7d2c4fb0a828d1f\" pid:3129 exit_status:1 exited_at:{seconds:1776385422 nanos:751351170}" Apr 17 00:23:42.786427 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7df7105c20e1459238ea9aa38b3aae5426aafddba1d4cc9a7d2c4fb0a828d1f-rootfs.mount: Deactivated successfully. 
Apr 17 00:23:42.983596 kubelet[3317]: I0417 00:23:42.983322 3317 scope.go:117] "RemoveContainer" containerID="c7df7105c20e1459238ea9aa38b3aae5426aafddba1d4cc9a7d2c4fb0a828d1f" Apr 17 00:23:42.986675 containerd[2002]: time="2026-04-17T00:23:42.986635235Z" level=info msg="CreateContainer within sandbox \"5c7b6a28edd490839e36e779e89edfb4842163352bfa9da389dac7096f1b2a71\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 17 00:23:43.014668 containerd[2002]: time="2026-04-17T00:23:43.014555645Z" level=info msg="Container 2bb6e6184112ee7485628b0c643663581d06e567fea1e172f89870c1ef6e0867: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:23:43.015354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2731087823.mount: Deactivated successfully. Apr 17 00:23:43.036443 containerd[2002]: time="2026-04-17T00:23:43.036160878Z" level=info msg="CreateContainer within sandbox \"5c7b6a28edd490839e36e779e89edfb4842163352bfa9da389dac7096f1b2a71\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2bb6e6184112ee7485628b0c643663581d06e567fea1e172f89870c1ef6e0867\"" Apr 17 00:23:43.037326 containerd[2002]: time="2026-04-17T00:23:43.037292764Z" level=info msg="StartContainer for \"2bb6e6184112ee7485628b0c643663581d06e567fea1e172f89870c1ef6e0867\"" Apr 17 00:23:43.038606 containerd[2002]: time="2026-04-17T00:23:43.038576600Z" level=info msg="connecting to shim 2bb6e6184112ee7485628b0c643663581d06e567fea1e172f89870c1ef6e0867" address="unix:///run/containerd/s/79c6718fc71b86ab0b06aa70c6d0e3321275921faab83bbfc06de69c344c6d71" protocol=ttrpc version=3 Apr 17 00:23:43.064957 systemd[1]: Started cri-containerd-2bb6e6184112ee7485628b0c643663581d06e567fea1e172f89870c1ef6e0867.scope - libcontainer container 2bb6e6184112ee7485628b0c643663581d06e567fea1e172f89870c1ef6e0867. 
Apr 17 00:23:43.141091 containerd[2002]: time="2026-04-17T00:23:43.141051878Z" level=info msg="StartContainer for \"2bb6e6184112ee7485628b0c643663581d06e567fea1e172f89870c1ef6e0867\" returns successfully" Apr 17 00:23:46.307939 update_engine[1963]: I20260417 00:23:46.307857 1963 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 17 00:23:46.307939 update_engine[1963]: I20260417 00:23:46.307936 1963 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 17 00:23:46.315790 update_engine[1963]: I20260417 00:23:46.315735 1963 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 17 00:23:46.317467 update_engine[1963]: I20260417 00:23:46.317310 1963 omaha_request_params.cc:62] Current group set to stable Apr 17 00:23:46.324161 update_engine[1963]: I20260417 00:23:46.323369 1963 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 17 00:23:46.324161 update_engine[1963]: I20260417 00:23:46.323414 1963 update_attempter.cc:643] Scheduling an action processor start. 
Apr 17 00:23:46.324161 update_engine[1963]: I20260417 00:23:46.323443 1963 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 17 00:23:46.324161 update_engine[1963]: I20260417 00:23:46.323513 1963 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 17 00:23:46.324161 update_engine[1963]: I20260417 00:23:46.323611 1963 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 17 00:23:46.324161 update_engine[1963]: I20260417 00:23:46.323621 1963 omaha_request_action.cc:272] Request: Apr 17 00:23:46.324161 update_engine[1963]: Apr 17 00:23:46.324161 update_engine[1963]: Apr 17 00:23:46.324161 update_engine[1963]: Apr 17 00:23:46.324161 update_engine[1963]: Apr 17 00:23:46.324161 update_engine[1963]: Apr 17 00:23:46.324161 update_engine[1963]: Apr 17 00:23:46.324161 update_engine[1963]: Apr 17 00:23:46.324161 update_engine[1963]: Apr 17 00:23:46.324161 update_engine[1963]: I20260417 00:23:46.323629 1963 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 17 00:23:46.347605 locksmithd[2014]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 17 00:23:46.350270 update_engine[1963]: I20260417 00:23:46.350113 1963 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 17 00:23:46.351495 update_engine[1963]: I20260417 00:23:46.351454 1963 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 17 00:23:46.361048 update_engine[1963]: E20260417 00:23:46.360980 1963 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 17 00:23:46.361181 update_engine[1963]: I20260417 00:23:46.361114 1963 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 17 00:23:47.292393 systemd[1]: cri-containerd-6001b486a3dc77317ff812fa2f50b5f201934fdfaa7cd94ba2431bcce3ef5c43.scope: Deactivated successfully. 
Apr 17 00:23:47.292780 systemd[1]: cri-containerd-6001b486a3dc77317ff812fa2f50b5f201934fdfaa7cd94ba2431bcce3ef5c43.scope: Consumed 1.596s CPU time, 35.2M memory peak, 49.2M read from disk. Apr 17 00:23:47.298144 containerd[2002]: time="2026-04-17T00:23:47.298021181Z" level=info msg="received container exit event container_id:\"6001b486a3dc77317ff812fa2f50b5f201934fdfaa7cd94ba2431bcce3ef5c43\" id:\"6001b486a3dc77317ff812fa2f50b5f201934fdfaa7cd94ba2431bcce3ef5c43\" pid:3160 exit_status:1 exited_at:{seconds:1776385427 nanos:297474929}" Apr 17 00:23:47.360092 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6001b486a3dc77317ff812fa2f50b5f201934fdfaa7cd94ba2431bcce3ef5c43-rootfs.mount: Deactivated successfully. Apr 17 00:23:48.029131 kubelet[3317]: I0417 00:23:48.029092 3317 scope.go:117] "RemoveContainer" containerID="6001b486a3dc77317ff812fa2f50b5f201934fdfaa7cd94ba2431bcce3ef5c43" Apr 17 00:23:48.032050 containerd[2002]: time="2026-04-17T00:23:48.031990123Z" level=info msg="CreateContainer within sandbox \"80a72d740b17c613af5c5b4dfabd355c3237381f1399321c29f0c463dbd1e21f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Apr 17 00:23:48.052642 containerd[2002]: time="2026-04-17T00:23:48.052596147Z" level=info msg="Container d003f8e1a71d10e7181c0345ecabf0994419aeea907d0c80ead9519e74caf12b: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:23:48.060897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1188653246.mount: Deactivated successfully. 
Apr 17 00:23:48.068941 containerd[2002]: time="2026-04-17T00:23:48.068891949Z" level=info msg="CreateContainer within sandbox \"80a72d740b17c613af5c5b4dfabd355c3237381f1399321c29f0c463dbd1e21f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"d003f8e1a71d10e7181c0345ecabf0994419aeea907d0c80ead9519e74caf12b\"" Apr 17 00:23:48.070759 containerd[2002]: time="2026-04-17T00:23:48.069738320Z" level=info msg="StartContainer for \"d003f8e1a71d10e7181c0345ecabf0994419aeea907d0c80ead9519e74caf12b\"" Apr 17 00:23:48.071163 containerd[2002]: time="2026-04-17T00:23:48.071131873Z" level=info msg="connecting to shim d003f8e1a71d10e7181c0345ecabf0994419aeea907d0c80ead9519e74caf12b" address="unix:///run/containerd/s/a0da552cce44bd85c09e26ca8225a5ba2fd181d36dfd4020718f45fb7453b65e" protocol=ttrpc version=3 Apr 17 00:23:48.098988 systemd[1]: Started cri-containerd-d003f8e1a71d10e7181c0345ecabf0994419aeea907d0c80ead9519e74caf12b.scope - libcontainer container d003f8e1a71d10e7181c0345ecabf0994419aeea907d0c80ead9519e74caf12b. Apr 17 00:23:48.158243 containerd[2002]: time="2026-04-17T00:23:48.158207661Z" level=info msg="StartContainer for \"d003f8e1a71d10e7181c0345ecabf0994419aeea907d0c80ead9519e74caf12b\" returns successfully" Apr 17 00:23:50.702093 kubelet[3317]: E0417 00:23:50.702023 3317 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-163?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 17 00:23:54.161059 systemd[1]: cri-containerd-a38c0c6268656e1a84ffc7575819b56887bb4130ca50005b010014cea271858f.scope: Deactivated successfully. Apr 17 00:23:54.161548 systemd[1]: cri-containerd-a38c0c6268656e1a84ffc7575819b56887bb4130ca50005b010014cea271858f.scope: Consumed 331ms CPU time, 85.5M memory peak, 46.2M read from disk. 
Apr 17 00:23:54.167980 containerd[2002]: time="2026-04-17T00:23:54.167938022Z" level=info msg="received container exit event container_id:\"a38c0c6268656e1a84ffc7575819b56887bb4130ca50005b010014cea271858f\" id:\"a38c0c6268656e1a84ffc7575819b56887bb4130ca50005b010014cea271858f\" pid:6676 exit_status:1 exited_at:{seconds:1776385434 nanos:167668829}" Apr 17 00:23:54.197346 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a38c0c6268656e1a84ffc7575819b56887bb4130ca50005b010014cea271858f-rootfs.mount: Deactivated successfully.