Mar 6 03:01:20.885640 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Mar 5 23:16:40 -00 2026
Mar 6 03:01:20.885676 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5bef16c10382b6f77f9493af2297475832ff2f09f1ada4155425ad9b32dd6e53
Mar 6 03:01:20.885696 kernel: BIOS-provided physical RAM map:
Mar 6 03:01:20.885708 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 6 03:01:20.885719 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Mar 6 03:01:20.885731 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Mar 6 03:01:20.885745 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Mar 6 03:01:20.885758 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Mar 6 03:01:20.885770 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Mar 6 03:01:20.885783 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Mar 6 03:01:20.885795 kernel: NX (Execute Disable) protection: active
Mar 6 03:01:20.885810 kernel: APIC: Static calls initialized
Mar 6 03:01:20.885822 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Mar 6 03:01:20.885835 kernel: extended physical RAM map:
Mar 6 03:01:20.885851 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 6 03:01:20.885864 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable
Mar 6 03:01:20.885881 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable
Mar 6 03:01:20.885895 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable
Mar 6 03:01:20.885908 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Mar 6 03:01:20.885922 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Mar 6 03:01:20.885935 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Mar 6 03:01:20.885949 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
Mar 6 03:01:20.885962 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Mar 6 03:01:20.885976 kernel: efi: EFI v2.7 by EDK II
Mar 6 03:01:20.885990 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77015518
Mar 6 03:01:20.886003 kernel: secureboot: Secure boot disabled
Mar 6 03:01:20.886016 kernel: SMBIOS 2.7 present.
Mar 6 03:01:20.886032 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Mar 6 03:01:20.886046 kernel: DMI: Memory slots populated: 1/1
Mar 6 03:01:20.886059 kernel: Hypervisor detected: KVM
Mar 6 03:01:20.886073 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Mar 6 03:01:20.886086 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 6 03:01:20.886100 kernel: kvm-clock: using sched offset of 5269479678 cycles
Mar 6 03:01:20.886114 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 6 03:01:20.886128 kernel: tsc: Detected 2499.996 MHz processor
Mar 6 03:01:20.886142 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 6 03:01:20.886156 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 6 03:01:20.886172 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Mar 6 03:01:20.886186 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 6 03:01:20.886200 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 6 03:01:20.886220 kernel: Using GB pages for direct mapping
Mar 6 03:01:20.886234 kernel: ACPI: Early table checksum verification disabled
Mar 6 03:01:20.886248 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Mar 6 03:01:20.886263 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Mar 6 03:01:20.887320 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Mar 6 03:01:20.887340 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Mar 6 03:01:20.887355 kernel: ACPI: FACS 0x00000000789D0000 000040
Mar 6 03:01:20.887370 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Mar 6 03:01:20.887385 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Mar 6 03:01:20.887400 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Mar 6 03:01:20.887415 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Mar 6 03:01:20.887429 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Mar 6 03:01:20.887449 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Mar 6 03:01:20.887464 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Mar 6 03:01:20.887478 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Mar 6 03:01:20.887493 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Mar 6 03:01:20.887508 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Mar 6 03:01:20.887523 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Mar 6 03:01:20.887537 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Mar 6 03:01:20.887552 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Mar 6 03:01:20.887569 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Mar 6 03:01:20.887583 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Mar 6 03:01:20.887598 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Mar 6 03:01:20.887613 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Mar 6 03:01:20.887628 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Mar 6 03:01:20.887642 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Mar 6 03:01:20.887657 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Mar 6 03:01:20.887671 kernel: NUMA: Initialized distance table, cnt=1
Mar 6 03:01:20.887686 kernel: NODE_DATA(0) allocated [mem 0x7a8eedc0-0x7a8f5fff]
Mar 6 03:01:20.887704 kernel: Zone ranges:
Mar 6 03:01:20.887718 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 6 03:01:20.887733 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Mar 6 03:01:20.887747 kernel: Normal empty
Mar 6 03:01:20.887762 kernel: Device empty
Mar 6 03:01:20.887776 kernel: Movable zone start for each node
Mar 6 03:01:20.887791 kernel: Early memory node ranges
Mar 6 03:01:20.887805 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 6 03:01:20.887820 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Mar 6 03:01:20.887834 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Mar 6 03:01:20.887852 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Mar 6 03:01:20.887866 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 6 03:01:20.887881 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 6 03:01:20.887896 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Mar 6 03:01:20.887911 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Mar 6 03:01:20.887925 kernel: ACPI: PM-Timer IO Port: 0xb008
Mar 6 03:01:20.887940 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 6 03:01:20.887955 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Mar 6 03:01:20.887969 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 6 03:01:20.887987 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 6 03:01:20.888002 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 6 03:01:20.888017 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 6 03:01:20.888032 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 6 03:01:20.888046 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 6 03:01:20.888060 kernel: TSC deadline timer available
Mar 6 03:01:20.888075 kernel: CPU topo: Max. logical packages: 1
Mar 6 03:01:20.888089 kernel: CPU topo: Max. logical dies: 1
Mar 6 03:01:20.888104 kernel: CPU topo: Max. dies per package: 1
Mar 6 03:01:20.888118 kernel: CPU topo: Max. threads per core: 2
Mar 6 03:01:20.888136 kernel: CPU topo: Num. cores per package: 1
Mar 6 03:01:20.888151 kernel: CPU topo: Num. threads per package: 2
Mar 6 03:01:20.888166 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Mar 6 03:01:20.888180 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 6 03:01:20.888195 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Mar 6 03:01:20.888210 kernel: Booting paravirtualized kernel on KVM
Mar 6 03:01:20.888224 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 6 03:01:20.888239 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 6 03:01:20.888254 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Mar 6 03:01:20.888271 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Mar 6 03:01:20.889327 kernel: pcpu-alloc: [0] 0 1
Mar 6 03:01:20.889343 kernel: kvm-guest: PV spinlocks enabled
Mar 6 03:01:20.889357 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 6 03:01:20.889374 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5bef16c10382b6f77f9493af2297475832ff2f09f1ada4155425ad9b32dd6e53
Mar 6 03:01:20.889389 kernel: random: crng init done
Mar 6 03:01:20.889403 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 6 03:01:20.889418 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 6 03:01:20.889436 kernel: Fallback order for Node 0: 0
Mar 6 03:01:20.889450 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451
Mar 6 03:01:20.889465 kernel: Policy zone: DMA32
Mar 6 03:01:20.889489 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 6 03:01:20.889507 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 6 03:01:20.889521 kernel: Kernel/User page tables isolation: enabled
Mar 6 03:01:20.889536 kernel: ftrace: allocating 40099 entries in 157 pages
Mar 6 03:01:20.889551 kernel: ftrace: allocated 157 pages with 5 groups
Mar 6 03:01:20.889566 kernel: Dynamic Preempt: voluntary
Mar 6 03:01:20.889581 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 6 03:01:20.889597 kernel: rcu: RCU event tracing is enabled.
Mar 6 03:01:20.889612 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 6 03:01:20.889630 kernel: Trampoline variant of Tasks RCU enabled.
Mar 6 03:01:20.889645 kernel: Rude variant of Tasks RCU enabled.
Mar 6 03:01:20.889660 kernel: Tracing variant of Tasks RCU enabled.
Mar 6 03:01:20.889675 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 6 03:01:20.889690 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 6 03:01:20.889708 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 6 03:01:20.889723 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 6 03:01:20.889738 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 6 03:01:20.889753 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 6 03:01:20.889768 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 6 03:01:20.889783 kernel: Console: colour dummy device 80x25
Mar 6 03:01:20.889798 kernel: printk: legacy console [tty0] enabled
Mar 6 03:01:20.889813 kernel: printk: legacy console [ttyS0] enabled
Mar 6 03:01:20.889831 kernel: ACPI: Core revision 20240827
Mar 6 03:01:20.889846 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Mar 6 03:01:20.889861 kernel: APIC: Switch to symmetric I/O mode setup
Mar 6 03:01:20.889876 kernel: x2apic enabled
Mar 6 03:01:20.889891 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 6 03:01:20.889905 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Mar 6 03:01:20.889921 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Mar 6 03:01:20.889936 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Mar 6 03:01:20.889950 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Mar 6 03:01:20.889966 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 6 03:01:20.889983 kernel: Spectre V2 : Mitigation: Retpolines
Mar 6 03:01:20.889998 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 6 03:01:20.890013 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Mar 6 03:01:20.890028 kernel: RETBleed: Vulnerable
Mar 6 03:01:20.890042 kernel: Speculative Store Bypass: Vulnerable
Mar 6 03:01:20.890056 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 6 03:01:20.890071 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 6 03:01:20.890085 kernel: GDS: Unknown: Dependent on hypervisor status
Mar 6 03:01:20.890099 kernel: active return thunk: its_return_thunk
Mar 6 03:01:20.890114 kernel: ITS: Mitigation: Aligned branch/return thunks
Mar 6 03:01:20.890129 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 6 03:01:20.890146 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 6 03:01:20.890161 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 6 03:01:20.890176 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Mar 6 03:01:20.890188 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Mar 6 03:01:20.890201 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Mar 6 03:01:20.890214 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Mar 6 03:01:20.890230 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Mar 6 03:01:20.890243 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Mar 6 03:01:20.890256 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 6 03:01:20.890269 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Mar 6 03:01:20.890742 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Mar 6 03:01:20.890761 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Mar 6 03:01:20.890775 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Mar 6 03:01:20.890790 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Mar 6 03:01:20.890804 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Mar 6 03:01:20.890820 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Mar 6 03:01:20.890835 kernel: Freeing SMP alternatives memory: 32K
Mar 6 03:01:20.890849 kernel: pid_max: default: 32768 minimum: 301
Mar 6 03:01:20.890864 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 6 03:01:20.890879 kernel: landlock: Up and running.
Mar 6 03:01:20.890893 kernel: SELinux: Initializing.
Mar 6 03:01:20.890908 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 6 03:01:20.890927 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 6 03:01:20.890942 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Mar 6 03:01:20.890956 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Mar 6 03:01:20.893065 kernel: signal: max sigframe size: 3632
Mar 6 03:01:20.893158 kernel: rcu: Hierarchical SRCU implementation.
Mar 6 03:01:20.893175 kernel: rcu: Max phase no-delay instances is 400.
Mar 6 03:01:20.893189 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 6 03:01:20.893204 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 6 03:01:20.893219 kernel: smp: Bringing up secondary CPUs ...
Mar 6 03:01:20.893239 kernel: smpboot: x86: Booting SMP configuration:
Mar 6 03:01:20.893255 kernel: .... node #0, CPUs: #1
Mar 6 03:01:20.893271 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Mar 6 03:01:20.893308 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Mar 6 03:01:20.893323 kernel: smp: Brought up 1 node, 2 CPUs
Mar 6 03:01:20.893339 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Mar 6 03:01:20.893354 kernel: Memory: 1899860K/2037804K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46196K init, 2564K bss, 133380K reserved, 0K cma-reserved)
Mar 6 03:01:20.893370 kernel: devtmpfs: initialized
Mar 6 03:01:20.893385 kernel: x86/mm: Memory block size: 128MB
Mar 6 03:01:20.893404 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Mar 6 03:01:20.893420 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 6 03:01:20.893436 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 6 03:01:20.893451 kernel: pinctrl core: initialized pinctrl subsystem
Mar 6 03:01:20.893466 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 6 03:01:20.893481 kernel: audit: initializing netlink subsys (disabled)
Mar 6 03:01:20.893497 kernel: audit: type=2000 audit(1772766078.982:1): state=initialized audit_enabled=0 res=1
Mar 6 03:01:20.893512 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 6 03:01:20.893527 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 6 03:01:20.893545 kernel: cpuidle: using governor menu
Mar 6 03:01:20.893560 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 6 03:01:20.893575 kernel: dca service started, version 1.12.1
Mar 6 03:01:20.893590 kernel: PCI: Using configuration type 1 for base access
Mar 6 03:01:20.893606 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 6 03:01:20.893622 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 6 03:01:20.893638 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 6 03:01:20.893653 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 6 03:01:20.893673 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 6 03:01:20.893688 kernel: ACPI: Added _OSI(Module Device)
Mar 6 03:01:20.893704 kernel: ACPI: Added _OSI(Processor Device)
Mar 6 03:01:20.893720 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 6 03:01:20.893736 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Mar 6 03:01:20.893751 kernel: ACPI: Interpreter enabled
Mar 6 03:01:20.893765 kernel: ACPI: PM: (supports S0 S5)
Mar 6 03:01:20.893780 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 6 03:01:20.893795 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 6 03:01:20.893809 kernel: PCI: Using E820 reservations for host bridge windows
Mar 6 03:01:20.893826 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Mar 6 03:01:20.893841 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 6 03:01:20.894066 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Mar 6 03:01:20.894195 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Mar 6 03:01:20.894350 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Mar 6 03:01:20.894368 kernel: acpiphp: Slot [3] registered
Mar 6 03:01:20.894383 kernel: acpiphp: Slot [4] registered
Mar 6 03:01:20.894401 kernel: acpiphp: Slot [5] registered
Mar 6 03:01:20.894416 kernel: acpiphp: Slot [6] registered
Mar 6 03:01:20.894430 kernel: acpiphp: Slot [7] registered
Mar 6 03:01:20.894444 kernel: acpiphp: Slot [8] registered
Mar 6 03:01:20.894458 kernel: acpiphp: Slot [9] registered
Mar 6 03:01:20.894473 kernel: acpiphp: Slot [10] registered
Mar 6 03:01:20.894487 kernel: acpiphp: Slot [11] registered
Mar 6 03:01:20.894501 kernel: acpiphp: Slot [12] registered
Mar 6 03:01:20.894516 kernel: acpiphp: Slot [13] registered
Mar 6 03:01:20.894533 kernel: acpiphp: Slot [14] registered
Mar 6 03:01:20.894547 kernel: acpiphp: Slot [15] registered
Mar 6 03:01:20.894562 kernel: acpiphp: Slot [16] registered
Mar 6 03:01:20.894576 kernel: acpiphp: Slot [17] registered
Mar 6 03:01:20.894590 kernel: acpiphp: Slot [18] registered
Mar 6 03:01:20.894604 kernel: acpiphp: Slot [19] registered
Mar 6 03:01:20.894618 kernel: acpiphp: Slot [20] registered
Mar 6 03:01:20.894632 kernel: acpiphp: Slot [21] registered
Mar 6 03:01:20.894646 kernel: acpiphp: Slot [22] registered
Mar 6 03:01:20.894660 kernel: acpiphp: Slot [23] registered
Mar 6 03:01:20.894677 kernel: acpiphp: Slot [24] registered
Mar 6 03:01:20.894691 kernel: acpiphp: Slot [25] registered
Mar 6 03:01:20.894705 kernel: acpiphp: Slot [26] registered
Mar 6 03:01:20.894719 kernel: acpiphp: Slot [27] registered
Mar 6 03:01:20.894733 kernel: acpiphp: Slot [28] registered
Mar 6 03:01:20.894747 kernel: acpiphp: Slot [29] registered
Mar 6 03:01:20.894761 kernel: acpiphp: Slot [30] registered
Mar 6 03:01:20.894775 kernel: acpiphp: Slot [31] registered
Mar 6 03:01:20.894790 kernel: PCI host bridge to bus 0000:00
Mar 6 03:01:20.894921 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 6 03:01:20.895033 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 6 03:01:20.895143 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 6 03:01:20.895256 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Mar 6 03:01:20.896484 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Mar 6 03:01:20.896625 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 6 03:01:20.896791 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Mar 6 03:01:20.896945 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Mar 6 03:01:20.897101 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint
Mar 6 03:01:20.897231 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Mar 6 03:01:20.898148 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Mar 6 03:01:20.898332 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Mar 6 03:01:20.898472 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Mar 6 03:01:20.898612 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Mar 6 03:01:20.898746 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Mar 6 03:01:20.898879 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Mar 6 03:01:20.899037 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint
Mar 6 03:01:20.899170 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref]
Mar 6 03:01:20.901368 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref]
Mar 6 03:01:20.901537 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 6 03:01:20.901682 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint
Mar 6 03:01:20.901811 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff]
Mar 6 03:01:20.901944 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint
Mar 6 03:01:20.902070 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff]
Mar 6 03:01:20.902089 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 6 03:01:20.902105 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 6 03:01:20.902120 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 6 03:01:20.902139 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 6 03:01:20.902154 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar 6 03:01:20.902169 kernel: iommu: Default domain type: Translated
Mar 6 03:01:20.902184 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 6 03:01:20.902199 kernel: efivars: Registered efivars operations
Mar 6 03:01:20.902214 kernel: PCI: Using ACPI for IRQ routing
Mar 6 03:01:20.902228 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 6 03:01:20.902242 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff]
Mar 6 03:01:20.902256 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Mar 6 03:01:20.902272 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Mar 6 03:01:20.906537 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Mar 6 03:01:20.906684 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Mar 6 03:01:20.906822 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 6 03:01:20.906842 kernel: vgaarb: loaded
Mar 6 03:01:20.906859 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Mar 6 03:01:20.906875 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Mar 6 03:01:20.906891 kernel: clocksource: Switched to clocksource kvm-clock
Mar 6 03:01:20.906907 kernel: VFS: Disk quotas dquot_6.6.0
Mar 6 03:01:20.906929 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 6 03:01:20.906945 kernel: pnp: PnP ACPI init
Mar 6 03:01:20.906961 kernel: pnp: PnP ACPI: found 5 devices
Mar 6 03:01:20.906977 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 6 03:01:20.906993 kernel: NET: Registered PF_INET protocol family
Mar 6 03:01:20.907009 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 6 03:01:20.907025 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 6 03:01:20.907041 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 6 03:01:20.907057 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 6 03:01:20.907076 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Mar 6 03:01:20.907092 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 6 03:01:20.907109 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 6 03:01:20.907125 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 6 03:01:20.907141 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 6 03:01:20.907157 kernel: NET: Registered PF_XDP protocol family
Mar 6 03:01:20.907361 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 6 03:01:20.907489 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 6 03:01:20.907609 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 6 03:01:20.907734 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Mar 6 03:01:20.907854 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Mar 6 03:01:20.907994 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 6 03:01:20.908015 kernel: PCI: CLS 0 bytes, default 64
Mar 6 03:01:20.908032 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 6 03:01:20.908048 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Mar 6 03:01:20.908065 kernel: clocksource: Switched to clocksource tsc
Mar 6 03:01:20.908080 kernel: Initialise system trusted keyrings
Mar 6 03:01:20.908100 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Mar 6 03:01:20.908116 kernel: Key type asymmetric registered
Mar 6 03:01:20.908131 kernel: Asymmetric key parser 'x509' registered
Mar 6 03:01:20.908147 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 6 03:01:20.908163 kernel: io scheduler mq-deadline registered
Mar 6 03:01:20.908179 kernel: io scheduler kyber registered
Mar 6 03:01:20.908195 kernel: io scheduler bfq registered
Mar 6 03:01:20.908211 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 6 03:01:20.908227 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 6 03:01:20.908245 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 6 03:01:20.908261 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 6 03:01:20.909515 kernel: i8042: Warning: Keylock active
Mar 6 03:01:20.909542 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 6 03:01:20.909559 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 6 03:01:20.909741 kernel: rtc_cmos 00:00: RTC can wake from S4
Mar 6 03:01:20.909873 kernel: rtc_cmos 00:00: registered as rtc0
Mar 6 03:01:20.909998 kernel: rtc_cmos 00:00: setting system clock to 2026-03-06T03:01:20 UTC (1772766080)
Mar 6 03:01:20.910128 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Mar 6 03:01:20.910174 kernel: intel_pstate: CPU model not supported
Mar 6 03:01:20.910194 kernel: efifb: probing for efifb
Mar 6 03:01:20.910211 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Mar 6 03:01:20.910229 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Mar 6 03:01:20.910246 kernel: efifb: scrolling: redraw
Mar 6 03:01:20.910263 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 6 03:01:20.913315 kernel: Console: switching to colour frame buffer device 100x37
Mar 6 03:01:20.913361 kernel: fb0: EFI VGA frame buffer device
Mar 6 03:01:20.913379 kernel: pstore: Using crash dump compression: deflate
Mar 6 03:01:20.913396 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 6 03:01:20.913413 kernel: NET: Registered PF_INET6 protocol family
Mar 6 03:01:20.913430 kernel: Segment Routing with IPv6
Mar 6 03:01:20.913447 kernel: In-situ OAM (IOAM) with IPv6
Mar 6 03:01:20.913463 kernel: NET: Registered PF_PACKET protocol family
Mar 6 03:01:20.913480 kernel: Key type dns_resolver registered
Mar 6 03:01:20.913497 kernel: IPI shorthand broadcast: enabled
Mar 6 03:01:20.913513 kernel: sched_clock: Marking stable (2574067406, 145490345)->(2787774242, -68216491)
Mar 6 03:01:20.913533 kernel: registered taskstats version 1
Mar 6 03:01:20.913550 kernel: Loading compiled-in X.509 certificates
Mar 6 03:01:20.913567 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 30893fe9fd219d26109af079e6493e1c8b1c00af'
Mar 6 03:01:20.913584 kernel: Demotion targets for Node 0: null
Mar 6 03:01:20.913601 kernel: Key type .fscrypt registered
Mar 6 03:01:20.913617 kernel: Key type fscrypt-provisioning registered
Mar 6 03:01:20.913634 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 6 03:01:20.913651 kernel: ima: Allocated hash algorithm: sha1
Mar 6 03:01:20.913668 kernel: ima: No architecture policies found
Mar 6 03:01:20.913688 kernel: clk: Disabling unused clocks
Mar 6 03:01:20.913705 kernel: Warning: unable to open an initial console.
Mar 6 03:01:20.913722 kernel: Freeing unused kernel image (initmem) memory: 46196K
Mar 6 03:01:20.913740 kernel: Write protecting the kernel read-only data: 40960k
Mar 6 03:01:20.913760 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Mar 6 03:01:20.913780 kernel: Run /init as init process
Mar 6 03:01:20.913797 kernel: with arguments:
Mar 6 03:01:20.913814 kernel: /init
Mar 6 03:01:20.913830 kernel: with environment:
Mar 6 03:01:20.913846 kernel: HOME=/
Mar 6 03:01:20.913863 kernel: TERM=linux
Mar 6 03:01:20.913883 systemd[1]: Successfully made /usr/ read-only.
Mar 6 03:01:20.913905 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 6 03:01:20.913927 systemd[1]: Detected virtualization amazon.
Mar 6 03:01:20.913944 systemd[1]: Detected architecture x86-64.
Mar 6 03:01:20.913961 systemd[1]: Running in initrd.
Mar 6 03:01:20.913978 systemd[1]: No hostname configured, using default hostname.
Mar 6 03:01:20.913996 systemd[1]: Hostname set to .
Mar 6 03:01:20.914013 systemd[1]: Initializing machine ID from VM UUID.
Mar 6 03:01:20.914030 systemd[1]: Queued start job for default target initrd.target.
Mar 6 03:01:20.914048 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 6 03:01:20.914069 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 6 03:01:20.914088 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 6 03:01:20.914106 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 6 03:01:20.914123 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 6 03:01:20.914143 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 6 03:01:20.914162 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 6 03:01:20.914183 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 6 03:01:20.914200 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 6 03:01:20.914217 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 6 03:01:20.914235 systemd[1]: Reached target paths.target - Path Units.
Mar 6 03:01:20.914255 systemd[1]: Reached target slices.target - Slice Units.
Mar 6 03:01:20.914273 systemd[1]: Reached target swap.target - Swaps.
Mar 6 03:01:20.914931 systemd[1]: Reached target timers.target - Timer Units.
Mar 6 03:01:20.914950 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 6 03:01:20.914968 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 6 03:01:20.914990 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 6 03:01:20.915006 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 6 03:01:20.915024 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 6 03:01:20.915041 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 6 03:01:20.915055 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 6 03:01:20.915072 systemd[1]: Reached target sockets.target - Socket Units.
Mar 6 03:01:20.915089 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 6 03:01:20.915105 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 6 03:01:20.915122 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 6 03:01:20.915144 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Mar 6 03:01:20.915161 systemd[1]: Starting systemd-fsck-usr.service...
Mar 6 03:01:20.915178 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 6 03:01:20.915195 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 6 03:01:20.915212 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 03:01:20.915228 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 6 03:01:20.915310 systemd-journald[188]: Collecting audit messages is disabled.
Mar 6 03:01:20.915351 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 6 03:01:20.915374 systemd[1]: Finished systemd-fsck-usr.service.
Mar 6 03:01:20.915391 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 6 03:01:20.915410 systemd-journald[188]: Journal started
Mar 6 03:01:20.915445 systemd-journald[188]: Runtime Journal (/run/log/journal/ec24613014daece791dfd7aff93000f5) is 4.7M, max 38.1M, 33.3M free.
Mar 6 03:01:20.899330 systemd-modules-load[190]: Inserted module 'overlay'
Mar 6 03:01:20.922314 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 6 03:01:20.929007 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 03:01:20.943448 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 6 03:01:20.944819 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 6 03:01:20.949447 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 6 03:01:20.952228 kernel: Bridge firewalling registered
Mar 6 03:01:20.950030 systemd-modules-load[190]: Inserted module 'br_netfilter'
Mar 6 03:01:20.954676 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 6 03:01:20.965468 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 6 03:01:20.971474 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 6 03:01:20.978449 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 6 03:01:20.978899 systemd-tmpfiles[207]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Mar 6 03:01:20.989810 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 6 03:01:21.000346 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 6 03:01:21.004087 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 6 03:01:21.008010 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 6 03:01:21.004904 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 6 03:01:21.010369 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 6 03:01:21.015444 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 6 03:01:21.042413 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5bef16c10382b6f77f9493af2297475832ff2f09f1ada4155425ad9b32dd6e53
Mar 6 03:01:21.070836 systemd-resolved[229]: Positive Trust Anchors:
Mar 6 03:01:21.071853 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 6 03:01:21.071921 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 6 03:01:21.079063 systemd-resolved[229]: Defaulting to hostname 'linux'.
Mar 6 03:01:21.082182 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 6 03:01:21.082880 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 6 03:01:21.137325 kernel: SCSI subsystem initialized
Mar 6 03:01:21.147309 kernel: Loading iSCSI transport class v2.0-870.
Mar 6 03:01:21.158308 kernel: iscsi: registered transport (tcp)
Mar 6 03:01:21.179464 kernel: iscsi: registered transport (qla4xxx)
Mar 6 03:01:21.179545 kernel: QLogic iSCSI HBA Driver
Mar 6 03:01:21.198128 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 6 03:01:21.219132 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 6 03:01:21.222087 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 6 03:01:21.267827 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 6 03:01:21.270029 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 6 03:01:21.322334 kernel: raid6: avx512x4 gen() 17934 MB/s
Mar 6 03:01:21.340308 kernel: raid6: avx512x2 gen() 17909 MB/s
Mar 6 03:01:21.358323 kernel: raid6: avx512x1 gen() 17891 MB/s
Mar 6 03:01:21.376306 kernel: raid6: avx2x4 gen() 17761 MB/s
Mar 6 03:01:21.394311 kernel: raid6: avx2x2 gen() 17787 MB/s
Mar 6 03:01:21.412588 kernel: raid6: avx2x1 gen() 13536 MB/s
Mar 6 03:01:21.412649 kernel: raid6: using algorithm avx512x4 gen() 17934 MB/s
Mar 6 03:01:21.431587 kernel: raid6: .... xor() 7624 MB/s, rmw enabled
Mar 6 03:01:21.431656 kernel: raid6: using avx512x2 recovery algorithm
Mar 6 03:01:21.452312 kernel: xor: automatically using best checksumming function avx
Mar 6 03:01:21.620315 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 6 03:01:21.627036 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 6 03:01:21.629110 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 6 03:01:21.658652 systemd-udevd[438]: Using default interface naming scheme 'v255'.
Mar 6 03:01:21.665383 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 6 03:01:21.670503 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 6 03:01:21.699923 dracut-pre-trigger[446]: rd.md=0: removing MD RAID activation
Mar 6 03:01:21.703625 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Mar 6 03:01:21.728020 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 6 03:01:21.730002 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 6 03:01:21.787364 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 6 03:01:21.792179 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 6 03:01:21.891311 kernel: cryptd: max_cpu_qlen set to 1000
Mar 6 03:01:21.895028 kernel: ena 0000:00:05.0: ENA device version: 0.10
Mar 6 03:01:21.895350 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Mar 6 03:01:21.907319 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Mar 6 03:01:21.911298 kernel: nvme nvme0: pci function 0000:00:04.0
Mar 6 03:01:21.911550 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Mar 6 03:01:21.914251 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 6 03:01:21.915090 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 03:01:21.917389 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 03:01:21.920571 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 03:01:21.924273 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:1d:01:23:10:fb
Mar 6 03:01:21.926056 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 6 03:01:21.934495 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Mar 6 03:01:21.939895 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 6 03:01:21.948363 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 6 03:01:21.948395 kernel: GPT:9289727 != 33554431
Mar 6 03:01:21.948427 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 6 03:01:21.948446 kernel: GPT:9289727 != 33554431
Mar 6 03:01:21.948466 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 6 03:01:21.948488 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 6 03:01:21.940023 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 03:01:21.952444 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 03:01:21.954926 (udev-worker)[489]: Network interface NamePolicy= disabled on kernel command line.
Mar 6 03:01:21.962342 kernel: AES CTR mode by8 optimization enabled
Mar 6 03:01:22.005826 kernel: nvme nvme0: using unchecked data buffer
Mar 6 03:01:22.008231 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 03:01:22.116550 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 6 03:01:22.151563 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Mar 6 03:01:22.152499 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 6 03:01:22.164869 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Mar 6 03:01:22.182007 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Mar 6 03:01:22.182569 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Mar 6 03:01:22.183884 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 6 03:01:22.184937 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 6 03:01:22.186308 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 6 03:01:22.187967 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 6 03:01:22.190424 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 6 03:01:22.214185 disk-uuid[672]: Primary Header is updated.
Mar 6 03:01:22.214185 disk-uuid[672]: Secondary Entries is updated.
Mar 6 03:01:22.214185 disk-uuid[672]: Secondary Header is updated.
Mar 6 03:01:22.243643 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 6 03:01:22.244168 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 6 03:01:23.270824 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 6 03:01:23.270950 disk-uuid[673]: The operation has completed successfully.
Mar 6 03:01:23.425155 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 6 03:01:23.425297 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 6 03:01:23.455698 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 6 03:01:23.474593 sh[940]: Success
Mar 6 03:01:23.500870 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 6 03:01:23.500976 kernel: device-mapper: uevent: version 1.0.3
Mar 6 03:01:23.501013 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Mar 6 03:01:23.514311 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Mar 6 03:01:23.600839 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 6 03:01:23.605394 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 6 03:01:23.616605 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 6 03:01:23.631316 kernel: BTRFS: device fsid 1235dd15-5252-4928-9c6c-372370c6bfca devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (963)
Mar 6 03:01:23.636027 kernel: BTRFS info (device dm-0): first mount of filesystem 1235dd15-5252-4928-9c6c-372370c6bfca
Mar 6 03:01:23.636103 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 6 03:01:23.723520 kernel: BTRFS info (device dm-0 state E): enabling ssd optimizations
Mar 6 03:01:23.723604 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Mar 6 03:01:23.726165 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Mar 6 03:01:23.737616 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 6 03:01:23.738777 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Mar 6 03:01:23.739758 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 6 03:01:23.740799 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 6 03:01:23.743534 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 6 03:01:23.780506 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (998)
Mar 6 03:01:23.786443 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 993ea71e-e97d-4f5e-b5c7-fdac31a53b6b
Mar 6 03:01:23.786522 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 6 03:01:23.794405 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 6 03:01:23.794484 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Mar 6 03:01:23.802336 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 993ea71e-e97d-4f5e-b5c7-fdac31a53b6b
Mar 6 03:01:23.803959 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 6 03:01:23.807108 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 6 03:01:23.851866 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 6 03:01:23.855006 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 6 03:01:23.894134 systemd-networkd[1132]: lo: Link UP
Mar 6 03:01:23.894148 systemd-networkd[1132]: lo: Gained carrier
Mar 6 03:01:23.895976 systemd-networkd[1132]: Enumeration completed
Mar 6 03:01:23.896113 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 6 03:01:23.896749 systemd-networkd[1132]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 6 03:01:23.896755 systemd-networkd[1132]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 6 03:01:23.897469 systemd[1]: Reached target network.target - Network.
Mar 6 03:01:23.900587 systemd-networkd[1132]: eth0: Link UP
Mar 6 03:01:23.900593 systemd-networkd[1132]: eth0: Gained carrier
Mar 6 03:01:23.900611 systemd-networkd[1132]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 6 03:01:23.918374 systemd-networkd[1132]: eth0: DHCPv4 address 172.31.19.55/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 6 03:01:24.237513 ignition[1073]: Ignition 2.22.0
Mar 6 03:01:24.237528 ignition[1073]: Stage: fetch-offline
Mar 6 03:01:24.237776 ignition[1073]: no configs at "/usr/lib/ignition/base.d"
Mar 6 03:01:24.237787 ignition[1073]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 6 03:01:24.238184 ignition[1073]: Ignition finished successfully
Mar 6 03:01:24.240980 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 6 03:01:24.242551 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 6 03:01:24.272488 ignition[1141]: Ignition 2.22.0
Mar 6 03:01:24.272504 ignition[1141]: Stage: fetch
Mar 6 03:01:24.272883 ignition[1141]: no configs at "/usr/lib/ignition/base.d"
Mar 6 03:01:24.272895 ignition[1141]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 6 03:01:24.273015 ignition[1141]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 6 03:01:24.298822 ignition[1141]: PUT result: OK
Mar 6 03:01:24.301765 ignition[1141]: parsed url from cmdline: ""
Mar 6 03:01:24.301777 ignition[1141]: no config URL provided
Mar 6 03:01:24.301787 ignition[1141]: reading system config file "/usr/lib/ignition/user.ign"
Mar 6 03:01:24.301801 ignition[1141]: no config at "/usr/lib/ignition/user.ign"
Mar 6 03:01:24.301826 ignition[1141]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 6 03:01:24.302878 ignition[1141]: PUT result: OK
Mar 6 03:01:24.302935 ignition[1141]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Mar 6 03:01:24.303995 ignition[1141]: GET result: OK
Mar 6 03:01:24.304164 ignition[1141]: parsing config with SHA512: 2a24bafb0814815caa598b203534c26c9edd55c871c0a4bb903804f4880f387407a2863332699ef00db576fcb189d335172e3b67cfbdf9a5253e3bbcf770f3c4
Mar 6 03:01:24.309764 unknown[1141]: fetched base config from "system"
Mar 6 03:01:24.310459 unknown[1141]: fetched base config from "system"
Mar 6 03:01:24.310959 unknown[1141]: fetched user config from "aws"
Mar 6 03:01:24.311708 ignition[1141]: fetch: fetch complete
Mar 6 03:01:24.311715 ignition[1141]: fetch: fetch passed
Mar 6 03:01:24.311778 ignition[1141]: Ignition finished successfully
Mar 6 03:01:24.314885 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 6 03:01:24.316347 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 6 03:01:24.352782 ignition[1147]: Ignition 2.22.0
Mar 6 03:01:24.352799 ignition[1147]: Stage: kargs
Mar 6 03:01:24.353272 ignition[1147]: no configs at "/usr/lib/ignition/base.d"
Mar 6 03:01:24.353311 ignition[1147]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 6 03:01:24.353431 ignition[1147]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 6 03:01:24.355093 ignition[1147]: PUT result: OK
Mar 6 03:01:24.358246 ignition[1147]: kargs: kargs passed
Mar 6 03:01:24.358342 ignition[1147]: Ignition finished successfully
Mar 6 03:01:24.360822 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 6 03:01:24.362381 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 6 03:01:24.391961 ignition[1153]: Ignition 2.22.0
Mar 6 03:01:24.391980 ignition[1153]: Stage: disks
Mar 6 03:01:24.392384 ignition[1153]: no configs at "/usr/lib/ignition/base.d"
Mar 6 03:01:24.392397 ignition[1153]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 6 03:01:24.392517 ignition[1153]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 6 03:01:24.393705 ignition[1153]: PUT result: OK
Mar 6 03:01:24.396543 ignition[1153]: disks: disks passed
Mar 6 03:01:24.396614 ignition[1153]: Ignition finished successfully
Mar 6 03:01:24.398655 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 6 03:01:24.399252 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 6 03:01:24.399627 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 6 03:01:24.400139 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 6 03:01:24.400719 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 6 03:01:24.401368 systemd[1]: Reached target basic.target - Basic System.
Mar 6 03:01:24.403002 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 6 03:01:24.447136 systemd-fsck[1161]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Mar 6 03:01:24.451121 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 6 03:01:24.452804 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 6 03:01:24.597309 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 16ab7223-a8af-43d2-ad40-7e1bf0ff2a89 r/w with ordered data mode. Quota mode: none.
Mar 6 03:01:24.597856 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 6 03:01:24.598889 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 6 03:01:24.600948 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 6 03:01:24.603463 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 6 03:01:24.605887 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 6 03:01:24.606454 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 6 03:01:24.606491 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 6 03:01:24.615193 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 6 03:01:24.617536 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 6 03:01:24.630483 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1180)
Mar 6 03:01:24.630548 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 993ea71e-e97d-4f5e-b5c7-fdac31a53b6b
Mar 6 03:01:24.635081 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 6 03:01:24.642115 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 6 03:01:24.642197 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Mar 6 03:01:24.643733 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 6 03:01:24.978708 initrd-setup-root[1204]: cut: /sysroot/etc/passwd: No such file or directory
Mar 6 03:01:24.995025 initrd-setup-root[1211]: cut: /sysroot/etc/group: No such file or directory
Mar 6 03:01:25.000310 initrd-setup-root[1218]: cut: /sysroot/etc/shadow: No such file or directory
Mar 6 03:01:25.005046 initrd-setup-root[1225]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 6 03:01:25.300438 systemd-networkd[1132]: eth0: Gained IPv6LL
Mar 6 03:01:25.329896 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 6 03:01:25.332008 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 6 03:01:25.335458 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 6 03:01:25.350655 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 6 03:01:25.353578 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 993ea71e-e97d-4f5e-b5c7-fdac31a53b6b
Mar 6 03:01:25.379802 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 6 03:01:25.389395 ignition[1292]: INFO : Ignition 2.22.0
Mar 6 03:01:25.389395 ignition[1292]: INFO : Stage: mount
Mar 6 03:01:25.391001 ignition[1292]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 6 03:01:25.391001 ignition[1292]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 6 03:01:25.391001 ignition[1292]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 6 03:01:25.391001 ignition[1292]: INFO : PUT result: OK
Mar 6 03:01:25.393978 ignition[1292]: INFO : mount: mount passed
Mar 6 03:01:25.394552 ignition[1292]: INFO : Ignition finished successfully
Mar 6 03:01:25.396101 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 6 03:01:25.397731 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 6 03:01:25.423211 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 6 03:01:25.452314 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1305)
Mar 6 03:01:25.455311 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 993ea71e-e97d-4f5e-b5c7-fdac31a53b6b
Mar 6 03:01:25.455377 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 6 03:01:25.463673 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 6 03:01:25.463754 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Mar 6 03:01:25.465847 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 6 03:01:25.500201 ignition[1322]: INFO : Ignition 2.22.0
Mar 6 03:01:25.500201 ignition[1322]: INFO : Stage: files
Mar 6 03:01:25.501699 ignition[1322]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 6 03:01:25.501699 ignition[1322]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 6 03:01:25.501699 ignition[1322]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 6 03:01:25.502935 ignition[1322]: INFO : PUT result: OK
Mar 6 03:01:25.504504 ignition[1322]: DEBUG : files: compiled without relabeling support, skipping
Mar 6 03:01:25.505357 ignition[1322]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 6 03:01:25.505357 ignition[1322]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 6 03:01:25.518631 ignition[1322]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 6 03:01:25.519813 ignition[1322]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 6 03:01:25.519813 ignition[1322]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 6 03:01:25.519272 unknown[1322]: wrote ssh authorized keys file for user: core
Mar 6 03:01:25.522720 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 6 03:01:25.522720 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 6 03:01:25.589337 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 6 03:01:25.766247 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 6 03:01:25.767468 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 6 03:01:25.767468 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 6 03:01:25.767468 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 6 03:01:25.767468 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 6 03:01:25.767468 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 6 03:01:25.767468 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 6 03:01:25.767468 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 6 03:01:25.767468 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 6 03:01:25.773632 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 6 03:01:25.773632 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 6 03:01:25.773632 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 6 03:01:25.776226 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 6 03:01:25.776226 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 6 03:01:25.776226 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Mar 6 03:01:26.329410 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 6 03:01:27.631152 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 6 03:01:27.631152 ignition[1322]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 6 03:01:27.633599 ignition[1322]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 6 03:01:27.637198 ignition[1322]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 6 03:01:27.637198 ignition[1322]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 6 03:01:27.637198 ignition[1322]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 6 03:01:27.642889 ignition[1322]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 6 03:01:27.642889 ignition[1322]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 6 03:01:27.642889 ignition[1322]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 6 03:01:27.642889 ignition[1322]: INFO : files: files passed
Mar 6 03:01:27.642889 ignition[1322]: INFO : Ignition finished successfully
Mar 6 03:01:27.639753 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 6 03:01:27.641798 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 6 03:01:27.647697 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 6 03:01:27.656845 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 6 03:01:27.662507 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 6 03:01:27.672439 initrd-setup-root-after-ignition[1352]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 6 03:01:27.674213 initrd-setup-root-after-ignition[1352]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 6 03:01:27.676077 initrd-setup-root-after-ignition[1356]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 6 03:01:27.677359 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 6 03:01:27.678455 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 6 03:01:27.680136 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 6 03:01:27.734027 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 6 03:01:27.734150 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 6 03:01:27.735007 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 6 03:01:27.735748 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 6 03:01:27.737151 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 6 03:01:27.738353 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 6 03:01:27.778742 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 6 03:01:27.780900 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 6 03:01:27.805638 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 6 03:01:27.806410 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 6 03:01:27.807477 systemd[1]: Stopped target timers.target - Timer Units.
Mar 6 03:01:27.808337 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 6 03:01:27.808572 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 6 03:01:27.809791 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 6 03:01:27.810608 systemd[1]: Stopped target basic.target - Basic System.
Mar 6 03:01:27.811435 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 6 03:01:27.812175 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 6 03:01:27.812959 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 6 03:01:27.813808 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 6 03:01:27.814646 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 6 03:01:27.815336 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 6 03:01:27.816152 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 6 03:01:27.817452 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 6 03:01:27.818234 systemd[1]: Stopped target swap.target - Swaps.
Mar 6 03:01:27.818983 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 6 03:01:27.819204 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 6 03:01:27.820266 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 6 03:01:27.821260 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 6 03:01:27.821871 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 6 03:01:27.822032 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 6 03:01:27.822705 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 6 03:01:27.822955 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 6 03:01:27.824245 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 6 03:01:27.824519 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 6 03:01:27.825324 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 6 03:01:27.825477 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 6 03:01:27.828334 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 6 03:01:27.829204 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 6 03:01:27.829453 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 6 03:01:27.832768 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 6 03:01:27.834061 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 6 03:01:27.834937 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 6 03:01:27.836383 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 6 03:01:27.837177 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 6 03:01:27.844847 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 6 03:01:27.845798 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 6 03:01:27.868894 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 6 03:01:27.871364 ignition[1376]: INFO : Ignition 2.22.0
Mar 6 03:01:27.871364 ignition[1376]: INFO : Stage: umount
Mar 6 03:01:27.873350 ignition[1376]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 6 03:01:27.873350 ignition[1376]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 6 03:01:27.873350 ignition[1376]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 6 03:01:27.873350 ignition[1376]: INFO : PUT result: OK
Mar 6 03:01:27.876569 ignition[1376]: INFO : umount: umount passed
Mar 6 03:01:27.877356 ignition[1376]: INFO : Ignition finished successfully
Mar 6 03:01:27.878525 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 6 03:01:27.879041 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 6 03:01:27.880214 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 6 03:01:27.880343 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 6 03:01:27.881464 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 6 03:01:27.881529 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 6 03:01:27.881961 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 6 03:01:27.882020 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 6 03:01:27.882507 systemd[1]: Stopped target network.target - Network.
Mar 6 03:01:27.883073 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 6 03:01:27.883138 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 6 03:01:27.883782 systemd[1]: Stopped target paths.target - Path Units.
Mar 6 03:01:27.884391 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 6 03:01:27.888345 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 6 03:01:27.888715 systemd[1]: Stopped target slices.target - Slice Units.
Mar 6 03:01:27.889760 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 6 03:01:27.890453 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 6 03:01:27.890513 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 6 03:01:27.891076 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 6 03:01:27.891127 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 6 03:01:27.891686 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 6 03:01:27.891765 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 6 03:01:27.892334 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 6 03:01:27.892394 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 6 03:01:27.894056 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 6 03:01:27.894578 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 6 03:01:27.898194 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 6 03:01:27.898491 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 6 03:01:27.902003 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 6 03:01:27.902876 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 6 03:01:27.902956 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 6 03:01:27.905903 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 6 03:01:27.906462 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 6 03:01:27.906608 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 6 03:01:27.908514 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 6 03:01:27.909365 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 6 03:01:27.909972 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 6 03:01:27.910022 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 6 03:01:27.911729 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 6 03:01:27.913593 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 6 03:01:27.913661 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 6 03:01:27.917996 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 6 03:01:27.918072 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 6 03:01:27.919094 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 6 03:01:27.919155 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 6 03:01:27.919811 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 6 03:01:27.926735 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 6 03:01:27.941239 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 6 03:01:27.942126 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 6 03:01:27.943582 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 6 03:01:27.943679 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 6 03:01:27.944733 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 6 03:01:27.944777 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 6 03:01:27.946387 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 6 03:01:27.946459 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 6 03:01:27.947319 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 6 03:01:27.947405 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 6 03:01:27.948494 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 6 03:01:27.948560 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 6 03:01:27.952429 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 6 03:01:27.953069 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 6 03:01:27.953166 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 6 03:01:27.956289 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 6 03:01:27.956922 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 6 03:01:27.958062 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 6 03:01:27.958125 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 03:01:27.960272 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 6 03:01:27.963439 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 6 03:01:27.971402 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 6 03:01:27.971523 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 6 03:01:28.034097 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 6 03:01:28.034249 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 6 03:01:28.035639 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 6 03:01:28.036205 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 6 03:01:28.036358 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 6 03:01:28.038181 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 6 03:01:28.067247 systemd[1]: Switching root.
Mar 6 03:01:28.122971 systemd-journald[188]: Journal stopped
Mar 6 03:01:29.818213 systemd-journald[188]: Received SIGTERM from PID 1 (systemd).
Mar 6 03:01:29.818315 kernel: SELinux: policy capability network_peer_controls=1
Mar 6 03:01:29.818349 kernel: SELinux: policy capability open_perms=1
Mar 6 03:01:29.818373 kernel: SELinux: policy capability extended_socket_class=1
Mar 6 03:01:29.818394 kernel: SELinux: policy capability always_check_network=0
Mar 6 03:01:29.818414 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 6 03:01:29.818442 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 6 03:01:29.818464 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 6 03:01:29.818486 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 6 03:01:29.818507 kernel: SELinux: policy capability userspace_initial_context=0
Mar 6 03:01:29.818528 kernel: audit: type=1403 audit(1772766088.494:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 6 03:01:29.818555 systemd[1]: Successfully loaded SELinux policy in 100.315ms.
Mar 6 03:01:29.818582 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.303ms.
Mar 6 03:01:29.818607 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 6 03:01:29.818630 systemd[1]: Detected virtualization amazon.
Mar 6 03:01:29.818658 systemd[1]: Detected architecture x86-64.
Mar 6 03:01:29.818681 systemd[1]: Detected first boot.
Mar 6 03:01:29.818704 systemd[1]: Initializing machine ID from VM UUID.
Mar 6 03:01:29.818726 zram_generator::config[1419]: No configuration found.
Mar 6 03:01:29.818750 kernel: Guest personality initialized and is inactive
Mar 6 03:01:29.818774 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Mar 6 03:01:29.818791 kernel: Initialized host personality
Mar 6 03:01:29.818810 kernel: NET: Registered PF_VSOCK protocol family
Mar 6 03:01:29.818831 systemd[1]: Populated /etc with preset unit settings.
Mar 6 03:01:29.818852 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 6 03:01:29.818872 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 6 03:01:29.818892 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 6 03:01:29.818911 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 6 03:01:29.818934 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 6 03:01:29.818954 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 6 03:01:29.818973 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 6 03:01:29.818993 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 6 03:01:29.819016 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 6 03:01:29.819037 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 6 03:01:29.819059 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 6 03:01:29.819081 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 6 03:01:29.819103 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 6 03:01:29.819129 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 6 03:01:29.819149 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 6 03:01:29.819174 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 6 03:01:29.819194 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 6 03:01:29.819214 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 6 03:01:29.819234 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 6 03:01:29.819253 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 6 03:01:29.819893 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 6 03:01:29.819943 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 6 03:01:29.819965 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 6 03:01:29.819986 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 6 03:01:29.820006 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 6 03:01:29.820027 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 6 03:01:29.820047 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 6 03:01:29.820068 systemd[1]: Reached target slices.target - Slice Units.
Mar 6 03:01:29.820088 systemd[1]: Reached target swap.target - Swaps.
Mar 6 03:01:29.820108 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 6 03:01:29.820131 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 6 03:01:29.820151 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 6 03:01:29.820172 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 6 03:01:29.820192 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 6 03:01:29.820213 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 6 03:01:29.820233 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 6 03:01:29.820254 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 6 03:01:29.820294 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 6 03:01:29.820313 systemd[1]: Mounting media.mount - External Media Directory...
Mar 6 03:01:29.820337 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 6 03:01:29.820357 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 6 03:01:29.820385 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 6 03:01:29.820405 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 6 03:01:29.820427 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 6 03:01:29.820448 systemd[1]: Reached target machines.target - Containers.
Mar 6 03:01:29.820468 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 6 03:01:29.820489 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 6 03:01:29.820513 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 6 03:01:29.820534 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 6 03:01:29.820553 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 6 03:01:29.820570 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 6 03:01:29.820588 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 6 03:01:29.820606 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 6 03:01:29.820625 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 6 03:01:29.820644 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 6 03:01:29.820662 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 6 03:01:29.820684 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 6 03:01:29.820704 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 6 03:01:29.820724 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 6 03:01:29.820744 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 6 03:01:29.820763 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 6 03:01:29.822342 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 6 03:01:29.822369 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 6 03:01:29.822390 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 6 03:01:29.822416 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 6 03:01:29.822437 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 6 03:01:29.822456 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 6 03:01:29.822474 systemd[1]: Stopped verity-setup.service.
Mar 6 03:01:29.822494 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 6 03:01:29.822517 kernel: fuse: init (API version 7.41)
Mar 6 03:01:29.822575 systemd-journald[1498]: Collecting audit messages is disabled.
Mar 6 03:01:29.822622 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 6 03:01:29.822643 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 6 03:01:29.822667 systemd[1]: Mounted media.mount - External Media Directory.
Mar 6 03:01:29.822687 systemd-journald[1498]: Journal started
Mar 6 03:01:29.822726 systemd-journald[1498]: Runtime Journal (/run/log/journal/ec24613014daece791dfd7aff93000f5) is 4.7M, max 38.1M, 33.3M free.
Mar 6 03:01:29.493785 systemd[1]: Queued start job for default target multi-user.target.
Mar 6 03:01:29.517735 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Mar 6 03:01:29.518191 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 6 03:01:29.827313 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 6 03:01:29.842976 kernel: loop: module loaded
Mar 6 03:01:29.830243 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 6 03:01:29.832187 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 6 03:01:29.835240 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 6 03:01:29.838787 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 6 03:01:29.839832 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 6 03:01:29.840064 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 6 03:01:29.842834 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 6 03:01:29.846843 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 6 03:01:29.848753 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 6 03:01:29.849533 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 6 03:01:29.857931 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 6 03:01:29.860606 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 6 03:01:29.861743 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 6 03:01:29.861976 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 6 03:01:29.863038 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 6 03:01:29.864018 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 6 03:01:29.885395 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 6 03:01:29.891000 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 6 03:01:29.891825 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 6 03:01:29.891990 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 6 03:01:29.894236 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 6 03:01:29.902116 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 6 03:01:29.902909 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 6 03:01:29.907461 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 6 03:01:29.910500 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 6 03:01:29.913396 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 6 03:01:29.919530 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 6 03:01:29.920228 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 6 03:01:29.930765 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 6 03:01:29.937231 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 6 03:01:29.943267 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 6 03:01:29.944342 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 6 03:01:29.945267 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 6 03:01:29.946970 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 6 03:01:29.953691 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 6 03:01:29.967852 systemd-journald[1498]: Time spent on flushing to /var/log/journal/ec24613014daece791dfd7aff93000f5 is 147.480ms for 1010 entries.
Mar 6 03:01:29.967852 systemd-journald[1498]: System Journal (/var/log/journal/ec24613014daece791dfd7aff93000f5) is 8M, max 195.6M, 187.6M free.
Mar 6 03:01:30.132072 systemd-journald[1498]: Received client request to flush runtime journal.
Mar 6 03:01:30.132152 kernel: ACPI: bus type drm_connector registered
Mar 6 03:01:30.132191 kernel: loop0: detected capacity change from 0 to 72368
Mar 6 03:01:30.132219 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 6 03:01:29.973011 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 6 03:01:29.981327 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 6 03:01:29.994500 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 6 03:01:30.027504 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 6 03:01:30.029145 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 6 03:01:30.029865 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 6 03:01:30.044770 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 6 03:01:30.073893 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 6 03:01:30.102662 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 6 03:01:30.122539 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 6 03:01:30.139852 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 6 03:01:30.145519 kernel: loop1: detected capacity change from 0 to 219192
Mar 6 03:01:30.192803 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 6 03:01:30.195582 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 6 03:01:30.250736 systemd-tmpfiles[1573]: ACLs are not supported, ignoring.
Mar 6 03:01:30.251197 systemd-tmpfiles[1573]: ACLs are not supported, ignoring.
Mar 6 03:01:30.257698 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 6 03:01:30.520304 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 6 03:01:30.561207 kernel: loop2: detected capacity change from 0 to 128560
Mar 6 03:01:30.717368 kernel: loop3: detected capacity change from 0 to 110984
Mar 6 03:01:30.835808 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 6 03:01:30.838049 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 6 03:01:30.880438 systemd-udevd[1580]: Using default interface naming scheme 'v255'.
Mar 6 03:01:30.886312 kernel: loop4: detected capacity change from 0 to 72368
Mar 6 03:01:30.908319 kernel: loop5: detected capacity change from 0 to 219192
Mar 6 03:01:30.937310 kernel: loop6: detected capacity change from 0 to 128560
Mar 6 03:01:30.952777 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 6 03:01:30.958107 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 6 03:01:30.974979 kernel: loop7: detected capacity change from 0 to 110984
Mar 6 03:01:31.003215 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 6 03:01:31.004542 (sd-merge)[1582]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Mar 6 03:01:31.005723 (sd-merge)[1582]: Merged extensions into '/usr'.
Mar 6 03:01:31.015410 systemd[1]: Reload requested from client PID 1546 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 6 03:01:31.015432 systemd[1]: Reloading...
Mar 6 03:01:31.202300 zram_generator::config[1639]: No configuration found.
Mar 6 03:01:31.425900 systemd-networkd[1587]: lo: Link UP
Mar 6 03:01:31.425916 systemd-networkd[1587]: lo: Gained carrier
Mar 6 03:01:31.426843 systemd-networkd[1587]: Enumeration completed
Mar 6 03:01:31.434981 (udev-worker)[1593]: Network interface NamePolicy= disabled on kernel command line.
Mar 6 03:01:31.577299 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Mar 6 03:01:31.577457 kernel: mousedev: PS/2 mouse device common for all mice
Mar 6 03:01:31.599706 kernel: ACPI: button: Power Button [PWRF]
Mar 6 03:01:31.604308 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Mar 6 03:01:31.610313 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Mar 6 03:01:31.646302 kernel: ACPI: button: Sleep Button [SLPF]
Mar 6 03:01:31.665069 systemd-networkd[1587]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 6 03:01:31.665082 systemd-networkd[1587]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 6 03:01:31.671648 systemd-networkd[1587]: eth0: Link UP
Mar 6 03:01:31.671832 systemd-networkd[1587]: eth0: Gained carrier
Mar 6 03:01:31.671862 systemd-networkd[1587]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 6 03:01:31.681357 systemd-networkd[1587]: eth0: DHCPv4 address 172.31.19.55/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 6 03:01:31.738300 ldconfig[1537]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 6 03:01:31.791380 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 6 03:01:31.792197 systemd[1]: Reloading finished in 776 ms.
Mar 6 03:01:31.814455 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 6 03:01:31.815316 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 6 03:01:31.817230 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 6 03:01:31.818803 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 6 03:01:31.868879 systemd[1]: Starting ensure-sysext.service...
Mar 6 03:01:31.871507 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 6 03:01:31.876718 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 6 03:01:31.881705 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 6 03:01:31.924878 systemd[1]: Reload requested from client PID 1785 ('systemctl') (unit ensure-sysext.service)...
Mar 6 03:01:31.925054 systemd[1]: Reloading...
Mar 6 03:01:31.948834 systemd-tmpfiles[1791]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 6 03:01:31.948880 systemd-tmpfiles[1791]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 6 03:01:31.949294 systemd-tmpfiles[1791]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 6 03:01:31.949712 systemd-tmpfiles[1791]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 6 03:01:31.951606 systemd-tmpfiles[1791]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 6 03:01:31.952177 systemd-tmpfiles[1791]: ACLs are not supported, ignoring.
Mar 6 03:01:31.953424 systemd-tmpfiles[1791]: ACLs are not supported, ignoring.
Mar 6 03:01:31.959211 systemd-tmpfiles[1791]: Detected autofs mount point /boot during canonicalization of boot.
Mar 6 03:01:31.960333 systemd-tmpfiles[1791]: Skipping /boot
Mar 6 03:01:31.980227 systemd-tmpfiles[1791]: Detected autofs mount point /boot during canonicalization of boot.
Mar 6 03:01:31.980449 systemd-tmpfiles[1791]: Skipping /boot
Mar 6 03:01:32.126300 zram_generator::config[1831]: No configuration found.
Mar 6 03:01:32.385950 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 6 03:01:32.387173 systemd[1]: Reloading finished in 461 ms.
Mar 6 03:01:32.425097 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 6 03:01:32.426345 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 6 03:01:32.460943 systemd[1]: Finished ensure-sysext.service.
Mar 6 03:01:32.470900 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 6 03:01:32.472191 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 6 03:01:32.477453 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 6 03:01:32.478335 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 6 03:01:32.487300 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 6 03:01:32.491570 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 6 03:01:32.494581 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 6 03:01:32.497502 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 6 03:01:32.498524 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 6 03:01:32.500458 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 6 03:01:32.501600 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 6 03:01:32.504513 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 6 03:01:32.509874 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 6 03:01:32.510582 systemd[1]: Reached target time-set.target - System Time Set.
Mar 6 03:01:32.521085 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 6 03:01:32.531385 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 03:01:32.532102 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 6 03:01:32.548935 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 6 03:01:32.551720 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 6 03:01:32.575774 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 6 03:01:32.576207 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 6 03:01:32.579131 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 6 03:01:32.586862 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 6 03:01:32.588192 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 6 03:01:32.599564 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 6 03:01:32.602006 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 6 03:01:32.612671 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 6 03:01:32.613470 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 6 03:01:32.614790 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 6 03:01:32.634781 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 6 03:01:32.639439 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 6 03:01:32.664118 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 6 03:01:32.666127 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 6 03:01:32.668823 augenrules[1929]: No rules
Mar 6 03:01:32.670128 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 6 03:01:32.671368 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 6 03:01:32.674229 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 6 03:01:32.696715 systemd-resolved[1895]: Positive Trust Anchors:
Mar 6 03:01:32.696733 systemd-resolved[1895]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 6 03:01:32.696781 systemd-resolved[1895]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 6 03:01:32.702261 systemd-resolved[1895]: Defaulting to hostname 'linux'.
Mar 6 03:01:32.704199 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 6 03:01:32.704857 systemd[1]: Reached target network.target - Network.
Mar 6 03:01:32.705423 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 6 03:01:32.730458 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 03:01:32.731155 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 6 03:01:32.731763 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 6 03:01:32.732207 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 6 03:01:32.732680 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Mar 6 03:01:32.733300 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 6 03:01:32.733763 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 6 03:01:32.734125 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 6 03:01:32.734500 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 6 03:01:32.734550 systemd[1]: Reached target paths.target - Path Units.
Mar 6 03:01:32.734900 systemd[1]: Reached target timers.target - Timer Units.
Mar 6 03:01:32.736811 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 6 03:01:32.738635 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 6 03:01:32.741298 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 6 03:01:32.741865 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 6 03:01:32.742260 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 6 03:01:32.744878 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 6 03:01:32.745708 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 6 03:01:32.746809 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 6 03:01:32.748046 systemd[1]: Reached target sockets.target - Socket Units.
Mar 6 03:01:32.748454 systemd[1]: Reached target basic.target - Basic System.
Mar 6 03:01:32.748866 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 6 03:01:32.748903 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 6 03:01:32.749990 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 6 03:01:32.754449 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 6 03:01:32.758608 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 6 03:01:32.761314 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 6 03:01:32.767409 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 6 03:01:32.770866 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 6 03:01:32.772919 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 6 03:01:32.775597 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Mar 6 03:01:32.779490 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 6 03:01:32.783505 systemd[1]: Started ntpd.service - Network Time Service.
Mar 6 03:01:32.791888 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 6 03:01:32.806185 systemd[1]: Starting setup-oem.service - Setup OEM...
Mar 6 03:01:32.810565 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 6 03:01:32.831632 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 6 03:01:32.837358 jq[1945]: false
Mar 6 03:01:32.842042 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 6 03:01:32.845155 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 6 03:01:32.846983 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 6 03:01:32.848839 systemd[1]: Starting update-engine.service - Update Engine...
Mar 6 03:01:32.853752 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 6 03:01:32.859024 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 6 03:01:32.860781 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 6 03:01:32.861065 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 6 03:01:32.862360 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 6 03:01:32.862636 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 6 03:01:32.867025 extend-filesystems[1946]: Found /dev/nvme0n1p6
Mar 6 03:01:32.901055 extend-filesystems[1946]: Found /dev/nvme0n1p9
Mar 6 03:01:32.917323 extend-filesystems[1946]: Checking size of /dev/nvme0n1p9
Mar 6 03:01:32.916790 systemd-networkd[1587]: eth0: Gained IPv6LL
Mar 6 03:01:32.922085 oslogin_cache_refresh[1947]: Refreshing passwd entry cache
Mar 6 03:01:32.928407 google_oslogin_nss_cache[1947]: oslogin_cache_refresh[1947]: Refreshing passwd entry cache
Mar 6 03:01:32.934813 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 6 03:01:32.935603 systemd[1]: Reached target network-online.target - Network is Online.
Mar 6 03:01:32.936711 (ntainerd)[1974]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 6 03:01:32.944185 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 03:01:32.960485 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 6 03:01:32.971842 jq[1962]: true
Mar 6 03:01:32.982836 google_oslogin_nss_cache[1947]: oslogin_cache_refresh[1947]: Failure getting users, quitting
Mar 6 03:01:32.986395 oslogin_cache_refresh[1947]: Failure getting users, quitting
Mar 6 03:01:32.991445 google_oslogin_nss_cache[1947]: oslogin_cache_refresh[1947]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 6 03:01:32.991445 google_oslogin_nss_cache[1947]: oslogin_cache_refresh[1947]: Refreshing group entry cache
Mar 6 03:01:32.987441 oslogin_cache_refresh[1947]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 6 03:01:32.987505 oslogin_cache_refresh[1947]: Refreshing group entry cache
Mar 6 03:01:32.994077 update_engine[1960]: I20260306 03:01:32.993165 1960 main.cc:92] Flatcar Update Engine starting
Mar 6 03:01:33.010329 google_oslogin_nss_cache[1947]: oslogin_cache_refresh[1947]: Failure getting groups, quitting
Mar 6 03:01:33.010329 google_oslogin_nss_cache[1947]: oslogin_cache_refresh[1947]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 6 03:01:33.000466 oslogin_cache_refresh[1947]: Failure getting groups, quitting
Mar 6 03:01:33.002071 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Mar 6 03:01:33.000482 oslogin_cache_refresh[1947]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 6 03:01:33.003137 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Mar 6 03:01:33.034321 extend-filesystems[1946]: Resized partition /dev/nvme0n1p9
Mar 6 03:01:33.043962 systemd[1]: Finished setup-oem.service - Setup OEM.
Mar 6 03:01:33.055564 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Mar 6 03:01:33.056984 coreos-metadata[1942]: Mar 06 03:01:33.056 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Mar 6 03:01:33.058460 coreos-metadata[1942]: Mar 06 03:01:33.058 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Mar 6 03:01:33.062606 coreos-metadata[1942]: Mar 06 03:01:33.062 INFO Fetch successful
Mar 6 03:01:33.062606 coreos-metadata[1942]: Mar 06 03:01:33.062 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Mar 6 03:01:33.063331 coreos-metadata[1942]: Mar 06 03:01:33.063 INFO Fetch successful
Mar 6 03:01:33.063331 coreos-metadata[1942]: Mar 06 03:01:33.063 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Mar 6 03:01:33.064232 coreos-metadata[1942]: Mar 06 03:01:33.064 INFO Fetch successful
Mar 6 03:01:33.064232 coreos-metadata[1942]: Mar 06 03:01:33.064 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Mar 6 03:01:33.064793 coreos-metadata[1942]: Mar 06 03:01:33.064 INFO Fetch successful
Mar 6 03:01:33.064793 coreos-metadata[1942]: Mar 06 03:01:33.064 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Mar 6 03:01:33.068031 coreos-metadata[1942]: Mar 06 03:01:33.065 INFO Fetch failed with 404: resource not found
Mar 6 03:01:33.068031 coreos-metadata[1942]: Mar 06 03:01:33.065 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Mar 6 03:01:33.078205 extend-filesystems[2006]: resize2fs 1.47.3 (8-Jul-2025)
Mar 6 03:01:33.082030 coreos-metadata[1942]: Mar 06 03:01:33.071 INFO Fetch successful
Mar 6 03:01:33.082030 coreos-metadata[1942]: Mar 06 03:01:33.071 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Mar 6 03:01:33.085734 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Mar 6 03:01:33.085825 coreos-metadata[1942]: Mar 06 03:01:33.084 INFO Fetch successful
Mar 6 03:01:33.085825 coreos-metadata[1942]: Mar 06 03:01:33.084 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Mar 6 03:01:33.088007 coreos-metadata[1942]: Mar 06 03:01:33.087 INFO Fetch successful
Mar 6 03:01:33.088007 coreos-metadata[1942]: Mar 06 03:01:33.087 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Mar 6 03:01:33.087684 dbus-daemon[1943]: [system] SELinux support is enabled
Mar 6 03:01:33.089159 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 6 03:01:33.093411 jq[1991]: true
Mar 6 03:01:33.093591 update_engine[1960]: I20260306 03:01:33.091471 1960 update_check_scheduler.cc:74] Next update check in 4m21s
Mar 6 03:01:33.089958 dbus-daemon[1943]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1587 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Mar 6 03:01:33.093929 ntpd[1949]: ntpd 4.2.8p18@1.4062-o Thu Mar 5 21:56:02 UTC 2026 (1): Starting
Mar 6 03:01:33.094001 ntpd[1949]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 6 03:01:33.094324 ntpd[1949]: 6 Mar 03:01:33 ntpd[1949]: ntpd 4.2.8p18@1.4062-o Thu Mar 5 21:56:02 UTC 2026 (1): Starting
Mar 6 03:01:33.094324 ntpd[1949]: 6 Mar 03:01:33 ntpd[1949]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 6 03:01:33.094324 ntpd[1949]: 6 Mar 03:01:33 ntpd[1949]: ----------------------------------------------------
Mar 6 03:01:33.094324 ntpd[1949]: 6 Mar 03:01:33 ntpd[1949]: ntp-4 is maintained by Network Time Foundation,
Mar 6 03:01:33.094324 ntpd[1949]: 6 Mar 03:01:33 ntpd[1949]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 6 03:01:33.094324 ntpd[1949]: 6 Mar 03:01:33 ntpd[1949]: corporation. Support and training for ntp-4 are
Mar 6 03:01:33.094324 ntpd[1949]: 6 Mar 03:01:33 ntpd[1949]: available at https://www.nwtime.org/support
Mar 6 03:01:33.094324 ntpd[1949]: 6 Mar 03:01:33 ntpd[1949]: ----------------------------------------------------
Mar 6 03:01:33.094012 ntpd[1949]: ----------------------------------------------------
Mar 6 03:01:33.094022 ntpd[1949]: ntp-4 is maintained by Network Time Foundation,
Mar 6 03:01:33.094031 ntpd[1949]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 6 03:01:33.094040 ntpd[1949]: corporation. Support and training for ntp-4 are
Mar 6 03:01:33.094049 ntpd[1949]: available at https://www.nwtime.org/support
Mar 6 03:01:33.094059 ntpd[1949]: ----------------------------------------------------
Mar 6 03:01:33.097202 systemd[1]: motdgen.service: Deactivated successfully.
Mar 6 03:01:33.098445 coreos-metadata[1942]: Mar 06 03:01:33.098 INFO Fetch successful
Mar 6 03:01:33.098445 coreos-metadata[1942]: Mar 06 03:01:33.098 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Mar 6 03:01:33.097518 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 6 03:01:33.100139 coreos-metadata[1942]: Mar 06 03:01:33.098 INFO Fetch successful
Mar 6 03:01:33.105664 ntpd[1949]: proto: precision = 0.070 usec (-24)
Mar 6 03:01:33.106610 ntpd[1949]: 6 Mar 03:01:33 ntpd[1949]: proto: precision = 0.070 usec (-24)
Mar 6 03:01:33.112858 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 6 03:01:33.112916 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 6 03:01:33.113620 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 6 03:01:33.113648 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 6 03:01:33.115949 ntpd[1949]: basedate set to 2026-02-21
Mar 6 03:01:33.117622 ntpd[1949]: 6 Mar 03:01:33 ntpd[1949]: basedate set to 2026-02-21
Mar 6 03:01:33.117622 ntpd[1949]: 6 Mar 03:01:33 ntpd[1949]: gps base set to 2026-02-22 (week 2407)
Mar 6 03:01:33.117622 ntpd[1949]: 6 Mar 03:01:33 ntpd[1949]: Listen and drop on 0 v6wildcard [::]:123
Mar 6 03:01:33.117622 ntpd[1949]: 6 Mar 03:01:33 ntpd[1949]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 6 03:01:33.117622 ntpd[1949]: 6 Mar 03:01:33 ntpd[1949]: Listen normally on 2 lo 127.0.0.1:123
Mar 6 03:01:33.115977 ntpd[1949]: gps base set to 2026-02-22 (week 2407)
Mar 6 03:01:33.116122 ntpd[1949]: Listen and drop on 0 v6wildcard [::]:123
Mar 6 03:01:33.116149 ntpd[1949]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 6 03:01:33.116366 ntpd[1949]: Listen normally on 2 lo 127.0.0.1:123
Mar 6 03:01:33.121668 tar[1980]: linux-amd64/LICENSE
Mar 6 03:01:33.121971 ntpd[1949]: 6 Mar 03:01:33 ntpd[1949]: Listen normally on 3 eth0 172.31.19.55:123
Mar 6 03:01:33.121971 ntpd[1949]: 6 Mar 03:01:33 ntpd[1949]: Listen normally on 4 lo [::1]:123
Mar 6 03:01:33.121971 ntpd[1949]: 6 Mar 03:01:33 ntpd[1949]: Listen normally on 5 eth0 [fe80::41d:1ff:fe23:10fb%2]:123
Mar 6 03:01:33.121971 ntpd[1949]: 6 Mar 03:01:33 ntpd[1949]: Listening on routing socket on fd #22 for interface updates
Mar 6 03:01:33.121334 ntpd[1949]: Listen normally on 3 eth0 172.31.19.55:123
Mar 6 03:01:33.121390 ntpd[1949]: Listen normally on 4 lo [::1]:123
Mar 6 03:01:33.123177 tar[1980]: linux-amd64/helm
Mar 6 03:01:33.121418 ntpd[1949]: Listen normally on 5 eth0 [fe80::41d:1ff:fe23:10fb%2]:123
Mar 6 03:01:33.121446 ntpd[1949]: Listening on routing socket on fd #22 for interface updates
Mar 6 03:01:33.130860 systemd[1]: Started update-engine.service - Update Engine.
Mar 6 03:01:33.146272 dbus-daemon[1943]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 6 03:01:33.153571 ntpd[1949]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 6 03:01:33.153792 ntpd[1949]: 6 Mar 03:01:33 ntpd[1949]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 6 03:01:33.153792 ntpd[1949]: 6 Mar 03:01:33 ntpd[1949]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 6 03:01:33.153615 ntpd[1949]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 6 03:01:33.196307 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Mar 6 03:01:33.216707 extend-filesystems[2006]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Mar 6 03:01:33.216707 extend-filesystems[2006]: old_desc_blocks = 1, new_desc_blocks = 2
Mar 6 03:01:33.216707 extend-filesystems[2006]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Mar 6 03:01:33.242883 extend-filesystems[1946]: Resized filesystem in /dev/nvme0n1p9
Mar 6 03:01:33.221604 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 6 03:01:33.224418 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 6 03:01:33.224747 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 6 03:01:33.230443 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 6 03:01:33.240290 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Mar 6 03:01:33.313833 bash[2044]: Updated "/home/core/.ssh/authorized_keys"
Mar 6 03:01:33.317144 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 6 03:01:33.335546 systemd[1]: Starting sshkeys.service...
Mar 6 03:01:33.393439 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 6 03:01:33.394387 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 6 03:01:33.423752 systemd-logind[1959]: Watching system buttons on /dev/input/event2 (Power Button)
Mar 6 03:01:33.423783 systemd-logind[1959]: Watching system buttons on /dev/input/event3 (Sleep Button)
Mar 6 03:01:33.423808 systemd-logind[1959]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 6 03:01:33.426396 systemd-logind[1959]: New seat seat0.
Mar 6 03:01:33.430329 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 6 03:01:33.439289 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 6 03:01:33.476292 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 6 03:01:33.565684 amazon-ssm-agent[2004]: Initializing new seelog logger
Mar 6 03:01:33.570585 amazon-ssm-agent[2004]: New Seelog Logger Creation Complete
Mar 6 03:01:33.570585 amazon-ssm-agent[2004]: 2026/03/06 03:01:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 6 03:01:33.570585 amazon-ssm-agent[2004]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 6 03:01:33.571317 amazon-ssm-agent[2004]: 2026/03/06 03:01:33 processing appconfig overrides
Mar 6 03:01:33.573391 amazon-ssm-agent[2004]: 2026/03/06 03:01:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 6 03:01:33.573391 amazon-ssm-agent[2004]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 6 03:01:33.573391 amazon-ssm-agent[2004]: 2026/03/06 03:01:33 processing appconfig overrides
Mar 6 03:01:33.573391 amazon-ssm-agent[2004]: 2026/03/06 03:01:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 6 03:01:33.573391 amazon-ssm-agent[2004]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 6 03:01:33.574521 amazon-ssm-agent[2004]: 2026/03/06 03:01:33 processing appconfig overrides
Mar 6 03:01:33.576500 amazon-ssm-agent[2004]: 2026-03-06 03:01:33.5715 INFO Proxy environment variables:
Mar 6 03:01:33.589755 amazon-ssm-agent[2004]: 2026/03/06 03:01:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 6 03:01:33.589755 amazon-ssm-agent[2004]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 6 03:01:33.589755 amazon-ssm-agent[2004]: 2026/03/06 03:01:33 processing appconfig overrides
Mar 6 03:01:33.676828 coreos-metadata[2060]: Mar 06 03:01:33.675 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Mar 6 03:01:33.676828 coreos-metadata[2060]: Mar 06 03:01:33.676 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Mar 6 03:01:33.677684 amazon-ssm-agent[2004]: 2026-03-06 03:01:33.5718 INFO https_proxy:
Mar 6 03:01:33.680232 coreos-metadata[2060]: Mar 06 03:01:33.677 INFO Fetch successful
Mar 6 03:01:33.680232 coreos-metadata[2060]: Mar 06 03:01:33.677 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Mar 6 03:01:33.680232 coreos-metadata[2060]: Mar 06 03:01:33.678 INFO Fetch successful
Mar 6 03:01:33.680657 unknown[2060]: wrote ssh authorized keys file for user: core
Mar 6 03:01:33.740196 update-ssh-keys[2105]: Updated "/home/core/.ssh/authorized_keys"
Mar 6 03:01:33.742829 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 6 03:01:33.748218 systemd[1]: Finished sshkeys.service.
Mar 6 03:01:33.780791 amazon-ssm-agent[2004]: 2026-03-06 03:01:33.5718 INFO http_proxy:
Mar 6 03:01:33.806123 locksmithd[2016]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 6 03:01:33.880999 amazon-ssm-agent[2004]: 2026-03-06 03:01:33.5718 INFO no_proxy:
Mar 6 03:01:33.881547 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Mar 6 03:01:33.887727 dbus-daemon[1943]: [system] Successfully activated service 'org.freedesktop.hostname1'
Mar 6 03:01:33.890585 dbus-daemon[1943]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2043 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Mar 6 03:01:33.902195 systemd[1]: Starting polkit.service - Authorization Manager...
Mar 6 03:01:33.981991 amazon-ssm-agent[2004]: 2026-03-06 03:01:33.5719 INFO Checking if agent identity type OnPrem can be assumed
Mar 6 03:01:34.042303 containerd[1974]: time="2026-03-06T03:01:34Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 6 03:01:34.056471 containerd[1974]: time="2026-03-06T03:01:34.056424259Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Mar 6 03:01:34.080399 amazon-ssm-agent[2004]: 2026-03-06 03:01:33.5722 INFO Checking if agent identity type EC2 can be assumed
Mar 6 03:01:34.116737 containerd[1974]: time="2026-03-06T03:01:34.116675828Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.713µs"
Mar 6 03:01:34.116737 containerd[1974]: time="2026-03-06T03:01:34.116730557Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 6 03:01:34.116890 containerd[1974]: time="2026-03-06T03:01:34.116759086Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 6 03:01:34.118440 containerd[1974]: time="2026-03-06T03:01:34.116946407Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 6 03:01:34.118440 containerd[1974]: time="2026-03-06T03:01:34.116974505Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 6 03:01:34.118440 containerd[1974]: time="2026-03-06T03:01:34.117034237Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 6 03:01:34.118440 containerd[1974]: time="2026-03-06T03:01:34.117116261Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 6 03:01:34.118440 containerd[1974]: time="2026-03-06T03:01:34.117131618Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 6 03:01:34.124865 containerd[1974]: time="2026-03-06T03:01:34.123453540Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 6 03:01:34.124865 containerd[1974]: time="2026-03-06T03:01:34.123494576Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 6 03:01:34.124865 containerd[1974]: time="2026-03-06T03:01:34.123514735Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 6 03:01:34.124865 containerd[1974]: time="2026-03-06T03:01:34.123527975Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 6 03:01:34.124865 containerd[1974]: time="2026-03-06T03:01:34.123678746Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Mar 6 03:01:34.124865 containerd[1974]: time="2026-03-06T03:01:34.123993120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 6 03:01:34.124865 containerd[1974]: time="2026-03-06T03:01:34.124073706Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 6 03:01:34.124865 containerd[1974]: time="2026-03-06T03:01:34.124091947Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Mar 6 03:01:34.124865 containerd[1974]: time="2026-03-06T03:01:34.124130148Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Mar 6 03:01:34.124865 containerd[1974]: time="2026-03-06T03:01:34.124479964Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Mar 6 03:01:34.124865 containerd[1974]: time="2026-03-06T03:01:34.124555524Z" level=info msg="metadata content store policy set" policy=shared
Mar 6 03:01:34.133418 containerd[1974]: time="2026-03-06T03:01:34.133348216Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 6 03:01:34.147739 containerd[1974]: time="2026-03-06T03:01:34.143440014Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Mar 6 03:01:34.147739 containerd[1974]: time="2026-03-06T03:01:34.143493484Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Mar 6 03:01:34.147739 containerd[1974]: time="2026-03-06T03:01:34.143512045Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Mar 6 03:01:34.147739 containerd[1974]: time="2026-03-06T03:01:34.143530100Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Mar 6 03:01:34.147739 containerd[1974]: time="2026-03-06T03:01:34.143545431Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Mar 6 03:01:34.147739 containerd[1974]: time="2026-03-06T03:01:34.143561084Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Mar 6 03:01:34.147739 containerd[1974]: time="2026-03-06T03:01:34.143578174Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Mar 6 03:01:34.147739 containerd[1974]: time="2026-03-06T03:01:34.143593149Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Mar 6 03:01:34.147739 containerd[1974]: time="2026-03-06T03:01:34.143612548Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Mar 6 03:01:34.147739 containerd[1974]: time="2026-03-06T03:01:34.143632997Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Mar 6 03:01:34.147739 containerd[1974]: time="2026-03-06T03:01:34.143651054Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Mar 6 03:01:34.147739 containerd[1974]: time="2026-03-06T03:01:34.143823389Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Mar 6 03:01:34.147739 containerd[1974]: time="2026-03-06T03:01:34.143845329Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Mar 6 03:01:34.147739 containerd[1974]: time="2026-03-06T03:01:34.143865156Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Mar 6 03:01:34.148334 containerd[1974]: time="2026-03-06T03:01:34.143879961Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Mar 6 03:01:34.148334 containerd[1974]: time="2026-03-06T03:01:34.143894575Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Mar 6 03:01:34.148334 containerd[1974]: time="2026-03-06T03:01:34.143908422Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Mar 6 03:01:34.148334 containerd[1974]: time="2026-03-06T03:01:34.143924417Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Mar 6 03:01:34.148334 containerd[1974]: time="2026-03-06T03:01:34.143946211Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Mar 6 03:01:34.148334 containerd[1974]: time="2026-03-06T03:01:34.143972362Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Mar 6 03:01:34.148334 containerd[1974]: time="2026-03-06T03:01:34.143987581Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Mar 6 03:01:34.148334 containerd[1974]: time="2026-03-06T03:01:34.144008229Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Mar 6 03:01:34.148334 containerd[1974]: time="2026-03-06T03:01:34.144073993Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Mar 6 03:01:34.148334 containerd[1974]: time="2026-03-06T03:01:34.144090905Z" level=info msg="Start snapshots syncer"
Mar 6 03:01:34.148334 containerd[1974]: time="2026-03-06T03:01:34.144111133Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Mar 6 03:01:34.148777 containerd[1974]: time="2026-03-06T03:01:34.146533925Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Mar 6 03:01:34.148777 containerd[1974]: time="2026-03-06T03:01:34.146630443Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Mar 6 03:01:34.148968 containerd[1974]: time="2026-03-06T03:01:34.146732232Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Mar 6 03:01:34.148968 containerd[1974]: time="2026-03-06T03:01:34.146895106Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Mar 6 03:01:34.148968 containerd[1974]: time="2026-03-06T03:01:34.146923067Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Mar 6 03:01:34.148968 containerd[1974]: time="2026-03-06T03:01:34.146938069Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Mar 6 03:01:34.148968 containerd[1974]: time="2026-03-06T03:01:34.146954263Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Mar 6 03:01:34.148968 containerd[1974]: time="2026-03-06T03:01:34.146970872Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Mar 6 03:01:34.148968 containerd[1974]: time="2026-03-06T03:01:34.146985379Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Mar 6 03:01:34.148968 containerd[1974]: time="2026-03-06T03:01:34.146999571Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Mar 6 03:01:34.148968 containerd[1974]: time="2026-03-06T03:01:34.147037595Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Mar 6 03:01:34.148968 containerd[1974]: time="2026-03-06T03:01:34.147060613Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Mar 6 03:01:34.148968 containerd[1974]: time="2026-03-06T03:01:34.147075483Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Mar 6 03:01:34.148968 containerd[1974]: time="2026-03-06T03:01:34.147112189Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 6 03:01:34.148968 containerd[1974]: time="2026-03-06T03:01:34.147132575Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 6 03:01:34.148968 containerd[1974]: time="2026-03-06T03:01:34.147144886Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 6 03:01:34.166045 containerd[1974]: time="2026-03-06T03:01:34.147157925Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 6 03:01:34.166045 containerd[1974]: time="2026-03-06T03:01:34.147169463Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Mar 6 03:01:34.166045 containerd[1974]: time="2026-03-06T03:01:34.147182558Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Mar 6 03:01:34.166045 containerd[1974]: time="2026-03-06T03:01:34.147216008Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Mar 6 03:01:34.166045 containerd[1974]: time="2026-03-06T03:01:34.147237041Z" level=info msg="runtime interface created"
Mar 6 03:01:34.166045 containerd[1974]: time="2026-03-06T03:01:34.147245866Z" level=info msg="created NRI interface"
Mar 6 03:01:34.166045 containerd[1974]: time="2026-03-06T03:01:34.147258253Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Mar 6 03:01:34.166045 containerd[1974]: time="2026-03-06T03:01:34.164350429Z" level=info msg="Connect containerd service"
Mar 6 03:01:34.166045 containerd[1974]: time="2026-03-06T03:01:34.164418171Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 6 03:01:34.166045 containerd[1974]: time="2026-03-06T03:01:34.165360180Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 6 03:01:34.180835 amazon-ssm-agent[2004]: 2026-03-06 03:01:33.8469 INFO Agent will take identity from EC2
Mar 6 03:01:34.282437 amazon-ssm-agent[2004]: 2026-03-06 03:01:33.8505 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0
Mar 6 03:01:34.380837 polkitd[2138]: Started polkitd version 126
Mar 6 03:01:34.382199 amazon-ssm-agent[2004]: 2026-03-06 03:01:33.8505 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Mar 6 03:01:34.403988 polkitd[2138]: Loading rules from directory /etc/polkit-1/rules.d
Mar 6 03:01:34.412519 polkitd[2138]: Loading rules from directory /run/polkit-1/rules.d
Mar 6 03:01:34.412715 polkitd[2138]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Mar 6 03:01:34.416817 polkitd[2138]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Mar 6 03:01:34.416895 polkitd[2138]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Mar 6 03:01:34.416966 polkitd[2138]: Loading rules from directory /usr/share/polkit-1/rules.d
Mar 6 03:01:34.422186 polkitd[2138]: Finished loading, compiling and executing 2 rules
Mar 6 03:01:34.422537 systemd[1]: Started polkit.service - Authorization Manager.
Mar 6 03:01:34.423722 dbus-daemon[1943]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Mar 6 03:01:34.426612 polkitd[2138]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Mar 6 03:01:34.462673 systemd-hostnamed[2043]: Hostname set to (transient)
Mar 6 03:01:34.462951 systemd-resolved[1895]: System hostname changed to 'ip-172-31-19-55'.
Mar 6 03:01:34.492299 amazon-ssm-agent[2004]: 2026-03-06 03:01:33.8505 INFO [amazon-ssm-agent] Starting Core Agent
Mar 6 03:01:34.592402 amazon-ssm-agent[2004]: 2026-03-06 03:01:33.8505 INFO [amazon-ssm-agent] Registrar detected. Attempting registration
Mar 6 03:01:34.644024 containerd[1974]: time="2026-03-06T03:01:34.643829010Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 6 03:01:34.644024 containerd[1974]: time="2026-03-06T03:01:34.643909510Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 6 03:01:34.644024 containerd[1974]: time="2026-03-06T03:01:34.643940203Z" level=info msg="Start subscribing containerd event"
Mar 6 03:01:34.644024 containerd[1974]: time="2026-03-06T03:01:34.643973187Z" level=info msg="Start recovering state"
Mar 6 03:01:34.644258 containerd[1974]: time="2026-03-06T03:01:34.644077012Z" level=info msg="Start event monitor"
Mar 6 03:01:34.644258 containerd[1974]: time="2026-03-06T03:01:34.644095376Z" level=info msg="Start cni network conf syncer for default"
Mar 6 03:01:34.644258 containerd[1974]: time="2026-03-06T03:01:34.644106617Z" level=info msg="Start streaming server"
Mar 6 03:01:34.644258 containerd[1974]: time="2026-03-06T03:01:34.644117528Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Mar 6 03:01:34.644258 containerd[1974]: time="2026-03-06T03:01:34.644127097Z" level=info msg="runtime interface starting up..."
Mar 6 03:01:34.644258 containerd[1974]: time="2026-03-06T03:01:34.644134398Z" level=info msg="starting plugins..."
Mar 6 03:01:34.644258 containerd[1974]: time="2026-03-06T03:01:34.644149319Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Mar 6 03:01:34.649559 systemd[1]: Started containerd.service - containerd container runtime.
Mar 6 03:01:34.650474 containerd[1974]: time="2026-03-06T03:01:34.650253631Z" level=info msg="containerd successfully booted in 0.610460s"
Mar 6 03:01:34.693442 amazon-ssm-agent[2004]: 2026-03-06 03:01:33.8505 INFO [Registrar] Starting registrar module
Mar 6 03:01:34.758046 tar[1980]: linux-amd64/README.md
Mar 6 03:01:34.780733 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 6 03:01:34.793646 amazon-ssm-agent[2004]: 2026-03-06 03:01:33.8524 INFO [EC2Identity] Checking disk for registration info
Mar 6 03:01:34.893943 amazon-ssm-agent[2004]: 2026-03-06 03:01:33.8524 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration
Mar 6 03:01:34.994942 amazon-ssm-agent[2004]: 2026-03-06 03:01:33.8524 INFO [EC2Identity] Generating registration keypair
Mar 6 03:01:35.055261 amazon-ssm-agent[2004]: 2026/03/06 03:01:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 6 03:01:35.055469 amazon-ssm-agent[2004]: 2026/03/06 03:01:35 processing appconfig overrides
Mar 6 03:01:35.087379 amazon-ssm-agent[2004]: 2026-03-06 03:01:35.0098 INFO [EC2Identity] Checking write access before registering
Mar 6 03:01:35.087674 amazon-ssm-agent[2004]: 2026-03-06 03:01:35.0104 INFO [EC2Identity] Registering EC2 instance with Systems Manager
Mar 6 03:01:35.088178 amazon-ssm-agent[2004]: 2026-03-06 03:01:35.0549 INFO [EC2Identity] EC2 registration was successful.
Mar 6 03:01:35.088178 amazon-ssm-agent[2004]: 2026-03-06 03:01:35.0550 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup.
Mar 6 03:01:35.088348 amazon-ssm-agent[2004]: 2026-03-06 03:01:35.0551 INFO [CredentialRefresher] credentialRefresher has started
Mar 6 03:01:35.088348 amazon-ssm-agent[2004]: 2026-03-06 03:01:35.0551 INFO [CredentialRefresher] Starting credentials refresher loop
Mar 6 03:01:35.088348 amazon-ssm-agent[2004]: 2026-03-06 03:01:35.0866 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Mar 6 03:01:35.088348 amazon-ssm-agent[2004]: 2026-03-06 03:01:35.0868 INFO [CredentialRefresher] Credentials ready
Mar 6 03:01:35.094743 amazon-ssm-agent[2004]: 2026-03-06 03:01:35.0885 INFO [CredentialRefresher] Next credential rotation will be in 29.99996833855 minutes
Mar 6 03:01:35.150393 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 6 03:01:35.309130 sshd_keygen[1972]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 6 03:01:35.332750 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 6 03:01:35.335386 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 6 03:01:35.342699 systemd[1]: Started sshd@0-172.31.19.55:22-68.220.241.50:55830.service - OpenSSH per-connection server daemon (68.220.241.50:55830).
Mar 6 03:01:35.363335 systemd[1]: issuegen.service: Deactivated successfully.
Mar 6 03:01:35.363767 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 6 03:01:35.367652 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 6 03:01:35.388799 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 6 03:01:35.398797 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 6 03:01:35.403752 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 6 03:01:35.405482 systemd[1]: Reached target getty.target - Login Prompts.
Mar 6 03:01:35.836510 sshd[2200]: Accepted publickey for core from 68.220.241.50 port 55830 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY
Mar 6 03:01:35.839370 sshd-session[2200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:01:35.847171 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 6 03:01:35.848928 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 6 03:01:35.864142 systemd-logind[1959]: New session 1 of user core.
Mar 6 03:01:35.872954 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 6 03:01:35.876772 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 6 03:01:35.891120 (systemd)[2212]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 6 03:01:35.894057 systemd-logind[1959]: New session c1 of user core.
Mar 6 03:01:36.055205 systemd[2212]: Queued start job for default target default.target.
Mar 6 03:01:36.065812 systemd[2212]: Created slice app.slice - User Application Slice.
Mar 6 03:01:36.065854 systemd[2212]: Reached target paths.target - Paths.
Mar 6 03:01:36.065910 systemd[2212]: Reached target timers.target - Timers.
Mar 6 03:01:36.068449 systemd[2212]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 6 03:01:36.080737 systemd[2212]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 6 03:01:36.080894 systemd[2212]: Reached target sockets.target - Sockets.
Mar 6 03:01:36.080961 systemd[2212]: Reached target basic.target - Basic System.
Mar 6 03:01:36.081156 systemd[2212]: Reached target default.target - Main User Target.
Mar 6 03:01:36.081203 systemd[2212]: Startup finished in 179ms.
Mar 6 03:01:36.081206 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 6 03:01:36.086454 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 6 03:01:36.101113 amazon-ssm-agent[2004]: 2026-03-06 03:01:36.1007 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Mar 6 03:01:36.205299 amazon-ssm-agent[2004]: 2026-03-06 03:01:36.1026 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2223) started
Mar 6 03:01:36.304857 amazon-ssm-agent[2004]: 2026-03-06 03:01:36.1027 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Mar 6 03:01:36.339692 systemd[1]: Started sshd@1-172.31.19.55:22-68.220.241.50:55844.service - OpenSSH per-connection server daemon (68.220.241.50:55844).
Mar 6 03:01:36.776033 sshd[2236]: Accepted publickey for core from 68.220.241.50 port 55844 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY
Mar 6 03:01:36.777675 sshd-session[2236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:01:36.783840 systemd-logind[1959]: New session 2 of user core.
Mar 6 03:01:36.794559 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 6 03:01:37.015003 sshd[2239]: Connection closed by 68.220.241.50 port 55844
Mar 6 03:01:37.016509 sshd-session[2236]: pam_unix(sshd:session): session closed for user core
Mar 6 03:01:37.020796 systemd-logind[1959]: Session 2 logged out. Waiting for processes to exit.
Mar 6 03:01:37.021908 systemd[1]: sshd@1-172.31.19.55:22-68.220.241.50:55844.service: Deactivated successfully.
Mar 6 03:01:37.024040 systemd[1]: session-2.scope: Deactivated successfully.
Mar 6 03:01:37.025725 systemd-logind[1959]: Removed session 2.
Mar 6 03:01:37.107094 systemd[1]: Started sshd@2-172.31.19.55:22-68.220.241.50:55860.service - OpenSSH per-connection server daemon (68.220.241.50:55860).
Mar 6 03:01:37.545203 sshd[2245]: Accepted publickey for core from 68.220.241.50 port 55860 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY
Mar 6 03:01:37.546585 sshd-session[2245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:01:37.551350 systemd-logind[1959]: New session 3 of user core.
Mar 6 03:01:37.559533 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 6 03:01:37.784398 sshd[2248]: Connection closed by 68.220.241.50 port 55860
Mar 6 03:01:37.786347 sshd-session[2245]: pam_unix(sshd:session): session closed for user core
Mar 6 03:01:37.790468 systemd-logind[1959]: Session 3 logged out. Waiting for processes to exit.
Mar 6 03:01:37.791252 systemd[1]: sshd@2-172.31.19.55:22-68.220.241.50:55860.service: Deactivated successfully.
Mar 6 03:01:37.793918 systemd[1]: session-3.scope: Deactivated successfully.
Mar 6 03:01:37.795692 systemd-logind[1959]: Removed session 3.
Mar 6 03:01:39.051947 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 03:01:39.054405 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 6 03:01:39.055819 systemd[1]: Startup finished in 2.636s (kernel) + 7.816s (initrd) + 10.659s (userspace) = 21.112s.
Mar 6 03:01:39.064537 (kubelet)[2258]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 6 03:01:41.294458 systemd-resolved[1895]: Clock change detected. Flushing caches.
Mar 6 03:01:42.180001 kubelet[2258]: E0306 03:01:42.179942 2258 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 6 03:01:42.182869 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 6 03:01:42.183092 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 6 03:01:42.183697 systemd[1]: kubelet.service: Consumed 1.015s CPU time, 258.8M memory peak.
Mar 6 03:01:49.085178 systemd[1]: Started sshd@3-172.31.19.55:22-68.220.241.50:43348.service - OpenSSH per-connection server daemon (68.220.241.50:43348).
Mar 6 03:01:49.520646 sshd[2270]: Accepted publickey for core from 68.220.241.50 port 43348 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY
Mar 6 03:01:49.522213 sshd-session[2270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:01:49.528130 systemd-logind[1959]: New session 4 of user core.
Mar 6 03:01:49.534342 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 6 03:01:49.760017 sshd[2273]: Connection closed by 68.220.241.50 port 43348
Mar 6 03:01:49.762045 sshd-session[2270]: pam_unix(sshd:session): session closed for user core
Mar 6 03:01:49.766362 systemd[1]: sshd@3-172.31.19.55:22-68.220.241.50:43348.service: Deactivated successfully.
Mar 6 03:01:49.768428 systemd[1]: session-4.scope: Deactivated successfully.
Mar 6 03:01:49.769454 systemd-logind[1959]: Session 4 logged out. Waiting for processes to exit.
Mar 6 03:01:49.771187 systemd-logind[1959]: Removed session 4.
Mar 6 03:01:49.848694 systemd[1]: Started sshd@4-172.31.19.55:22-68.220.241.50:43356.service - OpenSSH per-connection server daemon (68.220.241.50:43356).
Mar 6 03:01:50.283606 sshd[2279]: Accepted publickey for core from 68.220.241.50 port 43356 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY
Mar 6 03:01:50.285045 sshd-session[2279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:01:50.291154 systemd-logind[1959]: New session 5 of user core.
Mar 6 03:01:50.299372 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 6 03:01:50.516929 sshd[2282]: Connection closed by 68.220.241.50 port 43356
Mar 6 03:01:50.517682 sshd-session[2279]: pam_unix(sshd:session): session closed for user core
Mar 6 03:01:50.522441 systemd[1]: sshd@4-172.31.19.55:22-68.220.241.50:43356.service: Deactivated successfully.
Mar 6 03:01:50.524371 systemd[1]: session-5.scope: Deactivated successfully.
Mar 6 03:01:50.525461 systemd-logind[1959]: Session 5 logged out. Waiting for processes to exit.
Mar 6 03:01:50.526967 systemd-logind[1959]: Removed session 5.
Mar 6 03:01:50.610342 systemd[1]: Started sshd@5-172.31.19.55:22-68.220.241.50:43364.service - OpenSSH per-connection server daemon (68.220.241.50:43364).
Mar 6 03:01:51.055631 sshd[2288]: Accepted publickey for core from 68.220.241.50 port 43364 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY
Mar 6 03:01:51.057297 sshd-session[2288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:01:51.063209 systemd-logind[1959]: New session 6 of user core.
Mar 6 03:01:51.068338 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 6 03:01:51.295206 sshd[2291]: Connection closed by 68.220.241.50 port 43364
Mar 6 03:01:51.297215 sshd-session[2288]: pam_unix(sshd:session): session closed for user core
Mar 6 03:01:51.301411 systemd[1]: sshd@5-172.31.19.55:22-68.220.241.50:43364.service: Deactivated successfully.
Mar 6 03:01:51.303293 systemd[1]: session-6.scope: Deactivated successfully.
Mar 6 03:01:51.304390 systemd-logind[1959]: Session 6 logged out. Waiting for processes to exit.
Mar 6 03:01:51.306087 systemd-logind[1959]: Removed session 6.
Mar 6 03:01:51.383507 systemd[1]: Started sshd@6-172.31.19.55:22-68.220.241.50:43372.service - OpenSSH per-connection server daemon (68.220.241.50:43372).
Mar 6 03:01:51.819554 sshd[2297]: Accepted publickey for core from 68.220.241.50 port 43372 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY
Mar 6 03:01:51.821032 sshd-session[2297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:01:51.827132 systemd-logind[1959]: New session 7 of user core.
Mar 6 03:01:51.837345 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 6 03:01:52.024694 sudo[2301]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 6 03:01:52.025158 sudo[2301]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 6 03:01:52.040449 sudo[2301]: pam_unix(sudo:session): session closed for user root
Mar 6 03:01:52.118616 sshd[2300]: Connection closed by 68.220.241.50 port 43372
Mar 6 03:01:52.119597 sshd-session[2297]: pam_unix(sshd:session): session closed for user core
Mar 6 03:01:52.123849 systemd[1]: sshd@6-172.31.19.55:22-68.220.241.50:43372.service: Deactivated successfully.
Mar 6 03:01:52.126131 systemd[1]: session-7.scope: Deactivated successfully.
Mar 6 03:01:52.127748 systemd-logind[1959]: Session 7 logged out. Waiting for processes to exit.
Mar 6 03:01:52.129664 systemd-logind[1959]: Removed session 7.
Mar 6 03:01:52.208825 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 6 03:01:52.210875 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 03:01:52.214373 systemd[1]: Started sshd@7-172.31.19.55:22-68.220.241.50:38968.service - OpenSSH per-connection server daemon (68.220.241.50:38968).
Mar 6 03:01:52.463473 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 03:01:52.477579 (kubelet)[2318]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 6 03:01:52.522431 kubelet[2318]: E0306 03:01:52.522350 2318 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 6 03:01:52.526380 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 6 03:01:52.526565 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 6 03:01:52.526932 systemd[1]: kubelet.service: Consumed 184ms CPU time, 110.4M memory peak.
Mar 6 03:01:52.663017 sshd[2308]: Accepted publickey for core from 68.220.241.50 port 38968 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY
Mar 6 03:01:52.664552 sshd-session[2308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:01:52.670819 systemd-logind[1959]: New session 8 of user core.
Mar 6 03:01:52.677372 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 6 03:01:52.828175 sudo[2328]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 6 03:01:52.828550 sudo[2328]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 6 03:01:52.835912 sudo[2328]: pam_unix(sudo:session): session closed for user root
Mar 6 03:01:52.841753 sudo[2327]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 6 03:01:52.842152 sudo[2327]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 6 03:01:52.853122 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 6 03:01:52.904247 augenrules[2350]: No rules
Mar 6 03:01:52.905619 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 6 03:01:52.906004 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 6 03:01:52.907306 sudo[2327]: pam_unix(sudo:session): session closed for user root
Mar 6 03:01:52.987298 sshd[2326]: Connection closed by 68.220.241.50 port 38968
Mar 6 03:01:52.988341 sshd-session[2308]: pam_unix(sshd:session): session closed for user core
Mar 6 03:01:52.993192 systemd[1]: sshd@7-172.31.19.55:22-68.220.241.50:38968.service: Deactivated successfully.
Mar 6 03:01:52.995202 systemd[1]: session-8.scope: Deactivated successfully.
Mar 6 03:01:52.996352 systemd-logind[1959]: Session 8 logged out. Waiting for processes to exit.
Mar 6 03:01:52.997999 systemd-logind[1959]: Removed session 8.
Mar 6 03:01:53.087931 systemd[1]: Started sshd@8-172.31.19.55:22-68.220.241.50:38976.service - OpenSSH per-connection server daemon (68.220.241.50:38976).
Mar 6 03:01:53.520963 sshd[2359]: Accepted publickey for core from 68.220.241.50 port 38976 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY
Mar 6 03:01:53.522072 sshd-session[2359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:01:53.527934 systemd-logind[1959]: New session 9 of user core.
Mar 6 03:01:53.537329 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 6 03:01:53.681434 sudo[2363]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 6 03:01:53.681801 sudo[2363]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 6 03:01:54.298710 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 6 03:01:54.310677 (dockerd)[2381]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 6 03:01:54.816637 dockerd[2381]: time="2026-03-06T03:01:54.816070928Z" level=info msg="Starting up"
Mar 6 03:01:54.817229 dockerd[2381]: time="2026-03-06T03:01:54.817197953Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Mar 6 03:01:54.829181 dockerd[2381]: time="2026-03-06T03:01:54.829132420Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Mar 6 03:01:54.911132 dockerd[2381]: time="2026-03-06T03:01:54.911059883Z" level=info msg="Loading containers: start."
Mar 6 03:01:54.925128 kernel: Initializing XFRM netlink socket
Mar 6 03:01:55.200071 (udev-worker)[2402]: Network interface NamePolicy= disabled on kernel command line.
Mar 6 03:01:55.247817 systemd-networkd[1587]: docker0: Link UP
Mar 6 03:01:55.258533 dockerd[2381]: time="2026-03-06T03:01:55.258472835Z" level=info msg="Loading containers: done."
Mar 6 03:01:55.275650 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck10353687-merged.mount: Deactivated successfully.
Mar 6 03:01:55.284486 dockerd[2381]: time="2026-03-06T03:01:55.284396879Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 6 03:01:55.284683 dockerd[2381]: time="2026-03-06T03:01:55.284547754Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Mar 6 03:01:55.284683 dockerd[2381]: time="2026-03-06T03:01:55.284661406Z" level=info msg="Initializing buildkit"
Mar 6 03:01:55.325250 dockerd[2381]: time="2026-03-06T03:01:55.325198994Z" level=info msg="Completed buildkit initialization"
Mar 6 03:01:55.329576 dockerd[2381]: time="2026-03-06T03:01:55.329513632Z" level=info msg="Daemon has completed initialization"
Mar 6 03:01:55.330157 dockerd[2381]: time="2026-03-06T03:01:55.329733969Z" level=info msg="API listen on /run/docker.sock"
Mar 6 03:01:55.329773 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 6 03:01:56.374122 containerd[1974]: time="2026-03-06T03:01:56.374070058Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\""
Mar 6 03:01:56.954208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2117917825.mount: Deactivated successfully.
Mar 6 03:01:58.674432 containerd[1974]: time="2026-03-06T03:01:58.674374372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 03:01:58.675572 containerd[1974]: time="2026-03-06T03:01:58.675527987Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074497"
Mar 6 03:01:58.677741 containerd[1974]: time="2026-03-06T03:01:58.676560129Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 03:01:58.679322 containerd[1974]: time="2026-03-06T03:01:58.679286237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 03:01:58.680289 containerd[1974]: time="2026-03-06T03:01:58.680254049Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 2.306117406s"
Mar 6 03:01:58.680372 containerd[1974]: time="2026-03-06T03:01:58.680296884Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\""
Mar 6 03:01:58.681058 containerd[1974]: time="2026-03-06T03:01:58.681033925Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\""
Mar 6 03:01:58.889659 systemd[1]: Started sshd@9-172.31.19.55:22-130.12.180.95:19558.service - OpenSSH per-connection server daemon (130.12.180.95:19558).
Mar 6 03:01:58.920205 sshd[2653]: banner exchange: Connection from 130.12.180.95 port 19558: invalid format
Mar 6 03:01:58.921072 systemd[1]: sshd@9-172.31.19.55:22-130.12.180.95:19558.service: Deactivated successfully.
Mar 6 03:02:00.206737 containerd[1974]: time="2026-03-06T03:02:00.206674992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 03:02:00.216626 containerd[1974]: time="2026-03-06T03:02:00.214863604Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165823"
Mar 6 03:02:00.222318 containerd[1974]: time="2026-03-06T03:02:00.222221496Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 03:02:00.233249 containerd[1974]: time="2026-03-06T03:02:00.233168387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 03:02:00.236381 containerd[1974]: time="2026-03-06T03:02:00.235890238Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 1.554465124s"
Mar 6 03:02:00.236381 containerd[1974]: time="2026-03-06T03:02:00.235950250Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\""
Mar 6 03:02:00.237607 containerd[1974]: time="2026-03-06T03:02:00.237258344Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\""
Mar 6 03:02:01.700842 containerd[1974]: time="2026-03-06T03:02:01.700639470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 03:02:01.704604 containerd[1974]: time="2026-03-06T03:02:01.702856676Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729824"
Mar 6 03:02:01.719124 containerd[1974]: time="2026-03-06T03:02:01.718771604Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 03:02:01.725976 containerd[1974]: time="2026-03-06T03:02:01.725890916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 03:02:01.727190 containerd[1974]: time="2026-03-06T03:02:01.726976161Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 1.489674819s"
Mar 6 03:02:01.727190 containerd[1974]: time="2026-03-06T03:02:01.727024664Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\""
Mar 6 03:02:01.730003 containerd[1974]: time="2026-03-06T03:02:01.729543886Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\""
Mar 6 03:02:02.605550 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 6 03:02:02.613287 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 03:02:02.937882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3436247099.mount: Deactivated successfully.
Mar 6 03:02:03.634929 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 03:02:03.647298 (kubelet)[2681]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 6 03:02:03.730311 kubelet[2681]: E0306 03:02:03.730264 2681 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 6 03:02:03.734965 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 6 03:02:03.736012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 6 03:02:03.736775 systemd[1]: kubelet.service: Consumed 209ms CPU time, 109.8M memory peak.
Mar 6 03:02:03.892166 containerd[1974]: time="2026-03-06T03:02:03.892015991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 03:02:03.894194 containerd[1974]: time="2026-03-06T03:02:03.894123810Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861770"
Mar 6 03:02:03.895463 containerd[1974]: time="2026-03-06T03:02:03.895403478Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 03:02:03.897853 containerd[1974]: time="2026-03-06T03:02:03.897795112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 03:02:03.898502 containerd[1974]: time="2026-03-06T03:02:03.898354370Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 2.168765501s"
Mar 6 03:02:03.898502 containerd[1974]: time="2026-03-06T03:02:03.898394666Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\""
Mar 6 03:02:03.899207 containerd[1974]: time="2026-03-06T03:02:03.899182856Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Mar 6 03:02:04.385486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3739459907.mount: Deactivated successfully.
Mar 6 03:02:05.671261 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 6 03:02:06.111214 containerd[1974]: time="2026-03-06T03:02:06.111157471Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 03:02:06.112640 containerd[1974]: time="2026-03-06T03:02:06.112447900Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007"
Mar 6 03:02:06.114192 containerd[1974]: time="2026-03-06T03:02:06.114159577Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 03:02:06.117224 containerd[1974]: time="2026-03-06T03:02:06.117157821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 03:02:06.118550 containerd[1974]: time="2026-03-06T03:02:06.118351618Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.219133395s"
Mar 6 03:02:06.118550 containerd[1974]: time="2026-03-06T03:02:06.118390305Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Mar 6 03:02:06.119131 containerd[1974]: time="2026-03-06T03:02:06.118985659Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 6 03:02:06.595766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount732788589.mount: Deactivated successfully.
Mar 6 03:02:06.601749 containerd[1974]: time="2026-03-06T03:02:06.601700409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 03:02:06.602573 containerd[1974]: time="2026-03-06T03:02:06.602528308Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Mar 6 03:02:06.603750 containerd[1974]: time="2026-03-06T03:02:06.603698770Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 03:02:06.605800 containerd[1974]: time="2026-03-06T03:02:06.605751739Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 03:02:06.607983 containerd[1974]: time="2026-03-06T03:02:06.607364454Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 488.11984ms"
Mar 6 03:02:06.607983 containerd[1974]: time="2026-03-06T03:02:06.607406452Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Mar 6 03:02:06.609120 containerd[1974]: time="2026-03-06T03:02:06.609086503Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Mar 6 03:02:07.103913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2112378230.mount: Deactivated successfully.
Mar 6 03:02:08.079695 containerd[1974]: time="2026-03-06T03:02:08.079643072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 03:02:08.084124 containerd[1974]: time="2026-03-06T03:02:08.084068171Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860674"
Mar 6 03:02:08.088194 containerd[1974]: time="2026-03-06T03:02:08.088136838Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 03:02:08.093035 containerd[1974]: time="2026-03-06T03:02:08.092991401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 03:02:08.094426 containerd[1974]: time="2026-03-06T03:02:08.094239001Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.485006974s"
Mar 6 03:02:08.094426 containerd[1974]: time="2026-03-06T03:02:08.094282381Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\""
Mar 6 03:02:11.692616 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 03:02:11.692897 systemd[1]: kubelet.service: Consumed 209ms CPU time, 109.8M memory peak.
Mar 6 03:02:11.696235 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 03:02:11.731786 systemd[1]: Reload requested from client PID 2837 ('systemctl') (unit session-9.scope)...
Mar 6 03:02:11.731809 systemd[1]: Reloading...
Mar 6 03:02:11.892149 zram_generator::config[2884]: No configuration found.
Mar 6 03:02:12.163434 systemd[1]: Reloading finished in 430 ms.
Mar 6 03:02:12.225209 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 6 03:02:12.225497 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 6 03:02:12.225843 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 03:02:12.225897 systemd[1]: kubelet.service: Consumed 146ms CPU time, 98.3M memory peak.
Mar 6 03:02:12.228356 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 03:02:12.538919 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 03:02:12.549526 (kubelet)[2945]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 6 03:02:12.607419 kubelet[2945]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 6 03:02:12.608002 kubelet[2945]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 6 03:02:12.610128 kubelet[2945]: I0306 03:02:12.609903 2945 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 6 03:02:13.059066 kubelet[2945]: I0306 03:02:13.059021 2945 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 6 03:02:13.059066 kubelet[2945]: I0306 03:02:13.059048 2945 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 6 03:02:13.059898 kubelet[2945]: I0306 03:02:13.059875 2945 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 6 03:02:13.059955 kubelet[2945]: I0306 03:02:13.059897 2945 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 6 03:02:13.060216 kubelet[2945]: I0306 03:02:13.060196 2945 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 6 03:02:13.072074 kubelet[2945]: I0306 03:02:13.071501 2945 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 6 03:02:13.074846 kubelet[2945]: E0306 03:02:13.074808 2945 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.19.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.19.55:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 6 03:02:13.079206 kubelet[2945]: I0306 03:02:13.079178 2945 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 6 03:02:13.082825 kubelet[2945]: I0306 03:02:13.082555 2945 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 6 03:02:13.089597 kubelet[2945]: I0306 03:02:13.089556 2945 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 6 03:02:13.089908 kubelet[2945]: I0306 03:02:13.089712 2945 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-55","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 6 03:02:13.090048 kubelet[2945]: I0306 03:02:13.090040 2945 topology_manager.go:138] "Creating topology manager with none policy"
Mar 6 03:02:13.090089 kubelet[2945]: I0306 03:02:13.090084 2945 container_manager_linux.go:306] "Creating device plugin manager"
Mar 6 03:02:13.090252 kubelet[2945]: I0306 03:02:13.090231 2945 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 6 03:02:13.092459 kubelet[2945]: I0306 03:02:13.092435 2945 state_mem.go:36] "Initialized new in-memory state store"
Mar 6 03:02:13.092670 kubelet[2945]: I0306 03:02:13.092653 2945 kubelet.go:475] "Attempting to sync node with API server"
Mar 6 03:02:13.092824 kubelet[2945]: I0306 03:02:13.092680 2945 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 6 03:02:13.092824 kubelet[2945]: I0306 03:02:13.092713 2945 kubelet.go:387] "Adding apiserver pod source"
Mar 6 03:02:13.092824 kubelet[2945]: I0306 03:02:13.092727 2945 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 6 03:02:13.095120 kubelet[2945]: E0306 03:02:13.094663 2945 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.19.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-55&limit=500&resourceVersion=0\": dial tcp 172.31.19.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 6 03:02:13.096500 kubelet[2945]: I0306 03:02:13.096476 2945 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 6 03:02:13.097275 kubelet[2945]: I0306 03:02:13.097257 2945 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 6 03:02:13.097333 kubelet[2945]: I0306 03:02:13.097299 2945 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 6 03:02:13.097374 kubelet[2945]: W0306 03:02:13.097356 2945 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 6 03:02:13.102479 kubelet[2945]: I0306 03:02:13.102458 2945 server.go:1262] "Started kubelet"
Mar 6 03:02:13.103234 kubelet[2945]: E0306 03:02:13.103208 2945 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.19.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.19.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 6 03:02:13.103441 kubelet[2945]: I0306 03:02:13.103421 2945 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 6 03:02:13.104520 kubelet[2945]: I0306 03:02:13.104507 2945 server.go:310] "Adding debug handlers to kubelet server"
Mar 6 03:02:13.109611 kubelet[2945]: I0306 03:02:13.109559 2945 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 6 03:02:13.109736 kubelet[2945]: I0306 03:02:13.109642 2945 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 6 03:02:13.109894 kubelet[2945]: I0306 03:02:13.109874 2945 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 6 03:02:13.110377 kubelet[2945]: I0306 03:02:13.110358 2945 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 6 03:02:13.112227 kubelet[2945]: E0306 03:02:13.110131 2945 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.19.55:6443/api/v1/namespaces/default/events\": dial tcp 172.31.19.55:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-19-55.189a2167d169bd27 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-55,UID:ip-172-31-19-55,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-55,},FirstTimestamp:2026-03-06 03:02:13.102427431 +0000 UTC m=+0.547438218,LastTimestamp:2026-03-06 03:02:13.102427431 +0000 UTC m=+0.547438218,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-55,}"
Mar 6 03:02:13.114716 kubelet[2945]: E0306 03:02:13.114694 2945 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 6 03:02:13.115159 kubelet[2945]: I0306 03:02:13.115142 2945 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 6 03:02:13.119152 kubelet[2945]: E0306 03:02:13.119133 2945 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-19-55\" not found"
Mar 6 03:02:13.119476 kubelet[2945]: I0306 03:02:13.119461 2945 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 6 03:02:13.119925 kubelet[2945]: I0306 03:02:13.119906 2945 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 6 03:02:13.120053 kubelet[2945]: I0306 03:02:13.120043 2945 reconciler.go:29] "Reconciler: start to sync state"
Mar 6 03:02:13.120648 kubelet[2945]: E0306 03:02:13.120616 2945 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.19.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.19.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 6 03:02:13.121394 kubelet[2945]: I0306 03:02:13.121376 2945 factory.go:223] Registration of the systemd container factory successfully
Mar 6 03:02:13.121568 kubelet[2945]: I0306 03:02:13.121550 2945 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 6 03:02:13.123738 kubelet[2945]: E0306 03:02:13.123701 2945 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-55?timeout=10s\": dial tcp 172.31.19.55:6443: connect: connection refused" interval="200ms"
Mar 6 03:02:13.124657 kubelet[2945]: I0306 03:02:13.124629 2945 factory.go:223] Registration of the containerd container factory successfully
Mar 6 03:02:13.141123 kubelet[2945]: I0306 03:02:13.137845 2945 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 6 03:02:13.141123 kubelet[2945]: I0306 03:02:13.140834 2945 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 6 03:02:13.141123 kubelet[2945]: I0306 03:02:13.140857 2945 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 6 03:02:13.141123 kubelet[2945]: I0306 03:02:13.140896 2945 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 6 03:02:13.141123 kubelet[2945]: E0306 03:02:13.140947 2945 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 6 03:02:13.153062 kubelet[2945]: E0306 03:02:13.153023 2945 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.19.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.19.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 6 03:02:13.170997 kubelet[2945]: I0306 03:02:13.170962 2945 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 6 03:02:13.170997 kubelet[2945]: I0306 03:02:13.170996 2945 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 6 03:02:13.171185 kubelet[2945]: I0306 03:02:13.171014 2945 state_mem.go:36] "Initialized new in-memory state store"
Mar 6 03:02:13.172885 kubelet[2945]: I0306 03:02:13.172864 2945 policy_none.go:49] "None policy: Start"
Mar 6 03:02:13.172885 kubelet[2945]: I0306 03:02:13.172885 2945 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 6 03:02:13.173066 kubelet[2945]: I0306 03:02:13.172899 2945 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 6 03:02:13.174594 kubelet[2945]: I0306 03:02:13.174565 2945 policy_none.go:47] "Start"
Mar 6 03:02:13.179430 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 6 03:02:13.192991 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 6 03:02:13.197115 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 6 03:02:13.207367 kubelet[2945]: E0306 03:02:13.207317 2945 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 6 03:02:13.209133 kubelet[2945]: I0306 03:02:13.209024 2945 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 6 03:02:13.209237 kubelet[2945]: I0306 03:02:13.209058 2945 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 6 03:02:13.209542 kubelet[2945]: I0306 03:02:13.209509 2945 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 6 03:02:13.211565 kubelet[2945]: E0306 03:02:13.211535 2945 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 6 03:02:13.211669 kubelet[2945]: E0306 03:02:13.211614 2945 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-19-55\" not found" Mar 6 03:02:13.255324 systemd[1]: Created slice kubepods-burstable-pod0cfa6a0d0c6fa188bb3ecea3e8389c17.slice - libcontainer container kubepods-burstable-pod0cfa6a0d0c6fa188bb3ecea3e8389c17.slice. Mar 6 03:02:13.265308 kubelet[2945]: E0306 03:02:13.265261 2945 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-55\" not found" node="ip-172-31-19-55" Mar 6 03:02:13.269486 systemd[1]: Created slice kubepods-burstable-pod2e239d4be55a5eea1623e024349f60fc.slice - libcontainer container kubepods-burstable-pod2e239d4be55a5eea1623e024349f60fc.slice. 
Mar 6 03:02:13.278921 kubelet[2945]: E0306 03:02:13.278889 2945 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-55\" not found" node="ip-172-31-19-55" Mar 6 03:02:13.281402 systemd[1]: Created slice kubepods-burstable-poda2c6cafe7f13dee6667cea467033dec0.slice - libcontainer container kubepods-burstable-poda2c6cafe7f13dee6667cea467033dec0.slice. Mar 6 03:02:13.283582 kubelet[2945]: E0306 03:02:13.283554 2945 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-55\" not found" node="ip-172-31-19-55" Mar 6 03:02:13.312150 kubelet[2945]: I0306 03:02:13.311303 2945 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-55" Mar 6 03:02:13.312150 kubelet[2945]: E0306 03:02:13.311727 2945 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.55:6443/api/v1/nodes\": dial tcp 172.31.19.55:6443: connect: connection refused" node="ip-172-31-19-55" Mar 6 03:02:13.324558 kubelet[2945]: E0306 03:02:13.324515 2945 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-55?timeout=10s\": dial tcp 172.31.19.55:6443: connect: connection refused" interval="400ms" Mar 6 03:02:13.421995 kubelet[2945]: I0306 03:02:13.421952 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0cfa6a0d0c6fa188bb3ecea3e8389c17-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-55\" (UID: \"0cfa6a0d0c6fa188bb3ecea3e8389c17\") " pod="kube-system/kube-apiserver-ip-172-31-19-55" Mar 6 03:02:13.421995 kubelet[2945]: I0306 03:02:13.421997 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/2e239d4be55a5eea1623e024349f60fc-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-55\" (UID: \"2e239d4be55a5eea1623e024349f60fc\") " pod="kube-system/kube-controller-manager-ip-172-31-19-55" Mar 6 03:02:13.422250 kubelet[2945]: I0306 03:02:13.422025 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2e239d4be55a5eea1623e024349f60fc-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-55\" (UID: \"2e239d4be55a5eea1623e024349f60fc\") " pod="kube-system/kube-controller-manager-ip-172-31-19-55" Mar 6 03:02:13.422250 kubelet[2945]: I0306 03:02:13.422049 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2e239d4be55a5eea1623e024349f60fc-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-55\" (UID: \"2e239d4be55a5eea1623e024349f60fc\") " pod="kube-system/kube-controller-manager-ip-172-31-19-55" Mar 6 03:02:13.422250 kubelet[2945]: I0306 03:02:13.422074 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2e239d4be55a5eea1623e024349f60fc-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-55\" (UID: \"2e239d4be55a5eea1623e024349f60fc\") " pod="kube-system/kube-controller-manager-ip-172-31-19-55" Mar 6 03:02:13.422250 kubelet[2945]: I0306 03:02:13.422095 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0cfa6a0d0c6fa188bb3ecea3e8389c17-ca-certs\") pod \"kube-apiserver-ip-172-31-19-55\" (UID: \"0cfa6a0d0c6fa188bb3ecea3e8389c17\") " pod="kube-system/kube-apiserver-ip-172-31-19-55" Mar 6 03:02:13.422250 kubelet[2945]: I0306 03:02:13.422153 2945 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0cfa6a0d0c6fa188bb3ecea3e8389c17-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-55\" (UID: \"0cfa6a0d0c6fa188bb3ecea3e8389c17\") " pod="kube-system/kube-apiserver-ip-172-31-19-55" Mar 6 03:02:13.422404 kubelet[2945]: I0306 03:02:13.422179 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2e239d4be55a5eea1623e024349f60fc-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-55\" (UID: \"2e239d4be55a5eea1623e024349f60fc\") " pod="kube-system/kube-controller-manager-ip-172-31-19-55" Mar 6 03:02:13.422404 kubelet[2945]: I0306 03:02:13.422203 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a2c6cafe7f13dee6667cea467033dec0-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-55\" (UID: \"a2c6cafe7f13dee6667cea467033dec0\") " pod="kube-system/kube-scheduler-ip-172-31-19-55" Mar 6 03:02:13.514381 kubelet[2945]: I0306 03:02:13.514336 2945 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-55" Mar 6 03:02:13.514713 kubelet[2945]: E0306 03:02:13.514681 2945 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.55:6443/api/v1/nodes\": dial tcp 172.31.19.55:6443: connect: connection refused" node="ip-172-31-19-55" Mar 6 03:02:13.569706 containerd[1974]: time="2026-03-06T03:02:13.569580805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-55,Uid:0cfa6a0d0c6fa188bb3ecea3e8389c17,Namespace:kube-system,Attempt:0,}" Mar 6 03:02:13.582316 containerd[1974]: time="2026-03-06T03:02:13.582273508Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-55,Uid:2e239d4be55a5eea1623e024349f60fc,Namespace:kube-system,Attempt:0,}" Mar 6 03:02:13.588246 containerd[1974]: time="2026-03-06T03:02:13.587654592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-55,Uid:a2c6cafe7f13dee6667cea467033dec0,Namespace:kube-system,Attempt:0,}" Mar 6 03:02:13.725202 kubelet[2945]: E0306 03:02:13.725159 2945 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-55?timeout=10s\": dial tcp 172.31.19.55:6443: connect: connection refused" interval="800ms" Mar 6 03:02:13.917026 kubelet[2945]: I0306 03:02:13.916994 2945 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-55" Mar 6 03:02:13.917371 kubelet[2945]: E0306 03:02:13.917338 2945 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.55:6443/api/v1/nodes\": dial tcp 172.31.19.55:6443: connect: connection refused" node="ip-172-31-19-55" Mar 6 03:02:14.011299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2884871596.mount: Deactivated successfully. 
Mar 6 03:02:14.019906 containerd[1974]: time="2026-03-06T03:02:14.019853021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 03:02:14.023717 containerd[1974]: time="2026-03-06T03:02:14.023662691Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 6 03:02:14.024485 containerd[1974]: time="2026-03-06T03:02:14.024441960Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 03:02:14.025343 containerd[1974]: time="2026-03-06T03:02:14.025300052Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 03:02:14.028191 containerd[1974]: time="2026-03-06T03:02:14.027084918Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 03:02:14.028191 containerd[1974]: time="2026-03-06T03:02:14.028174259Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 6 03:02:14.029209 containerd[1974]: time="2026-03-06T03:02:14.029159614Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 6 03:02:14.031123 containerd[1974]: time="2026-03-06T03:02:14.030441816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 03:02:14.031601 
containerd[1974]: time="2026-03-06T03:02:14.031555292Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 455.889714ms" Mar 6 03:02:14.034554 containerd[1974]: time="2026-03-06T03:02:14.034509321Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 444.485865ms" Mar 6 03:02:14.066079 containerd[1974]: time="2026-03-06T03:02:14.065629796Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 475.598608ms" Mar 6 03:02:14.070923 containerd[1974]: time="2026-03-06T03:02:14.070880550Z" level=info msg="connecting to shim 994c050aa71b9144c51edd7a98bc3fbec79b4b8c59314e37fda1f8e177f273da" address="unix:///run/containerd/s/58378d68ab07f5b621e250c28790c4122e0f050cd4f5f5ff258caec19af80f02" namespace=k8s.io protocol=ttrpc version=3 Mar 6 03:02:14.072970 containerd[1974]: time="2026-03-06T03:02:14.072932428Z" level=info msg="connecting to shim a51d00c55b0fede88b1d60b2da4f4b374af4a70e5002124de50893b50d07d137" address="unix:///run/containerd/s/63a20a023eded515c715469cefc1b3b8057b1feb8f9dc8c6acfb95a083243cf7" namespace=k8s.io protocol=ttrpc version=3 Mar 6 03:02:14.101299 containerd[1974]: time="2026-03-06T03:02:14.101251805Z" level=info msg="connecting to shim 
1ea50e87c49ae9d0d112c8e0037e13c4ca84048e880fe35e87464f208049305b" address="unix:///run/containerd/s/bbb56e0d9a0f2ba25d419ad16821b5cb3eb4141507f3583ce6cb7f8425ef48e3" namespace=k8s.io protocol=ttrpc version=3 Mar 6 03:02:14.120296 kubelet[2945]: E0306 03:02:14.120246 2945 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.19.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.19.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 6 03:02:14.124468 systemd[1]: Started cri-containerd-994c050aa71b9144c51edd7a98bc3fbec79b4b8c59314e37fda1f8e177f273da.scope - libcontainer container 994c050aa71b9144c51edd7a98bc3fbec79b4b8c59314e37fda1f8e177f273da. Mar 6 03:02:14.148388 systemd[1]: Started cri-containerd-a51d00c55b0fede88b1d60b2da4f4b374af4a70e5002124de50893b50d07d137.scope - libcontainer container a51d00c55b0fede88b1d60b2da4f4b374af4a70e5002124de50893b50d07d137. Mar 6 03:02:14.154067 systemd[1]: Started cri-containerd-1ea50e87c49ae9d0d112c8e0037e13c4ca84048e880fe35e87464f208049305b.scope - libcontainer container 1ea50e87c49ae9d0d112c8e0037e13c4ca84048e880fe35e87464f208049305b. 
Mar 6 03:02:14.297791 containerd[1974]: time="2026-03-06T03:02:14.297559705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-55,Uid:0cfa6a0d0c6fa188bb3ecea3e8389c17,Namespace:kube-system,Attempt:0,} returns sandbox id \"a51d00c55b0fede88b1d60b2da4f4b374af4a70e5002124de50893b50d07d137\"" Mar 6 03:02:14.298495 containerd[1974]: time="2026-03-06T03:02:14.298461793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-55,Uid:2e239d4be55a5eea1623e024349f60fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"994c050aa71b9144c51edd7a98bc3fbec79b4b8c59314e37fda1f8e177f273da\"" Mar 6 03:02:14.299908 containerd[1974]: time="2026-03-06T03:02:14.299766304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-55,Uid:a2c6cafe7f13dee6667cea467033dec0,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ea50e87c49ae9d0d112c8e0037e13c4ca84048e880fe35e87464f208049305b\"" Mar 6 03:02:14.313331 containerd[1974]: time="2026-03-06T03:02:14.313289545Z" level=info msg="CreateContainer within sandbox \"1ea50e87c49ae9d0d112c8e0037e13c4ca84048e880fe35e87464f208049305b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 6 03:02:14.314076 containerd[1974]: time="2026-03-06T03:02:14.314034432Z" level=info msg="CreateContainer within sandbox \"a51d00c55b0fede88b1d60b2da4f4b374af4a70e5002124de50893b50d07d137\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 6 03:02:14.315924 containerd[1974]: time="2026-03-06T03:02:14.315887678Z" level=info msg="CreateContainer within sandbox \"994c050aa71b9144c51edd7a98bc3fbec79b4b8c59314e37fda1f8e177f273da\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 6 03:02:14.323486 containerd[1974]: time="2026-03-06T03:02:14.323445725Z" level=info msg="Container ddecd7b897994eb1ce693fc12777ff8231396763ae8b5f1c3998f2a70441cdc8: CDI devices from CRI Config.CDIDevices: []" Mar 
6 03:02:14.327083 containerd[1974]: time="2026-03-06T03:02:14.326989086Z" level=info msg="Container abb2ba6881f7859c0824d16ee06c1b6ca8f474d669e812f15fe5203538c6e051: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:02:14.330919 containerd[1974]: time="2026-03-06T03:02:14.330190977Z" level=info msg="Container e57686e9eb9ee93ec79ad073b5a8a405917fc9011dbb41d698643bd5289872f5: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:02:14.335583 containerd[1974]: time="2026-03-06T03:02:14.335553547Z" level=info msg="CreateContainer within sandbox \"a51d00c55b0fede88b1d60b2da4f4b374af4a70e5002124de50893b50d07d137\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"abb2ba6881f7859c0824d16ee06c1b6ca8f474d669e812f15fe5203538c6e051\"" Mar 6 03:02:14.336558 containerd[1974]: time="2026-03-06T03:02:14.336524432Z" level=info msg="StartContainer for \"abb2ba6881f7859c0824d16ee06c1b6ca8f474d669e812f15fe5203538c6e051\"" Mar 6 03:02:14.339362 containerd[1974]: time="2026-03-06T03:02:14.339324225Z" level=info msg="connecting to shim abb2ba6881f7859c0824d16ee06c1b6ca8f474d669e812f15fe5203538c6e051" address="unix:///run/containerd/s/63a20a023eded515c715469cefc1b3b8057b1feb8f9dc8c6acfb95a083243cf7" protocol=ttrpc version=3 Mar 6 03:02:14.339668 containerd[1974]: time="2026-03-06T03:02:14.339601575Z" level=info msg="CreateContainer within sandbox \"994c050aa71b9144c51edd7a98bc3fbec79b4b8c59314e37fda1f8e177f273da\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e57686e9eb9ee93ec79ad073b5a8a405917fc9011dbb41d698643bd5289872f5\"" Mar 6 03:02:14.341683 containerd[1974]: time="2026-03-06T03:02:14.341643946Z" level=info msg="StartContainer for \"e57686e9eb9ee93ec79ad073b5a8a405917fc9011dbb41d698643bd5289872f5\"" Mar 6 03:02:14.342407 containerd[1974]: time="2026-03-06T03:02:14.342318966Z" level=info msg="CreateContainer within sandbox \"1ea50e87c49ae9d0d112c8e0037e13c4ca84048e880fe35e87464f208049305b\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ddecd7b897994eb1ce693fc12777ff8231396763ae8b5f1c3998f2a70441cdc8\"" Mar 6 03:02:14.343682 containerd[1974]: time="2026-03-06T03:02:14.342780677Z" level=info msg="StartContainer for \"ddecd7b897994eb1ce693fc12777ff8231396763ae8b5f1c3998f2a70441cdc8\"" Mar 6 03:02:14.345712 containerd[1974]: time="2026-03-06T03:02:14.345643571Z" level=info msg="connecting to shim ddecd7b897994eb1ce693fc12777ff8231396763ae8b5f1c3998f2a70441cdc8" address="unix:///run/containerd/s/bbb56e0d9a0f2ba25d419ad16821b5cb3eb4141507f3583ce6cb7f8425ef48e3" protocol=ttrpc version=3 Mar 6 03:02:14.346498 containerd[1974]: time="2026-03-06T03:02:14.346472374Z" level=info msg="connecting to shim e57686e9eb9ee93ec79ad073b5a8a405917fc9011dbb41d698643bd5289872f5" address="unix:///run/containerd/s/58378d68ab07f5b621e250c28790c4122e0f050cd4f5f5ff258caec19af80f02" protocol=ttrpc version=3 Mar 6 03:02:14.375594 systemd[1]: Started cri-containerd-abb2ba6881f7859c0824d16ee06c1b6ca8f474d669e812f15fe5203538c6e051.scope - libcontainer container abb2ba6881f7859c0824d16ee06c1b6ca8f474d669e812f15fe5203538c6e051. Mar 6 03:02:14.388954 systemd[1]: Started cri-containerd-e57686e9eb9ee93ec79ad073b5a8a405917fc9011dbb41d698643bd5289872f5.scope - libcontainer container e57686e9eb9ee93ec79ad073b5a8a405917fc9011dbb41d698643bd5289872f5. Mar 6 03:02:14.400492 systemd[1]: Started cri-containerd-ddecd7b897994eb1ce693fc12777ff8231396763ae8b5f1c3998f2a70441cdc8.scope - libcontainer container ddecd7b897994eb1ce693fc12777ff8231396763ae8b5f1c3998f2a70441cdc8. 
Mar 6 03:02:14.473365 kubelet[2945]: E0306 03:02:14.473325 2945 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.19.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.19.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 6 03:02:14.488710 containerd[1974]: time="2026-03-06T03:02:14.488614401Z" level=info msg="StartContainer for \"abb2ba6881f7859c0824d16ee06c1b6ca8f474d669e812f15fe5203538c6e051\" returns successfully" Mar 6 03:02:14.533062 containerd[1974]: time="2026-03-06T03:02:14.533002429Z" level=info msg="StartContainer for \"e57686e9eb9ee93ec79ad073b5a8a405917fc9011dbb41d698643bd5289872f5\" returns successfully" Mar 6 03:02:14.539126 kubelet[2945]: E0306 03:02:14.538998 2945 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-55?timeout=10s\": dial tcp 172.31.19.55:6443: connect: connection refused" interval="1.6s" Mar 6 03:02:14.549419 containerd[1974]: time="2026-03-06T03:02:14.549257630Z" level=info msg="StartContainer for \"ddecd7b897994eb1ce693fc12777ff8231396763ae8b5f1c3998f2a70441cdc8\" returns successfully" Mar 6 03:02:14.719609 kubelet[2945]: I0306 03:02:14.719575 2945 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-55" Mar 6 03:02:15.179683 kubelet[2945]: E0306 03:02:15.179647 2945 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-55\" not found" node="ip-172-31-19-55" Mar 6 03:02:15.184514 kubelet[2945]: E0306 03:02:15.184482 2945 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-55\" not found" node="ip-172-31-19-55" Mar 6 03:02:15.188722 kubelet[2945]: E0306 
03:02:15.188688 2945 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-55\" not found" node="ip-172-31-19-55" Mar 6 03:02:16.192277 kubelet[2945]: E0306 03:02:16.192247 2945 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-55\" not found" node="ip-172-31-19-55" Mar 6 03:02:16.193304 kubelet[2945]: E0306 03:02:16.193282 2945 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-55\" not found" node="ip-172-31-19-55" Mar 6 03:02:17.573619 kubelet[2945]: E0306 03:02:17.573531 2945 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-19-55\" not found" node="ip-172-31-19-55" Mar 6 03:02:17.759670 kubelet[2945]: I0306 03:02:17.759630 2945 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-19-55" Mar 6 03:02:17.759933 kubelet[2945]: E0306 03:02:17.759820 2945 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ip-172-31-19-55\": node \"ip-172-31-19-55\" not found" Mar 6 03:02:17.820892 kubelet[2945]: I0306 03:02:17.820857 2945 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-19-55" Mar 6 03:02:17.829067 kubelet[2945]: E0306 03:02:17.828509 2945 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-19-55\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-19-55" Mar 6 03:02:17.829067 kubelet[2945]: I0306 03:02:17.828548 2945 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-19-55" Mar 6 03:02:17.831025 kubelet[2945]: E0306 03:02:17.830993 2945 kubelet.go:3222] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-ip-172-31-19-55\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-19-55" Mar 6 03:02:17.831162 kubelet[2945]: I0306 03:02:17.831038 2945 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-19-55" Mar 6 03:02:17.834295 kubelet[2945]: E0306 03:02:17.833310 2945 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-19-55\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-19-55" Mar 6 03:02:18.105031 kubelet[2945]: I0306 03:02:18.104976 2945 apiserver.go:52] "Watching apiserver" Mar 6 03:02:18.120905 kubelet[2945]: I0306 03:02:18.120858 2945 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 6 03:02:19.147930 update_engine[1960]: I20260306 03:02:19.147176 1960 update_attempter.cc:509] Updating boot flags... Mar 6 03:02:20.029848 systemd[1]: Reload requested from client PID 3412 ('systemctl') (unit session-9.scope)... Mar 6 03:02:20.029868 systemd[1]: Reloading... Mar 6 03:02:20.160178 zram_generator::config[3456]: No configuration found. Mar 6 03:02:20.430029 systemd[1]: Reloading finished in 399 ms. Mar 6 03:02:20.462159 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 03:02:20.483805 systemd[1]: kubelet.service: Deactivated successfully. Mar 6 03:02:20.484121 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 03:02:20.484193 systemd[1]: kubelet.service: Consumed 1.042s CPU time, 119.3M memory peak. Mar 6 03:02:20.488461 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 03:02:20.712709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 6 03:02:20.726672 (kubelet)[3516]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 6 03:02:20.799462 kubelet[3516]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 6 03:02:20.800158 kubelet[3516]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 6 03:02:20.800158 kubelet[3516]: I0306 03:02:20.799874 3516 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 6 03:02:20.808366 kubelet[3516]: I0306 03:02:20.808328 3516 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 6 03:02:20.808366 kubelet[3516]: I0306 03:02:20.808354 3516 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 6 03:02:20.808552 kubelet[3516]: I0306 03:02:20.808384 3516 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 6 03:02:20.808552 kubelet[3516]: I0306 03:02:20.808396 3516 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 6 03:02:20.808803 kubelet[3516]: I0306 03:02:20.808777 3516 server.go:956] "Client rotation is on, will bootstrap in background" Mar 6 03:02:20.812120 kubelet[3516]: I0306 03:02:20.811788 3516 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 6 03:02:20.815928 kubelet[3516]: I0306 03:02:20.815901 3516 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 6 03:02:20.820379 kubelet[3516]: I0306 03:02:20.820353 3516 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 6 03:02:20.823584 kubelet[3516]: I0306 03:02:20.823548 3516 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 6 03:02:20.823839 kubelet[3516]: I0306 03:02:20.823804 3516 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 6 03:02:20.824012 kubelet[3516]: I0306 03:02:20.823836 3516 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ip-172-31-19-55","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 6 03:02:20.824171 kubelet[3516]: I0306 03:02:20.824012 3516 topology_manager.go:138] "Creating topology manager with none policy"
Mar 6 03:02:20.824171 kubelet[3516]: I0306 03:02:20.824025 3516 container_manager_linux.go:306] "Creating device plugin manager"
Mar 6 03:02:20.824171 kubelet[3516]: I0306 03:02:20.824054 3516 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 6 03:02:20.824318 kubelet[3516]: I0306 03:02:20.824302 3516 state_mem.go:36] "Initialized new in-memory state store"
Mar 6 03:02:20.824480 kubelet[3516]: I0306 03:02:20.824468 3516 kubelet.go:475] "Attempting to sync node with API server"
Mar 6 03:02:20.825156 kubelet[3516]: I0306 03:02:20.824485 3516 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 6 03:02:20.825156 kubelet[3516]: I0306 03:02:20.824512 3516 kubelet.go:387] "Adding apiserver pod source"
Mar 6 03:02:20.825156 kubelet[3516]: I0306 03:02:20.824530 3516 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 6 03:02:20.828273 kubelet[3516]: I0306 03:02:20.828252 3516 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 6 03:02:20.828937 kubelet[3516]: I0306 03:02:20.828914 3516 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 6 03:02:20.829015 kubelet[3516]: I0306 03:02:20.828961 3516 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 6 03:02:20.838712 kubelet[3516]: I0306 03:02:20.838654 3516 server.go:1262] "Started kubelet"
Mar 6 03:02:20.841131 kubelet[3516]: I0306 03:02:20.841093 3516 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 6 03:02:20.852854 kubelet[3516]: I0306 03:02:20.852349 3516 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 6 03:02:20.858428 kubelet[3516]: I0306 03:02:20.858398 3516 server.go:310] "Adding debug handlers to kubelet server"
Mar 6 03:02:20.861344 kubelet[3516]: I0306 03:02:20.861290 3516 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 6 03:02:20.861525 kubelet[3516]: I0306 03:02:20.861511 3516 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 6 03:02:20.861878 kubelet[3516]: I0306 03:02:20.861858 3516 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 6 03:02:20.865386 kubelet[3516]: I0306 03:02:20.865362 3516 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 6 03:02:20.869128 kubelet[3516]: I0306 03:02:20.867010 3516 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 6 03:02:20.869128 kubelet[3516]: I0306 03:02:20.867222 3516 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 6 03:02:20.869128 kubelet[3516]: I0306 03:02:20.867339 3516 reconciler.go:29] "Reconciler: start to sync state"
Mar 6 03:02:20.873692 kubelet[3516]: I0306 03:02:20.872263 3516 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 6 03:02:20.873812 kubelet[3516]: I0306 03:02:20.873703 3516 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 6 03:02:20.873812 kubelet[3516]: I0306 03:02:20.873729 3516 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 6 03:02:20.873812 kubelet[3516]: I0306 03:02:20.873756 3516 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 6 03:02:20.873812 kubelet[3516]: E0306 03:02:20.873802 3516 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 6 03:02:20.884173 kubelet[3516]: I0306 03:02:20.883987 3516 factory.go:223] Registration of the systemd container factory successfully
Mar 6 03:02:20.884173 kubelet[3516]: I0306 03:02:20.884165 3516 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 6 03:02:20.893718 kubelet[3516]: I0306 03:02:20.893551 3516 factory.go:223] Registration of the containerd container factory successfully
Mar 6 03:02:20.936891 kubelet[3516]: I0306 03:02:20.936866 3516 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 6 03:02:20.936891 kubelet[3516]: I0306 03:02:20.936885 3516 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 6 03:02:20.937089 kubelet[3516]: I0306 03:02:20.936908 3516 state_mem.go:36] "Initialized new in-memory state store"
Mar 6 03:02:20.937089 kubelet[3516]: I0306 03:02:20.937061 3516 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 6 03:02:20.937089 kubelet[3516]: I0306 03:02:20.937073 3516 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 6 03:02:20.937238 kubelet[3516]: I0306 03:02:20.937094 3516 policy_none.go:49] "None policy: Start"
Mar 6 03:02:20.937238 kubelet[3516]: I0306 03:02:20.937139 3516 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 6 03:02:20.937238 kubelet[3516]: I0306 03:02:20.937153 3516 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 6 03:02:20.937362 kubelet[3516]: I0306 03:02:20.937283 3516 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Mar 6 03:02:20.937362 kubelet[3516]: I0306 03:02:20.937294 3516 policy_none.go:47] "Start"
Mar 6 03:02:20.947851 kubelet[3516]: E0306 03:02:20.946237 3516 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 6 03:02:20.947851 kubelet[3516]: I0306 03:02:20.946467 3516 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 6 03:02:20.947851 kubelet[3516]: I0306 03:02:20.946480 3516 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 6 03:02:20.954182 kubelet[3516]: I0306 03:02:20.953706 3516 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 6 03:02:20.959268 kubelet[3516]: E0306 03:02:20.959243 3516 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 6 03:02:20.974865 kubelet[3516]: I0306 03:02:20.974747 3516 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-19-55"
Mar 6 03:02:20.975551 kubelet[3516]: I0306 03:02:20.975534 3516 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-19-55"
Mar 6 03:02:20.979872 kubelet[3516]: I0306 03:02:20.979740 3516 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-19-55"
Mar 6 03:02:21.060951 kubelet[3516]: I0306 03:02:21.060925 3516 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-55"
Mar 6 03:02:21.069156 kubelet[3516]: I0306 03:02:21.069120 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0cfa6a0d0c6fa188bb3ecea3e8389c17-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-55\" (UID: \"0cfa6a0d0c6fa188bb3ecea3e8389c17\") " pod="kube-system/kube-apiserver-ip-172-31-19-55"
Mar 6 03:02:21.069588 kubelet[3516]: I0306 03:02:21.069335 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a2c6cafe7f13dee6667cea467033dec0-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-55\" (UID: \"a2c6cafe7f13dee6667cea467033dec0\") " pod="kube-system/kube-scheduler-ip-172-31-19-55"
Mar 6 03:02:21.069588 kubelet[3516]: I0306 03:02:21.069367 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0cfa6a0d0c6fa188bb3ecea3e8389c17-ca-certs\") pod \"kube-apiserver-ip-172-31-19-55\" (UID: \"0cfa6a0d0c6fa188bb3ecea3e8389c17\") " pod="kube-system/kube-apiserver-ip-172-31-19-55"
Mar 6 03:02:21.069588 kubelet[3516]: I0306 03:02:21.069395 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2e239d4be55a5eea1623e024349f60fc-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-55\" (UID: \"2e239d4be55a5eea1623e024349f60fc\") " pod="kube-system/kube-controller-manager-ip-172-31-19-55"
Mar 6 03:02:21.069588 kubelet[3516]: I0306 03:02:21.069421 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2e239d4be55a5eea1623e024349f60fc-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-55\" (UID: \"2e239d4be55a5eea1623e024349f60fc\") " pod="kube-system/kube-controller-manager-ip-172-31-19-55"
Mar 6 03:02:21.069588 kubelet[3516]: I0306 03:02:21.069443 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2e239d4be55a5eea1623e024349f60fc-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-55\" (UID: \"2e239d4be55a5eea1623e024349f60fc\") " pod="kube-system/kube-controller-manager-ip-172-31-19-55"
Mar 6 03:02:21.069844 kubelet[3516]: I0306 03:02:21.069465 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2e239d4be55a5eea1623e024349f60fc-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-55\" (UID: \"2e239d4be55a5eea1623e024349f60fc\") " pod="kube-system/kube-controller-manager-ip-172-31-19-55"
Mar 6 03:02:21.069844 kubelet[3516]: I0306 03:02:21.069487 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2e239d4be55a5eea1623e024349f60fc-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-55\" (UID: \"2e239d4be55a5eea1623e024349f60fc\") " pod="kube-system/kube-controller-manager-ip-172-31-19-55"
Mar 6 03:02:21.069844 kubelet[3516]: I0306 03:02:21.069510 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0cfa6a0d0c6fa188bb3ecea3e8389c17-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-55\" (UID: \"0cfa6a0d0c6fa188bb3ecea3e8389c17\") " pod="kube-system/kube-apiserver-ip-172-31-19-55"
Mar 6 03:02:21.073116 kubelet[3516]: I0306 03:02:21.072944 3516 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-19-55"
Mar 6 03:02:21.073116 kubelet[3516]: I0306 03:02:21.073076 3516 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-19-55"
Mar 6 03:02:21.825406 kubelet[3516]: I0306 03:02:21.825335 3516 apiserver.go:52] "Watching apiserver"
Mar 6 03:02:21.868448 kubelet[3516]: I0306 03:02:21.867588 3516 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 6 03:02:21.881786 kubelet[3516]: I0306 03:02:21.881718 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-19-55" podStartSLOduration=1.880396555 podStartE2EDuration="1.880396555s" podCreationTimestamp="2026-03-06 03:02:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 03:02:21.879730297 +0000 UTC m=+1.147796795" watchObservedRunningTime="2026-03-06 03:02:21.880396555 +0000 UTC m=+1.148463030"
Mar 6 03:02:21.907030 kubelet[3516]: I0306 03:02:21.906757 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-19-55" podStartSLOduration=1.906737927 podStartE2EDuration="1.906737927s" podCreationTimestamp="2026-03-06 03:02:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 03:02:21.892191177 +0000 UTC m=+1.160257670" watchObservedRunningTime="2026-03-06 03:02:21.906737927 +0000 UTC m=+1.174804424"
Mar 6 03:02:21.920976 kubelet[3516]: I0306 03:02:21.920948 3516 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-19-55"
Mar 6 03:02:21.924451 kubelet[3516]: I0306 03:02:21.924386 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-19-55" podStartSLOduration=1.924369174 podStartE2EDuration="1.924369174s" podCreationTimestamp="2026-03-06 03:02:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 03:02:21.907512778 +0000 UTC m=+1.175579275" watchObservedRunningTime="2026-03-06 03:02:21.924369174 +0000 UTC m=+1.192435673"
Mar 6 03:02:21.932507 kubelet[3516]: E0306 03:02:21.932472 3516 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-19-55\" already exists" pod="kube-system/kube-scheduler-ip-172-31-19-55"
Mar 6 03:02:26.565876 kubelet[3516]: I0306 03:02:26.565826 3516 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 6 03:02:26.566583 containerd[1974]: time="2026-03-06T03:02:26.566543334Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 6 03:02:26.566986 kubelet[3516]: I0306 03:02:26.566938 3516 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 6 03:02:27.570546 systemd[1]: Created slice kubepods-besteffort-pod5534cd24_fe5f_4358_aa9f_f0d725aee83a.slice - libcontainer container kubepods-besteffort-pod5534cd24_fe5f_4358_aa9f_f0d725aee83a.slice.
Mar 6 03:02:27.615347 kubelet[3516]: I0306 03:02:27.615237 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5534cd24-fe5f-4358-aa9f-f0d725aee83a-kube-proxy\") pod \"kube-proxy-9b949\" (UID: \"5534cd24-fe5f-4358-aa9f-f0d725aee83a\") " pod="kube-system/kube-proxy-9b949"
Mar 6 03:02:27.616048 kubelet[3516]: I0306 03:02:27.615908 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5534cd24-fe5f-4358-aa9f-f0d725aee83a-lib-modules\") pod \"kube-proxy-9b949\" (UID: \"5534cd24-fe5f-4358-aa9f-f0d725aee83a\") " pod="kube-system/kube-proxy-9b949"
Mar 6 03:02:27.616048 kubelet[3516]: I0306 03:02:27.615952 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5534cd24-fe5f-4358-aa9f-f0d725aee83a-xtables-lock\") pod \"kube-proxy-9b949\" (UID: \"5534cd24-fe5f-4358-aa9f-f0d725aee83a\") " pod="kube-system/kube-proxy-9b949"
Mar 6 03:02:27.616048 kubelet[3516]: I0306 03:02:27.615980 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkjdl\" (UniqueName: \"kubernetes.io/projected/5534cd24-fe5f-4358-aa9f-f0d725aee83a-kube-api-access-hkjdl\") pod \"kube-proxy-9b949\" (UID: \"5534cd24-fe5f-4358-aa9f-f0d725aee83a\") " pod="kube-system/kube-proxy-9b949"
Mar 6 03:02:27.797618 systemd[1]: Created slice kubepods-besteffort-pod5bf10edb_fa26_401a_b3a9_f4fc5d38aadd.slice - libcontainer container kubepods-besteffort-pod5bf10edb_fa26_401a_b3a9_f4fc5d38aadd.slice.
Mar 6 03:02:27.818728 kubelet[3516]: I0306 03:02:27.818660 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5bf10edb-fa26-401a-b3a9-f4fc5d38aadd-var-lib-calico\") pod \"tigera-operator-5588576f44-pwn58\" (UID: \"5bf10edb-fa26-401a-b3a9-f4fc5d38aadd\") " pod="tigera-operator/tigera-operator-5588576f44-pwn58"
Mar 6 03:02:27.818728 kubelet[3516]: I0306 03:02:27.818730 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6hll\" (UniqueName: \"kubernetes.io/projected/5bf10edb-fa26-401a-b3a9-f4fc5d38aadd-kube-api-access-j6hll\") pod \"tigera-operator-5588576f44-pwn58\" (UID: \"5bf10edb-fa26-401a-b3a9-f4fc5d38aadd\") " pod="tigera-operator/tigera-operator-5588576f44-pwn58"
Mar 6 03:02:27.883391 containerd[1974]: time="2026-03-06T03:02:27.883342601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9b949,Uid:5534cd24-fe5f-4358-aa9f-f0d725aee83a,Namespace:kube-system,Attempt:0,}"
Mar 6 03:02:27.940530 containerd[1974]: time="2026-03-06T03:02:27.940471798Z" level=info msg="connecting to shim f8f25434c0fc88e6b48dcfeb70a18c2e41a8bda49fb0a3d4faa8b48ce78d4efe" address="unix:///run/containerd/s/53b9bb4d5764021d7e90a62ad833e94d964fe2f5c8c29e188e5b78d96bd021e0" namespace=k8s.io protocol=ttrpc version=3
Mar 6 03:02:27.972280 systemd[1]: Started cri-containerd-f8f25434c0fc88e6b48dcfeb70a18c2e41a8bda49fb0a3d4faa8b48ce78d4efe.scope - libcontainer container f8f25434c0fc88e6b48dcfeb70a18c2e41a8bda49fb0a3d4faa8b48ce78d4efe.
Mar 6 03:02:28.023390 containerd[1974]: time="2026-03-06T03:02:28.023334137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9b949,Uid:5534cd24-fe5f-4358-aa9f-f0d725aee83a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8f25434c0fc88e6b48dcfeb70a18c2e41a8bda49fb0a3d4faa8b48ce78d4efe\""
Mar 6 03:02:28.031492 containerd[1974]: time="2026-03-06T03:02:28.031434535Z" level=info msg="CreateContainer within sandbox \"f8f25434c0fc88e6b48dcfeb70a18c2e41a8bda49fb0a3d4faa8b48ce78d4efe\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 6 03:02:28.045128 containerd[1974]: time="2026-03-06T03:02:28.042196972Z" level=info msg="Container ce89d3a9c96467b23c0561c67ca5bd1a6fc08aee5d91b68af5df2623c9862112: CDI devices from CRI Config.CDIDevices: []"
Mar 6 03:02:28.051700 containerd[1974]: time="2026-03-06T03:02:28.051652710Z" level=info msg="CreateContainer within sandbox \"f8f25434c0fc88e6b48dcfeb70a18c2e41a8bda49fb0a3d4faa8b48ce78d4efe\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ce89d3a9c96467b23c0561c67ca5bd1a6fc08aee5d91b68af5df2623c9862112\""
Mar 6 03:02:28.053658 containerd[1974]: time="2026-03-06T03:02:28.052389191Z" level=info msg="StartContainer for \"ce89d3a9c96467b23c0561c67ca5bd1a6fc08aee5d91b68af5df2623c9862112\""
Mar 6 03:02:28.054492 containerd[1974]: time="2026-03-06T03:02:28.054457498Z" level=info msg="connecting to shim ce89d3a9c96467b23c0561c67ca5bd1a6fc08aee5d91b68af5df2623c9862112" address="unix:///run/containerd/s/53b9bb4d5764021d7e90a62ad833e94d964fe2f5c8c29e188e5b78d96bd021e0" protocol=ttrpc version=3
Mar 6 03:02:28.078336 systemd[1]: Started cri-containerd-ce89d3a9c96467b23c0561c67ca5bd1a6fc08aee5d91b68af5df2623c9862112.scope - libcontainer container ce89d3a9c96467b23c0561c67ca5bd1a6fc08aee5d91b68af5df2623c9862112.
Mar 6 03:02:28.105318 containerd[1974]: time="2026-03-06T03:02:28.105255482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-pwn58,Uid:5bf10edb-fa26-401a-b3a9-f4fc5d38aadd,Namespace:tigera-operator,Attempt:0,}"
Mar 6 03:02:28.135450 containerd[1974]: time="2026-03-06T03:02:28.135337620Z" level=info msg="connecting to shim dc447761e082af1b70918c38c1020add763f1e8efb6546d8b0e8db7a7853cda1" address="unix:///run/containerd/s/6d734ab45c3e833e3b88a265ae6ecff0d9f5330835e62fc78d64a890266bdac5" namespace=k8s.io protocol=ttrpc version=3
Mar 6 03:02:28.185840 systemd[1]: Started cri-containerd-dc447761e082af1b70918c38c1020add763f1e8efb6546d8b0e8db7a7853cda1.scope - libcontainer container dc447761e082af1b70918c38c1020add763f1e8efb6546d8b0e8db7a7853cda1.
Mar 6 03:02:28.194351 containerd[1974]: time="2026-03-06T03:02:28.194058389Z" level=info msg="StartContainer for \"ce89d3a9c96467b23c0561c67ca5bd1a6fc08aee5d91b68af5df2623c9862112\" returns successfully"
Mar 6 03:02:28.269629 containerd[1974]: time="2026-03-06T03:02:28.269583368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-pwn58,Uid:5bf10edb-fa26-401a-b3a9-f4fc5d38aadd,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"dc447761e082af1b70918c38c1020add763f1e8efb6546d8b0e8db7a7853cda1\""
Mar 6 03:02:28.273516 containerd[1974]: time="2026-03-06T03:02:28.273481270Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Mar 6 03:02:28.948868 kubelet[3516]: I0306 03:02:28.948693 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9b949" podStartSLOduration=1.948670033 podStartE2EDuration="1.948670033s" podCreationTimestamp="2026-03-06 03:02:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 03:02:28.947981196 +0000 UTC m=+8.216047697" watchObservedRunningTime="2026-03-06 03:02:28.948670033 +0000 UTC m=+8.216736536"
Mar 6 03:02:29.552431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2203495712.mount: Deactivated successfully.
Mar 6 03:02:34.711644 containerd[1974]: time="2026-03-06T03:02:34.711587256Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 03:02:34.713355 containerd[1974]: time="2026-03-06T03:02:34.713296389Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Mar 6 03:02:34.715138 containerd[1974]: time="2026-03-06T03:02:34.715059623Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 03:02:34.718020 containerd[1974]: time="2026-03-06T03:02:34.717958675Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 03:02:34.718831 containerd[1974]: time="2026-03-06T03:02:34.718638996Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 6.444891464s"
Mar 6 03:02:34.718831 containerd[1974]: time="2026-03-06T03:02:34.718680231Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Mar 6 03:02:34.724773 containerd[1974]: time="2026-03-06T03:02:34.724723996Z" level=info msg="CreateContainer within sandbox \"dc447761e082af1b70918c38c1020add763f1e8efb6546d8b0e8db7a7853cda1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 6 03:02:34.754385 containerd[1974]: time="2026-03-06T03:02:34.754334013Z" level=info msg="Container e21d2735de35946808258c2572b0bc59f0c4e577fff268ce661169a6b6cbb1fc: CDI devices from CRI Config.CDIDevices: []"
Mar 6 03:02:34.761489 containerd[1974]: time="2026-03-06T03:02:34.761445439Z" level=info msg="CreateContainer within sandbox \"dc447761e082af1b70918c38c1020add763f1e8efb6546d8b0e8db7a7853cda1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e21d2735de35946808258c2572b0bc59f0c4e577fff268ce661169a6b6cbb1fc\""
Mar 6 03:02:34.763453 containerd[1974]: time="2026-03-06T03:02:34.763409754Z" level=info msg="StartContainer for \"e21d2735de35946808258c2572b0bc59f0c4e577fff268ce661169a6b6cbb1fc\""
Mar 6 03:02:34.764463 containerd[1974]: time="2026-03-06T03:02:34.764427330Z" level=info msg="connecting to shim e21d2735de35946808258c2572b0bc59f0c4e577fff268ce661169a6b6cbb1fc" address="unix:///run/containerd/s/6d734ab45c3e833e3b88a265ae6ecff0d9f5330835e62fc78d64a890266bdac5" protocol=ttrpc version=3
Mar 6 03:02:34.790310 systemd[1]: Started cri-containerd-e21d2735de35946808258c2572b0bc59f0c4e577fff268ce661169a6b6cbb1fc.scope - libcontainer container e21d2735de35946808258c2572b0bc59f0c4e577fff268ce661169a6b6cbb1fc.
Mar 6 03:02:34.827223 containerd[1974]: time="2026-03-06T03:02:34.827181619Z" level=info msg="StartContainer for \"e21d2735de35946808258c2572b0bc59f0c4e577fff268ce661169a6b6cbb1fc\" returns successfully"
Mar 6 03:02:34.962946 kubelet[3516]: I0306 03:02:34.962685 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-pwn58" podStartSLOduration=1.515233014 podStartE2EDuration="7.962551383s" podCreationTimestamp="2026-03-06 03:02:27 +0000 UTC" firstStartedPulling="2026-03-06 03:02:28.27257651 +0000 UTC m=+7.540643006" lastFinishedPulling="2026-03-06 03:02:34.719894886 +0000 UTC m=+13.987961375" observedRunningTime="2026-03-06 03:02:34.962415486 +0000 UTC m=+14.230481984" watchObservedRunningTime="2026-03-06 03:02:34.962551383 +0000 UTC m=+14.230617881"
Mar 6 03:02:41.944391 sudo[2363]: pam_unix(sudo:session): session closed for user root
Mar 6 03:02:42.026130 sshd[2362]: Connection closed by 68.220.241.50 port 38976
Mar 6 03:02:42.027272 sshd-session[2359]: pam_unix(sshd:session): session closed for user core
Mar 6 03:02:42.037060 systemd[1]: sshd@8-172.31.19.55:22-68.220.241.50:38976.service: Deactivated successfully.
Mar 6 03:02:42.043583 systemd[1]: session-9.scope: Deactivated successfully.
Mar 6 03:02:42.047511 systemd[1]: session-9.scope: Consumed 6.153s CPU time, 171M memory peak.
Mar 6 03:02:42.052423 systemd-logind[1959]: Session 9 logged out. Waiting for processes to exit.
Mar 6 03:02:42.057555 systemd-logind[1959]: Removed session 9.
Mar 6 03:02:43.038793 systemd[1]: Created slice kubepods-besteffort-pode5618422_7d1e_4a7a_8ec8_8a8eb21fa297.slice - libcontainer container kubepods-besteffort-pode5618422_7d1e_4a7a_8ec8_8a8eb21fa297.slice.
Mar 6 03:02:43.126335 kubelet[3516]: I0306 03:02:43.126139 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb8sw\" (UniqueName: \"kubernetes.io/projected/e5618422-7d1e-4a7a-8ec8-8a8eb21fa297-kube-api-access-fb8sw\") pod \"calico-typha-54ccc84cb6-z4vc8\" (UID: \"e5618422-7d1e-4a7a-8ec8-8a8eb21fa297\") " pod="calico-system/calico-typha-54ccc84cb6-z4vc8"
Mar 6 03:02:43.126335 kubelet[3516]: I0306 03:02:43.126205 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e5618422-7d1e-4a7a-8ec8-8a8eb21fa297-typha-certs\") pod \"calico-typha-54ccc84cb6-z4vc8\" (UID: \"e5618422-7d1e-4a7a-8ec8-8a8eb21fa297\") " pod="calico-system/calico-typha-54ccc84cb6-z4vc8"
Mar 6 03:02:43.126335 kubelet[3516]: I0306 03:02:43.126251 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5618422-7d1e-4a7a-8ec8-8a8eb21fa297-tigera-ca-bundle\") pod \"calico-typha-54ccc84cb6-z4vc8\" (UID: \"e5618422-7d1e-4a7a-8ec8-8a8eb21fa297\") " pod="calico-system/calico-typha-54ccc84cb6-z4vc8"
Mar 6 03:02:43.157163 systemd[1]: Created slice kubepods-besteffort-podbfbdded1_7152_4ed4_b36c_ce2380a29231.slice - libcontainer container kubepods-besteffort-podbfbdded1_7152_4ed4_b36c_ce2380a29231.slice.
Mar 6 03:02:43.227234 kubelet[3516]: I0306 03:02:43.227197 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bfbdded1-7152-4ed4-b36c-ce2380a29231-cni-log-dir\") pod \"calico-node-pb8t4\" (UID: \"bfbdded1-7152-4ed4-b36c-ce2380a29231\") " pod="calico-system/calico-node-pb8t4"
Mar 6 03:02:43.227501 kubelet[3516]: I0306 03:02:43.227245 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/bfbdded1-7152-4ed4-b36c-ce2380a29231-nodeproc\") pod \"calico-node-pb8t4\" (UID: \"bfbdded1-7152-4ed4-b36c-ce2380a29231\") " pod="calico-system/calico-node-pb8t4"
Mar 6 03:02:43.227501 kubelet[3516]: I0306 03:02:43.227270 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bfbdded1-7152-4ed4-b36c-ce2380a29231-xtables-lock\") pod \"calico-node-pb8t4\" (UID: \"bfbdded1-7152-4ed4-b36c-ce2380a29231\") " pod="calico-system/calico-node-pb8t4"
Mar 6 03:02:43.227501 kubelet[3516]: I0306 03:02:43.227292 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bfbdded1-7152-4ed4-b36c-ce2380a29231-cni-bin-dir\") pod \"calico-node-pb8t4\" (UID: \"bfbdded1-7152-4ed4-b36c-ce2380a29231\") " pod="calico-system/calico-node-pb8t4"
Mar 6 03:02:43.227501 kubelet[3516]: I0306 03:02:43.227311 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bfbdded1-7152-4ed4-b36c-ce2380a29231-cni-net-dir\") pod \"calico-node-pb8t4\" (UID: \"bfbdded1-7152-4ed4-b36c-ce2380a29231\") " pod="calico-system/calico-node-pb8t4"
Mar 6 03:02:43.227501 kubelet[3516]: I0306 03:02:43.227333 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bfbdded1-7152-4ed4-b36c-ce2380a29231-node-certs\") pod \"calico-node-pb8t4\" (UID: \"bfbdded1-7152-4ed4-b36c-ce2380a29231\") " pod="calico-system/calico-node-pb8t4"
Mar 6 03:02:43.227735 kubelet[3516]: I0306 03:02:43.227354 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bfbdded1-7152-4ed4-b36c-ce2380a29231-policysync\") pod \"calico-node-pb8t4\" (UID: \"bfbdded1-7152-4ed4-b36c-ce2380a29231\") " pod="calico-system/calico-node-pb8t4"
Mar 6 03:02:43.227735 kubelet[3516]: I0306 03:02:43.227398 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bfbdded1-7152-4ed4-b36c-ce2380a29231-var-lib-calico\") pod \"calico-node-pb8t4\" (UID: \"bfbdded1-7152-4ed4-b36c-ce2380a29231\") " pod="calico-system/calico-node-pb8t4"
Mar 6 03:02:43.227735 kubelet[3516]: I0306 03:02:43.227425 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bfbdded1-7152-4ed4-b36c-ce2380a29231-tigera-ca-bundle\") pod \"calico-node-pb8t4\" (UID: \"bfbdded1-7152-4ed4-b36c-ce2380a29231\") " pod="calico-system/calico-node-pb8t4"
Mar 6 03:02:43.227735 kubelet[3516]: I0306 03:02:43.227463 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlm6q\" (UniqueName: \"kubernetes.io/projected/bfbdded1-7152-4ed4-b36c-ce2380a29231-kube-api-access-jlm6q\") pod \"calico-node-pb8t4\" (UID: \"bfbdded1-7152-4ed4-b36c-ce2380a29231\") " pod="calico-system/calico-node-pb8t4"
Mar 6 03:02:43.227735 kubelet[3516]: I0306 03:02:43.227527 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bfbdded1-7152-4ed4-b36c-ce2380a29231-flexvol-driver-host\") pod \"calico-node-pb8t4\" (UID: \"bfbdded1-7152-4ed4-b36c-ce2380a29231\") " pod="calico-system/calico-node-pb8t4"
Mar 6 03:02:43.227934 kubelet[3516]: I0306 03:02:43.227552 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/bfbdded1-7152-4ed4-b36c-ce2380a29231-sys-fs\") pod \"calico-node-pb8t4\" (UID: \"bfbdded1-7152-4ed4-b36c-ce2380a29231\") " pod="calico-system/calico-node-pb8t4"
Mar 6 03:02:43.227934 kubelet[3516]: I0306 03:02:43.227588 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/bfbdded1-7152-4ed4-b36c-ce2380a29231-bpffs\") pod \"calico-node-pb8t4\" (UID: \"bfbdded1-7152-4ed4-b36c-ce2380a29231\") " pod="calico-system/calico-node-pb8t4"
Mar 6 03:02:43.227934 kubelet[3516]: I0306 03:02:43.227612 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bfbdded1-7152-4ed4-b36c-ce2380a29231-lib-modules\") pod \"calico-node-pb8t4\" (UID: \"bfbdded1-7152-4ed4-b36c-ce2380a29231\") " pod="calico-system/calico-node-pb8t4"
Mar 6 03:02:43.227934 kubelet[3516]: I0306 03:02:43.227639 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bfbdded1-7152-4ed4-b36c-ce2380a29231-var-run-calico\") pod \"calico-node-pb8t4\" (UID: \"bfbdded1-7152-4ed4-b36c-ce2380a29231\") " pod="calico-system/calico-node-pb8t4"
Mar 6 03:02:43.244127 kubelet[3516]: E0306 03:02:43.240813 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5wnzw" podUID="3b6ac159-4573-487d-821b-a2953dcf4b25"
Mar 6 03:02:43.330001 kubelet[3516]: I0306 03:02:43.328295 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3b6ac159-4573-487d-821b-a2953dcf4b25-registration-dir\") pod \"csi-node-driver-5wnzw\" (UID: \"3b6ac159-4573-487d-821b-a2953dcf4b25\") " pod="calico-system/csi-node-driver-5wnzw"
Mar 6 03:02:43.330001 kubelet[3516]: I0306 03:02:43.328389 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3b6ac159-4573-487d-821b-a2953dcf4b25-socket-dir\") pod \"csi-node-driver-5wnzw\" (UID: \"3b6ac159-4573-487d-821b-a2953dcf4b25\") " pod="calico-system/csi-node-driver-5wnzw"
Mar 6 03:02:43.330001 kubelet[3516]: I0306 03:02:43.328440 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3b6ac159-4573-487d-821b-a2953dcf4b25-kubelet-dir\") pod \"csi-node-driver-5wnzw\" (UID: \"3b6ac159-4573-487d-821b-a2953dcf4b25\") " pod="calico-system/csi-node-driver-5wnzw"
Mar 6 03:02:43.330001 kubelet[3516]: I0306 03:02:43.328601 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86k5z\" (UniqueName: \"kubernetes.io/projected/3b6ac159-4573-487d-821b-a2953dcf4b25-kube-api-access-86k5z\") pod \"csi-node-driver-5wnzw\" (UID: \"3b6ac159-4573-487d-821b-a2953dcf4b25\") " pod="calico-system/csi-node-driver-5wnzw"
Mar 6 03:02:43.330001 kubelet[3516]: I0306 03:02:43.328721 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3b6ac159-4573-487d-821b-a2953dcf4b25-varrun\") pod \"csi-node-driver-5wnzw\" (UID: \"3b6ac159-4573-487d-821b-a2953dcf4b25\") " pod="calico-system/csi-node-driver-5wnzw"
Mar 6 03:02:43.341233 kubelet[3516]: E0306 03:02:43.341205 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 6 03:02:43.341398 kubelet[3516]: W0306 03:02:43.341382 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 6 03:02:43.344184 kubelet[3516]: E0306 03:02:43.344160 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 6 03:02:43.354056 kubelet[3516]: E0306 03:02:43.353212 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 6 03:02:43.354056 kubelet[3516]: W0306 03:02:43.353240 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 6 03:02:43.354056 kubelet[3516]: E0306 03:02:43.353264 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 6 03:02:43.359063 containerd[1974]: time="2026-03-06T03:02:43.359019156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54ccc84cb6-z4vc8,Uid:e5618422-7d1e-4a7a-8ec8-8a8eb21fa297,Namespace:calico-system,Attempt:0,}"
Mar 6 03:02:43.386127 containerd[1974]: time="2026-03-06T03:02:43.386055103Z" level=info msg="connecting to shim e7eda51e7f6bdc3b1a58f344d3c989fdc4bccc13804be03a96baf7111c796d64" address="unix:///run/containerd/s/628917a599ae735e62000fb7789b024b183fb05bde04b5f5c67cd2f9962b7c19" namespace=k8s.io protocol=ttrpc version=3
Mar 6 03:02:43.415327 systemd[1]: Started cri-containerd-e7eda51e7f6bdc3b1a58f344d3c989fdc4bccc13804be03a96baf7111c796d64.scope - libcontainer container e7eda51e7f6bdc3b1a58f344d3c989fdc4bccc13804be03a96baf7111c796d64.
Mar 6 03:02:43.429529 kubelet[3516]: E0306 03:02:43.429494 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 6 03:02:43.429529 kubelet[3516]: W0306 03:02:43.429526 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 6 03:02:43.429750 kubelet[3516]: E0306 03:02:43.429548 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:43.429993 kubelet[3516]: E0306 03:02:43.429824 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:43.429993 kubelet[3516]: W0306 03:02:43.429837 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:43.429993 kubelet[3516]: E0306 03:02:43.429851 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:43.430517 kubelet[3516]: E0306 03:02:43.430119 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:43.430517 kubelet[3516]: W0306 03:02:43.430130 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:43.430517 kubelet[3516]: E0306 03:02:43.430143 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:43.430517 kubelet[3516]: E0306 03:02:43.430418 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:43.430517 kubelet[3516]: W0306 03:02:43.430447 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:43.430517 kubelet[3516]: E0306 03:02:43.430459 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:43.431360 kubelet[3516]: E0306 03:02:43.430765 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:43.431360 kubelet[3516]: W0306 03:02:43.430775 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:43.431360 kubelet[3516]: E0306 03:02:43.430788 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:43.431360 kubelet[3516]: E0306 03:02:43.431010 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:43.431360 kubelet[3516]: W0306 03:02:43.431020 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:43.431360 kubelet[3516]: E0306 03:02:43.431032 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:43.431360 kubelet[3516]: E0306 03:02:43.431310 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:43.431360 kubelet[3516]: W0306 03:02:43.431320 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:43.431360 kubelet[3516]: E0306 03:02:43.431335 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:43.431981 kubelet[3516]: E0306 03:02:43.431667 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:43.431981 kubelet[3516]: W0306 03:02:43.431677 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:43.431981 kubelet[3516]: E0306 03:02:43.431690 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:43.432316 kubelet[3516]: E0306 03:02:43.432293 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:43.432316 kubelet[3516]: W0306 03:02:43.432309 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:43.432685 kubelet[3516]: E0306 03:02:43.432323 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:43.432777 kubelet[3516]: E0306 03:02:43.432688 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:43.432777 kubelet[3516]: W0306 03:02:43.432700 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:43.432777 kubelet[3516]: E0306 03:02:43.432713 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:43.433006 kubelet[3516]: E0306 03:02:43.432964 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:43.433006 kubelet[3516]: W0306 03:02:43.432975 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:43.433006 kubelet[3516]: E0306 03:02:43.432987 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:43.433335 kubelet[3516]: E0306 03:02:43.433241 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:43.433335 kubelet[3516]: W0306 03:02:43.433251 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:43.433335 kubelet[3516]: E0306 03:02:43.433263 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:43.433728 kubelet[3516]: E0306 03:02:43.433558 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:43.433728 kubelet[3516]: W0306 03:02:43.433569 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:43.433728 kubelet[3516]: E0306 03:02:43.433582 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:43.434054 kubelet[3516]: E0306 03:02:43.434024 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:43.434054 kubelet[3516]: W0306 03:02:43.434036 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:43.434054 kubelet[3516]: E0306 03:02:43.434049 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:43.435252 kubelet[3516]: E0306 03:02:43.435221 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:43.435371 kubelet[3516]: W0306 03:02:43.435324 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:43.435371 kubelet[3516]: E0306 03:02:43.435344 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:43.435779 kubelet[3516]: E0306 03:02:43.435737 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:43.435779 kubelet[3516]: W0306 03:02:43.435751 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:43.435779 kubelet[3516]: E0306 03:02:43.435765 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:43.437074 kubelet[3516]: E0306 03:02:43.436864 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:43.437074 kubelet[3516]: W0306 03:02:43.436878 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:43.437074 kubelet[3516]: E0306 03:02:43.436892 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:43.437618 kubelet[3516]: E0306 03:02:43.437296 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:43.437618 kubelet[3516]: W0306 03:02:43.437309 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:43.437618 kubelet[3516]: E0306 03:02:43.437323 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:43.437618 kubelet[3516]: E0306 03:02:43.437509 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:43.437618 kubelet[3516]: W0306 03:02:43.437518 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:43.437618 kubelet[3516]: E0306 03:02:43.437530 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:43.439797 kubelet[3516]: E0306 03:02:43.438132 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:43.439797 kubelet[3516]: W0306 03:02:43.438146 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:43.439797 kubelet[3516]: E0306 03:02:43.438159 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:43.439797 kubelet[3516]: E0306 03:02:43.438384 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:43.439797 kubelet[3516]: W0306 03:02:43.438395 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:43.439797 kubelet[3516]: E0306 03:02:43.438407 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:43.441746 kubelet[3516]: E0306 03:02:43.440786 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:43.441872 kubelet[3516]: W0306 03:02:43.441856 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:43.441975 kubelet[3516]: E0306 03:02:43.441962 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:43.442660 kubelet[3516]: E0306 03:02:43.442646 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:43.442979 kubelet[3516]: W0306 03:02:43.442885 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:43.442979 kubelet[3516]: E0306 03:02:43.442908 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:43.444163 kubelet[3516]: E0306 03:02:43.443863 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:43.446150 kubelet[3516]: W0306 03:02:43.444249 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:43.446150 kubelet[3516]: E0306 03:02:43.444268 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:43.448303 kubelet[3516]: E0306 03:02:43.448285 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:43.448377 kubelet[3516]: W0306 03:02:43.448303 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:43.448377 kubelet[3516]: E0306 03:02:43.448319 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:43.458841 kubelet[3516]: E0306 03:02:43.458621 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:43.458841 kubelet[3516]: W0306 03:02:43.458643 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:43.458841 kubelet[3516]: E0306 03:02:43.458665 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:43.476412 containerd[1974]: time="2026-03-06T03:02:43.473091540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pb8t4,Uid:bfbdded1-7152-4ed4-b36c-ce2380a29231,Namespace:calico-system,Attempt:0,}" Mar 6 03:02:43.510679 containerd[1974]: time="2026-03-06T03:02:43.510641320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54ccc84cb6-z4vc8,Uid:e5618422-7d1e-4a7a-8ec8-8a8eb21fa297,Namespace:calico-system,Attempt:0,} returns sandbox id \"e7eda51e7f6bdc3b1a58f344d3c989fdc4bccc13804be03a96baf7111c796d64\"" Mar 6 03:02:43.514449 containerd[1974]: time="2026-03-06T03:02:43.514369652Z" level=info msg="connecting to shim 75ada0cf6e77cb7032a3d0fa261d00e10a48004db3805cc353eefd723b4bb707" address="unix:///run/containerd/s/559948fd284016f3220100263c325dc634616fff57ec5819fcc67506b1125bd7" namespace=k8s.io protocol=ttrpc version=3 Mar 6 03:02:43.514723 containerd[1974]: time="2026-03-06T03:02:43.514647136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 6 03:02:43.553429 systemd[1]: Started cri-containerd-75ada0cf6e77cb7032a3d0fa261d00e10a48004db3805cc353eefd723b4bb707.scope - libcontainer container 75ada0cf6e77cb7032a3d0fa261d00e10a48004db3805cc353eefd723b4bb707. 
Mar 6 03:02:43.588323 containerd[1974]: time="2026-03-06T03:02:43.587263863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pb8t4,Uid:bfbdded1-7152-4ed4-b36c-ce2380a29231,Namespace:calico-system,Attempt:0,} returns sandbox id \"75ada0cf6e77cb7032a3d0fa261d00e10a48004db3805cc353eefd723b4bb707\"" Mar 6 03:02:44.875930 kubelet[3516]: E0306 03:02:44.874135 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5wnzw" podUID="3b6ac159-4573-487d-821b-a2953dcf4b25" Mar 6 03:02:46.874817 kubelet[3516]: E0306 03:02:46.874751 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5wnzw" podUID="3b6ac159-4573-487d-821b-a2953dcf4b25" Mar 6 03:02:47.482002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1688100606.mount: Deactivated successfully. 
Mar 6 03:02:48.875705 kubelet[3516]: E0306 03:02:48.875332 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5wnzw" podUID="3b6ac159-4573-487d-821b-a2953dcf4b25" Mar 6 03:02:49.049270 containerd[1974]: time="2026-03-06T03:02:49.049217281Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:49.072081 containerd[1974]: time="2026-03-06T03:02:49.051505571Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Mar 6 03:02:49.072081 containerd[1974]: time="2026-03-06T03:02:49.054364958Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:49.074422 containerd[1974]: time="2026-03-06T03:02:49.058356414Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 5.543674476s" Mar 6 03:02:49.074422 containerd[1974]: time="2026-03-06T03:02:49.073208459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 6 03:02:49.081828 containerd[1974]: time="2026-03-06T03:02:49.081786032Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Mar 6 03:02:49.083276 containerd[1974]: time="2026-03-06T03:02:49.083242760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 6 03:02:49.110660 containerd[1974]: time="2026-03-06T03:02:49.110619220Z" level=info msg="CreateContainer within sandbox \"e7eda51e7f6bdc3b1a58f344d3c989fdc4bccc13804be03a96baf7111c796d64\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 6 03:02:49.120244 containerd[1974]: time="2026-03-06T03:02:49.119051691Z" level=info msg="Container 9b9cd3c75cc656ae4a22b79b1d88a1d101a4c048d641682c91d6168baa3c8ce3: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:02:49.132066 containerd[1974]: time="2026-03-06T03:02:49.131952848Z" level=info msg="CreateContainer within sandbox \"e7eda51e7f6bdc3b1a58f344d3c989fdc4bccc13804be03a96baf7111c796d64\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9b9cd3c75cc656ae4a22b79b1d88a1d101a4c048d641682c91d6168baa3c8ce3\"" Mar 6 03:02:49.132910 containerd[1974]: time="2026-03-06T03:02:49.132885206Z" level=info msg="StartContainer for \"9b9cd3c75cc656ae4a22b79b1d88a1d101a4c048d641682c91d6168baa3c8ce3\"" Mar 6 03:02:49.135446 containerd[1974]: time="2026-03-06T03:02:49.135408152Z" level=info msg="connecting to shim 9b9cd3c75cc656ae4a22b79b1d88a1d101a4c048d641682c91d6168baa3c8ce3" address="unix:///run/containerd/s/628917a599ae735e62000fb7789b024b183fb05bde04b5f5c67cd2f9962b7c19" protocol=ttrpc version=3 Mar 6 03:02:49.164354 systemd[1]: Started cri-containerd-9b9cd3c75cc656ae4a22b79b1d88a1d101a4c048d641682c91d6168baa3c8ce3.scope - libcontainer container 9b9cd3c75cc656ae4a22b79b1d88a1d101a4c048d641682c91d6168baa3c8ce3. 
Mar 6 03:02:49.233145 containerd[1974]: time="2026-03-06T03:02:49.233072486Z" level=info msg="StartContainer for \"9b9cd3c75cc656ae4a22b79b1d88a1d101a4c048d641682c91d6168baa3c8ce3\" returns successfully" Mar 6 03:02:50.057801 kubelet[3516]: E0306 03:02:50.057767 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:50.057801 kubelet[3516]: W0306 03:02:50.057792 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:50.059293 kubelet[3516]: E0306 03:02:50.059255 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:50.059611 kubelet[3516]: E0306 03:02:50.059591 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:50.059611 kubelet[3516]: W0306 03:02:50.059609 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:50.059763 kubelet[3516]: E0306 03:02:50.059628 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:50.060349 kubelet[3516]: E0306 03:02:50.060182 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:50.060349 kubelet[3516]: W0306 03:02:50.060197 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:50.060349 kubelet[3516]: E0306 03:02:50.060214 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:50.060759 kubelet[3516]: E0306 03:02:50.060733 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:50.060853 kubelet[3516]: W0306 03:02:50.060827 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:50.060853 kubelet[3516]: E0306 03:02:50.060851 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:50.061454 kubelet[3516]: E0306 03:02:50.061433 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:50.061454 kubelet[3516]: W0306 03:02:50.061448 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:50.061603 kubelet[3516]: E0306 03:02:50.061465 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:50.061652 kubelet[3516]: E0306 03:02:50.061641 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:50.061698 kubelet[3516]: W0306 03:02:50.061650 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:50.061698 kubelet[3516]: E0306 03:02:50.061663 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:50.061851 kubelet[3516]: E0306 03:02:50.061827 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:50.061851 kubelet[3516]: W0306 03:02:50.061840 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:50.062009 kubelet[3516]: E0306 03:02:50.061852 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:50.062075 kubelet[3516]: E0306 03:02:50.062029 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:50.062075 kubelet[3516]: W0306 03:02:50.062039 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:50.062075 kubelet[3516]: E0306 03:02:50.062052 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:50.062326 kubelet[3516]: E0306 03:02:50.062279 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:50.062326 kubelet[3516]: W0306 03:02:50.062290 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:50.062326 kubelet[3516]: E0306 03:02:50.062305 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:50.063240 kubelet[3516]: E0306 03:02:50.063208 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:50.063240 kubelet[3516]: W0306 03:02:50.063225 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:50.063240 kubelet[3516]: E0306 03:02:50.063239 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:50.063463 kubelet[3516]: E0306 03:02:50.063442 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:50.063463 kubelet[3516]: W0306 03:02:50.063458 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:50.063682 kubelet[3516]: E0306 03:02:50.063475 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:50.063682 kubelet[3516]: E0306 03:02:50.063676 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:50.063803 kubelet[3516]: W0306 03:02:50.063711 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:50.063803 kubelet[3516]: E0306 03:02:50.063725 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:50.064786 kubelet[3516]: E0306 03:02:50.064764 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:50.064786 kubelet[3516]: W0306 03:02:50.064783 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:50.064940 kubelet[3516]: E0306 03:02:50.064799 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:50.065400 kubelet[3516]: E0306 03:02:50.065368 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:50.065400 kubelet[3516]: W0306 03:02:50.065383 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:50.065400 kubelet[3516]: E0306 03:02:50.065397 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:50.065639 kubelet[3516]: E0306 03:02:50.065590 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:50.065639 kubelet[3516]: W0306 03:02:50.065601 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:50.065639 kubelet[3516]: E0306 03:02:50.065614 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:50.088293 kubelet[3516]: E0306 03:02:50.088250 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:50.088293 kubelet[3516]: W0306 03:02:50.088277 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:50.088857 kubelet[3516]: E0306 03:02:50.088302 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:50.088857 kubelet[3516]: E0306 03:02:50.088842 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:50.088857 kubelet[3516]: W0306 03:02:50.088857 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:50.089183 kubelet[3516]: E0306 03:02:50.088872 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:50.089183 kubelet[3516]: E0306 03:02:50.089175 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:50.089278 kubelet[3516]: W0306 03:02:50.089187 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:50.089278 kubelet[3516]: E0306 03:02:50.089205 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:50.089487 kubelet[3516]: E0306 03:02:50.089466 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:50.089487 kubelet[3516]: W0306 03:02:50.089483 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:50.089581 kubelet[3516]: E0306 03:02:50.089497 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:50.089755 kubelet[3516]: E0306 03:02:50.089737 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:50.089755 kubelet[3516]: W0306 03:02:50.089751 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:50.089867 kubelet[3516]: E0306 03:02:50.089764 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:50.090059 kubelet[3516]: E0306 03:02:50.090042 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:50.090059 kubelet[3516]: W0306 03:02:50.090056 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:50.090194 kubelet[3516]: E0306 03:02:50.090069 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:50.090391 kubelet[3516]: E0306 03:02:50.090369 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:50.090705 kubelet[3516]: W0306 03:02:50.090677 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:50.090705 kubelet[3516]: E0306 03:02:50.090701 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:50.090975 kubelet[3516]: E0306 03:02:50.090957 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:50.090975 kubelet[3516]: W0306 03:02:50.090972 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:50.091080 kubelet[3516]: E0306 03:02:50.090985 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:50.091276 kubelet[3516]: E0306 03:02:50.091259 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:50.091276 kubelet[3516]: W0306 03:02:50.091272 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:50.091389 kubelet[3516]: E0306 03:02:50.091285 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:50.091557 kubelet[3516]: E0306 03:02:50.091541 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:50.091557 kubelet[3516]: W0306 03:02:50.091554 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:50.091653 kubelet[3516]: E0306 03:02:50.091566 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:50.091826 kubelet[3516]: E0306 03:02:50.091810 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:50.091826 kubelet[3516]: W0306 03:02:50.091823 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:50.091930 kubelet[3516]: E0306 03:02:50.091836 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:50.092088 kubelet[3516]: E0306 03:02:50.092071 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:50.092088 kubelet[3516]: W0306 03:02:50.092085 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:50.092217 kubelet[3516]: E0306 03:02:50.092115 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:50.092390 kubelet[3516]: E0306 03:02:50.092373 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:50.092390 kubelet[3516]: W0306 03:02:50.092388 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:50.092607 kubelet[3516]: E0306 03:02:50.092401 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:50.092775 kubelet[3516]: E0306 03:02:50.092754 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:50.092775 kubelet[3516]: W0306 03:02:50.092768 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:50.092872 kubelet[3516]: E0306 03:02:50.092782 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:50.093095 kubelet[3516]: E0306 03:02:50.093078 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:50.093095 kubelet[3516]: W0306 03:02:50.093092 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:50.093249 kubelet[3516]: E0306 03:02:50.093131 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:50.093425 kubelet[3516]: E0306 03:02:50.093408 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:50.093425 kubelet[3516]: W0306 03:02:50.093422 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:50.093517 kubelet[3516]: E0306 03:02:50.093434 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:50.093708 kubelet[3516]: E0306 03:02:50.093690 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:50.093708 kubelet[3516]: W0306 03:02:50.093703 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:50.093822 kubelet[3516]: E0306 03:02:50.093715 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:50.094072 kubelet[3516]: E0306 03:02:50.094056 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:50.094072 kubelet[3516]: W0306 03:02:50.094069 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:50.094200 kubelet[3516]: E0306 03:02:50.094082 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:50.347390 containerd[1974]: time="2026-03-06T03:02:50.346726412Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:50.348385 containerd[1974]: time="2026-03-06T03:02:50.348352320Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 6 03:02:50.351774 containerd[1974]: time="2026-03-06T03:02:50.350548323Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:50.356161 containerd[1974]: time="2026-03-06T03:02:50.356073807Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.27279085s" Mar 6 03:02:50.358473 containerd[1974]: time="2026-03-06T03:02:50.357876947Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 6 03:02:50.360123 containerd[1974]: time="2026-03-06T03:02:50.360077813Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:50.366228 containerd[1974]: time="2026-03-06T03:02:50.366171352Z" level=info msg="CreateContainer within sandbox \"75ada0cf6e77cb7032a3d0fa261d00e10a48004db3805cc353eefd723b4bb707\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 6 03:02:50.385710 containerd[1974]: time="2026-03-06T03:02:50.383284329Z" level=info msg="Container df8859640d5be38efa0f9e761234f81e06005c1f9682b0ce92685ed035dfb4ce: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:02:50.406954 containerd[1974]: time="2026-03-06T03:02:50.406904828Z" level=info msg="CreateContainer within sandbox \"75ada0cf6e77cb7032a3d0fa261d00e10a48004db3805cc353eefd723b4bb707\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"df8859640d5be38efa0f9e761234f81e06005c1f9682b0ce92685ed035dfb4ce\"" Mar 6 03:02:50.407721 containerd[1974]: time="2026-03-06T03:02:50.407692852Z" level=info msg="StartContainer for \"df8859640d5be38efa0f9e761234f81e06005c1f9682b0ce92685ed035dfb4ce\"" Mar 6 03:02:50.410120 containerd[1974]: time="2026-03-06T03:02:50.410048789Z" level=info msg="connecting to shim df8859640d5be38efa0f9e761234f81e06005c1f9682b0ce92685ed035dfb4ce" address="unix:///run/containerd/s/559948fd284016f3220100263c325dc634616fff57ec5819fcc67506b1125bd7" protocol=ttrpc version=3 Mar 6 03:02:50.440318 systemd[1]: Started cri-containerd-df8859640d5be38efa0f9e761234f81e06005c1f9682b0ce92685ed035dfb4ce.scope - libcontainer container df8859640d5be38efa0f9e761234f81e06005c1f9682b0ce92685ed035dfb4ce. 
Mar 6 03:02:50.535292 containerd[1974]: time="2026-03-06T03:02:50.535247037Z" level=info msg="StartContainer for \"df8859640d5be38efa0f9e761234f81e06005c1f9682b0ce92685ed035dfb4ce\" returns successfully" Mar 6 03:02:50.543408 systemd[1]: cri-containerd-df8859640d5be38efa0f9e761234f81e06005c1f9682b0ce92685ed035dfb4ce.scope: Deactivated successfully. Mar 6 03:02:50.576628 containerd[1974]: time="2026-03-06T03:02:50.576574776Z" level=info msg="received container exit event container_id:\"df8859640d5be38efa0f9e761234f81e06005c1f9682b0ce92685ed035dfb4ce\" id:\"df8859640d5be38efa0f9e761234f81e06005c1f9682b0ce92685ed035dfb4ce\" pid:4142 exited_at:{seconds:1772766170 nanos:548991835}" Mar 6 03:02:50.611988 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df8859640d5be38efa0f9e761234f81e06005c1f9682b0ce92685ed035dfb4ce-rootfs.mount: Deactivated successfully. Mar 6 03:02:50.875062 kubelet[3516]: E0306 03:02:50.874591 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5wnzw" podUID="3b6ac159-4573-487d-821b-a2953dcf4b25" Mar 6 03:02:51.058767 kubelet[3516]: I0306 03:02:51.058614 3516 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 6 03:02:51.065862 containerd[1974]: time="2026-03-06T03:02:51.065559705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 6 03:02:51.143062 kubelet[3516]: I0306 03:02:51.142923 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-54ccc84cb6-z4vc8" podStartSLOduration=3.573933277 podStartE2EDuration="9.142905144s" podCreationTimestamp="2026-03-06 03:02:42 +0000 UTC" firstStartedPulling="2026-03-06 03:02:43.513458504 +0000 UTC m=+22.781524983" lastFinishedPulling="2026-03-06 03:02:49.082430375 +0000 UTC m=+28.350496850" 
observedRunningTime="2026-03-06 03:02:50.078083415 +0000 UTC m=+29.346149914" watchObservedRunningTime="2026-03-06 03:02:51.142905144 +0000 UTC m=+30.410971641" Mar 6 03:02:52.875040 kubelet[3516]: E0306 03:02:52.874480 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5wnzw" podUID="3b6ac159-4573-487d-821b-a2953dcf4b25" Mar 6 03:02:54.874562 kubelet[3516]: E0306 03:02:54.874511 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5wnzw" podUID="3b6ac159-4573-487d-821b-a2953dcf4b25" Mar 6 03:02:56.875223 kubelet[3516]: E0306 03:02:56.875171 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5wnzw" podUID="3b6ac159-4573-487d-821b-a2953dcf4b25" Mar 6 03:02:58.876137 kubelet[3516]: E0306 03:02:58.874680 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5wnzw" podUID="3b6ac159-4573-487d-821b-a2953dcf4b25" Mar 6 03:03:00.875497 kubelet[3516]: E0306 03:03:00.875455 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-5wnzw" podUID="3b6ac159-4573-487d-821b-a2953dcf4b25" Mar 6 03:03:02.875914 kubelet[3516]: E0306 03:03:02.875275 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5wnzw" podUID="3b6ac159-4573-487d-821b-a2953dcf4b25" Mar 6 03:03:03.735978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1223254961.mount: Deactivated successfully. Mar 6 03:03:03.791913 containerd[1974]: time="2026-03-06T03:03:03.791851229Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:03.794336 containerd[1974]: time="2026-03-06T03:03:03.794294299Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 6 03:03:03.796291 containerd[1974]: time="2026-03-06T03:03:03.796055042Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:03.799546 containerd[1974]: time="2026-03-06T03:03:03.799480809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:03.800209 containerd[1974]: time="2026-03-06T03:03:03.800095890Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 12.734483258s" Mar 6 
03:03:03.800209 containerd[1974]: time="2026-03-06T03:03:03.800152605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 6 03:03:03.807500 containerd[1974]: time="2026-03-06T03:03:03.807446971Z" level=info msg="CreateContainer within sandbox \"75ada0cf6e77cb7032a3d0fa261d00e10a48004db3805cc353eefd723b4bb707\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 6 03:03:03.861450 containerd[1974]: time="2026-03-06T03:03:03.861347845Z" level=info msg="Container ce49f99c993b547f5b7231b2872b2049d4402166273242fcd2f8779020d6675b: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:03:03.864144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount954678698.mount: Deactivated successfully. Mar 6 03:03:03.884716 containerd[1974]: time="2026-03-06T03:03:03.884652145Z" level=info msg="CreateContainer within sandbox \"75ada0cf6e77cb7032a3d0fa261d00e10a48004db3805cc353eefd723b4bb707\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"ce49f99c993b547f5b7231b2872b2049d4402166273242fcd2f8779020d6675b\"" Mar 6 03:03:03.885461 containerd[1974]: time="2026-03-06T03:03:03.885340838Z" level=info msg="StartContainer for \"ce49f99c993b547f5b7231b2872b2049d4402166273242fcd2f8779020d6675b\"" Mar 6 03:03:03.888911 containerd[1974]: time="2026-03-06T03:03:03.888320740Z" level=info msg="connecting to shim ce49f99c993b547f5b7231b2872b2049d4402166273242fcd2f8779020d6675b" address="unix:///run/containerd/s/559948fd284016f3220100263c325dc634616fff57ec5819fcc67506b1125bd7" protocol=ttrpc version=3 Mar 6 03:03:03.996499 systemd[1]: Started cri-containerd-ce49f99c993b547f5b7231b2872b2049d4402166273242fcd2f8779020d6675b.scope - libcontainer container ce49f99c993b547f5b7231b2872b2049d4402166273242fcd2f8779020d6675b. 
Mar 6 03:03:04.083517 containerd[1974]: time="2026-03-06T03:03:04.083424310Z" level=info msg="StartContainer for \"ce49f99c993b547f5b7231b2872b2049d4402166273242fcd2f8779020d6675b\" returns successfully" Mar 6 03:03:04.153331 systemd[1]: cri-containerd-ce49f99c993b547f5b7231b2872b2049d4402166273242fcd2f8779020d6675b.scope: Deactivated successfully. Mar 6 03:03:04.182154 containerd[1974]: time="2026-03-06T03:03:04.182075713Z" level=info msg="received container exit event container_id:\"ce49f99c993b547f5b7231b2872b2049d4402166273242fcd2f8779020d6675b\" id:\"ce49f99c993b547f5b7231b2872b2049d4402166273242fcd2f8779020d6675b\" pid:4199 exited_at:{seconds:1772766184 nanos:181843464}" Mar 6 03:03:04.734957 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce49f99c993b547f5b7231b2872b2049d4402166273242fcd2f8779020d6675b-rootfs.mount: Deactivated successfully. Mar 6 03:03:04.875787 kubelet[3516]: E0306 03:03:04.875444 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5wnzw" podUID="3b6ac159-4573-487d-821b-a2953dcf4b25" Mar 6 03:03:05.104022 containerd[1974]: time="2026-03-06T03:03:05.103883682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 6 03:03:06.875051 kubelet[3516]: E0306 03:03:06.874572 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5wnzw" podUID="3b6ac159-4573-487d-821b-a2953dcf4b25" Mar 6 03:03:08.876422 kubelet[3516]: E0306 03:03:08.875996 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5wnzw" podUID="3b6ac159-4573-487d-821b-a2953dcf4b25" Mar 6 03:03:10.254851 containerd[1974]: time="2026-03-06T03:03:10.254810154Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:10.256010 containerd[1974]: time="2026-03-06T03:03:10.255969387Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 6 03:03:10.256878 containerd[1974]: time="2026-03-06T03:03:10.256842818Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:10.260093 containerd[1974]: time="2026-03-06T03:03:10.260058923Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:10.261990 containerd[1974]: time="2026-03-06T03:03:10.261899069Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 5.157193554s" Mar 6 03:03:10.261990 containerd[1974]: time="2026-03-06T03:03:10.261937131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 6 03:03:10.268593 containerd[1974]: time="2026-03-06T03:03:10.268548574Z" level=info msg="CreateContainer within sandbox \"75ada0cf6e77cb7032a3d0fa261d00e10a48004db3805cc353eefd723b4bb707\" for 
container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 6 03:03:10.280479 containerd[1974]: time="2026-03-06T03:03:10.280429824Z" level=info msg="Container 34836b8725c12fb1ebb10fa528242d10a6eca7202f0d327faee849408e0e6a08: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:03:10.293132 containerd[1974]: time="2026-03-06T03:03:10.293054241Z" level=info msg="CreateContainer within sandbox \"75ada0cf6e77cb7032a3d0fa261d00e10a48004db3805cc353eefd723b4bb707\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"34836b8725c12fb1ebb10fa528242d10a6eca7202f0d327faee849408e0e6a08\"" Mar 6 03:03:10.294377 containerd[1974]: time="2026-03-06T03:03:10.293601255Z" level=info msg="StartContainer for \"34836b8725c12fb1ebb10fa528242d10a6eca7202f0d327faee849408e0e6a08\"" Mar 6 03:03:10.295946 containerd[1974]: time="2026-03-06T03:03:10.295872673Z" level=info msg="connecting to shim 34836b8725c12fb1ebb10fa528242d10a6eca7202f0d327faee849408e0e6a08" address="unix:///run/containerd/s/559948fd284016f3220100263c325dc634616fff57ec5819fcc67506b1125bd7" protocol=ttrpc version=3 Mar 6 03:03:10.334407 systemd[1]: Started cri-containerd-34836b8725c12fb1ebb10fa528242d10a6eca7202f0d327faee849408e0e6a08.scope - libcontainer container 34836b8725c12fb1ebb10fa528242d10a6eca7202f0d327faee849408e0e6a08. 
Mar 6 03:03:10.409555 containerd[1974]: time="2026-03-06T03:03:10.409515747Z" level=info msg="StartContainer for \"34836b8725c12fb1ebb10fa528242d10a6eca7202f0d327faee849408e0e6a08\" returns successfully" Mar 6 03:03:10.879686 kubelet[3516]: E0306 03:03:10.879610 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5wnzw" podUID="3b6ac159-4573-487d-821b-a2953dcf4b25" Mar 6 03:03:11.371592 systemd[1]: cri-containerd-34836b8725c12fb1ebb10fa528242d10a6eca7202f0d327faee849408e0e6a08.scope: Deactivated successfully. Mar 6 03:03:11.371939 systemd[1]: cri-containerd-34836b8725c12fb1ebb10fa528242d10a6eca7202f0d327faee849408e0e6a08.scope: Consumed 623ms CPU time, 168.3M memory peak, 4M read from disk, 177M written to disk. Mar 6 03:03:11.392981 containerd[1974]: time="2026-03-06T03:03:11.392929779Z" level=info msg="received container exit event container_id:\"34836b8725c12fb1ebb10fa528242d10a6eca7202f0d327faee849408e0e6a08\" id:\"34836b8725c12fb1ebb10fa528242d10a6eca7202f0d327faee849408e0e6a08\" pid:4265 exited_at:{seconds:1772766191 nanos:391780766}" Mar 6 03:03:11.434990 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34836b8725c12fb1ebb10fa528242d10a6eca7202f0d327faee849408e0e6a08-rootfs.mount: Deactivated successfully. Mar 6 03:03:11.464267 kubelet[3516]: I0306 03:03:11.463422 3516 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Mar 6 03:03:11.575705 systemd[1]: Created slice kubepods-besteffort-poda78dc96c_a4d4_4f59_b749_46bf525f56a0.slice - libcontainer container kubepods-besteffort-poda78dc96c_a4d4_4f59_b749_46bf525f56a0.slice. 
Mar 6 03:03:11.599158 systemd[1]: Created slice kubepods-besteffort-pod6ae00c22_7641_4163_a70e_2653ea8e078a.slice - libcontainer container kubepods-besteffort-pod6ae00c22_7641_4163_a70e_2653ea8e078a.slice. Mar 6 03:03:11.612193 systemd[1]: Created slice kubepods-besteffort-podefe7d89f_43e9_412f_b0f0_eaaf818145e0.slice - libcontainer container kubepods-besteffort-podefe7d89f_43e9_412f_b0f0_eaaf818145e0.slice. Mar 6 03:03:11.621511 systemd[1]: Created slice kubepods-besteffort-podebc5bd77_bfeb_4df8_acad_010087b23fac.slice - libcontainer container kubepods-besteffort-podebc5bd77_bfeb_4df8_acad_010087b23fac.slice. Mar 6 03:03:11.633306 systemd[1]: Created slice kubepods-besteffort-pod5334d146_111c_4115_be1f_bc5585aaa496.slice - libcontainer container kubepods-besteffort-pod5334d146_111c_4115_be1f_bc5585aaa496.slice. Mar 6 03:03:11.646512 systemd[1]: Created slice kubepods-burstable-podda0d6522_7f73_44d2_822c_74bfaeb1853a.slice - libcontainer container kubepods-burstable-podda0d6522_7f73_44d2_822c_74bfaeb1853a.slice. Mar 6 03:03:11.657056 systemd[1]: Created slice kubepods-burstable-podd7578741_0fdd_4a51_9663_c9a667059e00.slice - libcontainer container kubepods-burstable-podd7578741_0fdd_4a51_9663_c9a667059e00.slice. 
Mar 6 03:03:11.693123 kubelet[3516]: I0306 03:03:11.692611 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzbbc\" (UniqueName: \"kubernetes.io/projected/5334d146-111c-4115-be1f-bc5585aaa496-kube-api-access-nzbbc\") pod \"calico-apiserver-5989c68dd5-vzsrv\" (UID: \"5334d146-111c-4115-be1f-bc5585aaa496\") " pod="calico-system/calico-apiserver-5989c68dd5-vzsrv" Mar 6 03:03:11.693123 kubelet[3516]: I0306 03:03:11.692672 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da0d6522-7f73-44d2-822c-74bfaeb1853a-config-volume\") pod \"coredns-66bc5c9577-5ffpf\" (UID: \"da0d6522-7f73-44d2-822c-74bfaeb1853a\") " pod="kube-system/coredns-66bc5c9577-5ffpf" Mar 6 03:03:11.693123 kubelet[3516]: I0306 03:03:11.692828 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a78dc96c-a4d4-4f59-b749-46bf525f56a0-config\") pod \"goldmane-cccfbd5cf-5dnxg\" (UID: \"a78dc96c-a4d4-4f59-b749-46bf525f56a0\") " pod="calico-system/goldmane-cccfbd5cf-5dnxg" Mar 6 03:03:11.693123 kubelet[3516]: I0306 03:03:11.692857 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a78dc96c-a4d4-4f59-b749-46bf525f56a0-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-5dnxg\" (UID: \"a78dc96c-a4d4-4f59-b749-46bf525f56a0\") " pod="calico-system/goldmane-cccfbd5cf-5dnxg" Mar 6 03:03:11.693123 kubelet[3516]: I0306 03:03:11.692933 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efe7d89f-43e9-412f-b0f0-eaaf818145e0-tigera-ca-bundle\") pod \"calico-kube-controllers-f7cf46454-n5whp\" (UID: \"efe7d89f-43e9-412f-b0f0-eaaf818145e0\") " 
pod="calico-system/calico-kube-controllers-f7cf46454-n5whp" Mar 6 03:03:11.693501 kubelet[3516]: I0306 03:03:11.692988 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js2xd\" (UniqueName: \"kubernetes.io/projected/efe7d89f-43e9-412f-b0f0-eaaf818145e0-kube-api-access-js2xd\") pod \"calico-kube-controllers-f7cf46454-n5whp\" (UID: \"efe7d89f-43e9-412f-b0f0-eaaf818145e0\") " pod="calico-system/calico-kube-controllers-f7cf46454-n5whp" Mar 6 03:03:11.693501 kubelet[3516]: I0306 03:03:11.693011 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ebc5bd77-bfeb-4df8-acad-010087b23fac-whisker-backend-key-pair\") pod \"whisker-5686db6df6-khczm\" (UID: \"ebc5bd77-bfeb-4df8-acad-010087b23fac\") " pod="calico-system/whisker-5686db6df6-khczm" Mar 6 03:03:11.693501 kubelet[3516]: I0306 03:03:11.693074 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6ae00c22-7641-4163-a70e-2653ea8e078a-calico-apiserver-certs\") pod \"calico-apiserver-5989c68dd5-n5d4q\" (UID: \"6ae00c22-7641-4163-a70e-2653ea8e078a\") " pod="calico-system/calico-apiserver-5989c68dd5-n5d4q" Mar 6 03:03:11.693501 kubelet[3516]: I0306 03:03:11.693146 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a78dc96c-a4d4-4f59-b749-46bf525f56a0-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-5dnxg\" (UID: \"a78dc96c-a4d4-4f59-b749-46bf525f56a0\") " pod="calico-system/goldmane-cccfbd5cf-5dnxg" Mar 6 03:03:11.693501 kubelet[3516]: I0306 03:03:11.693168 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ebc5bd77-bfeb-4df8-acad-010087b23fac-whisker-ca-bundle\") pod \"whisker-5686db6df6-khczm\" (UID: \"ebc5bd77-bfeb-4df8-acad-010087b23fac\") " pod="calico-system/whisker-5686db6df6-khczm" Mar 6 03:03:11.693714 kubelet[3516]: I0306 03:03:11.693186 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cgz9\" (UniqueName: \"kubernetes.io/projected/ebc5bd77-bfeb-4df8-acad-010087b23fac-kube-api-access-9cgz9\") pod \"whisker-5686db6df6-khczm\" (UID: \"ebc5bd77-bfeb-4df8-acad-010087b23fac\") " pod="calico-system/whisker-5686db6df6-khczm" Mar 6 03:03:11.693714 kubelet[3516]: I0306 03:03:11.693244 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52dv6\" (UniqueName: \"kubernetes.io/projected/da0d6522-7f73-44d2-822c-74bfaeb1853a-kube-api-access-52dv6\") pod \"coredns-66bc5c9577-5ffpf\" (UID: \"da0d6522-7f73-44d2-822c-74bfaeb1853a\") " pod="kube-system/coredns-66bc5c9577-5ffpf" Mar 6 03:03:11.693714 kubelet[3516]: I0306 03:03:11.693306 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5334d146-111c-4115-be1f-bc5585aaa496-calico-apiserver-certs\") pod \"calico-apiserver-5989c68dd5-vzsrv\" (UID: \"5334d146-111c-4115-be1f-bc5585aaa496\") " pod="calico-system/calico-apiserver-5989c68dd5-vzsrv" Mar 6 03:03:11.693714 kubelet[3516]: I0306 03:03:11.693344 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl987\" (UniqueName: \"kubernetes.io/projected/a78dc96c-a4d4-4f59-b749-46bf525f56a0-kube-api-access-rl987\") pod \"goldmane-cccfbd5cf-5dnxg\" (UID: \"a78dc96c-a4d4-4f59-b749-46bf525f56a0\") " pod="calico-system/goldmane-cccfbd5cf-5dnxg" Mar 6 03:03:11.693714 kubelet[3516]: I0306 03:03:11.693404 3516 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p95p2\" (UniqueName: \"kubernetes.io/projected/6ae00c22-7641-4163-a70e-2653ea8e078a-kube-api-access-p95p2\") pod \"calico-apiserver-5989c68dd5-n5d4q\" (UID: \"6ae00c22-7641-4163-a70e-2653ea8e078a\") " pod="calico-system/calico-apiserver-5989c68dd5-n5d4q" Mar 6 03:03:11.693923 kubelet[3516]: I0306 03:03:11.693498 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ebc5bd77-bfeb-4df8-acad-010087b23fac-nginx-config\") pod \"whisker-5686db6df6-khczm\" (UID: \"ebc5bd77-bfeb-4df8-acad-010087b23fac\") " pod="calico-system/whisker-5686db6df6-khczm" Mar 6 03:03:11.693923 kubelet[3516]: I0306 03:03:11.693566 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7578741-0fdd-4a51-9663-c9a667059e00-config-volume\") pod \"coredns-66bc5c9577-pqjtt\" (UID: \"d7578741-0fdd-4a51-9663-c9a667059e00\") " pod="kube-system/coredns-66bc5c9577-pqjtt" Mar 6 03:03:11.693923 kubelet[3516]: I0306 03:03:11.693592 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kwsw\" (UniqueName: \"kubernetes.io/projected/d7578741-0fdd-4a51-9663-c9a667059e00-kube-api-access-9kwsw\") pod \"coredns-66bc5c9577-pqjtt\" (UID: \"d7578741-0fdd-4a51-9663-c9a667059e00\") " pod="kube-system/coredns-66bc5c9577-pqjtt" Mar 6 03:03:11.895039 containerd[1974]: time="2026-03-06T03:03:11.893941444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-5dnxg,Uid:a78dc96c-a4d4-4f59-b749-46bf525f56a0,Namespace:calico-system,Attempt:0,}" Mar 6 03:03:11.914035 containerd[1974]: time="2026-03-06T03:03:11.912888014Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5989c68dd5-n5d4q,Uid:6ae00c22-7641-4163-a70e-2653ea8e078a,Namespace:calico-system,Attempt:0,}" Mar 6 03:03:11.930747 containerd[1974]: time="2026-03-06T03:03:11.930698158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f7cf46454-n5whp,Uid:efe7d89f-43e9-412f-b0f0-eaaf818145e0,Namespace:calico-system,Attempt:0,}" Mar 6 03:03:11.933491 containerd[1974]: time="2026-03-06T03:03:11.933452236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5686db6df6-khczm,Uid:ebc5bd77-bfeb-4df8-acad-010087b23fac,Namespace:calico-system,Attempt:0,}" Mar 6 03:03:11.949162 containerd[1974]: time="2026-03-06T03:03:11.949075396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5989c68dd5-vzsrv,Uid:5334d146-111c-4115-be1f-bc5585aaa496,Namespace:calico-system,Attempt:0,}" Mar 6 03:03:11.970016 containerd[1974]: time="2026-03-06T03:03:11.969864197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pqjtt,Uid:d7578741-0fdd-4a51-9663-c9a667059e00,Namespace:kube-system,Attempt:0,}" Mar 6 03:03:11.975171 containerd[1974]: time="2026-03-06T03:03:11.975132327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5ffpf,Uid:da0d6522-7f73-44d2-822c-74bfaeb1853a,Namespace:kube-system,Attempt:0,}" Mar 6 03:03:12.160129 containerd[1974]: time="2026-03-06T03:03:12.159420373Z" level=info msg="CreateContainer within sandbox \"75ada0cf6e77cb7032a3d0fa261d00e10a48004db3805cc353eefd723b4bb707\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 6 03:03:12.210378 containerd[1974]: time="2026-03-06T03:03:12.210071267Z" level=info msg="Container cedf7bb5ac8a890817369ec7fb11d58c9cdb90611b790cd525195701f5be5d93: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:03:12.262403 containerd[1974]: time="2026-03-06T03:03:12.262353037Z" level=info msg="CreateContainer within sandbox 
\"75ada0cf6e77cb7032a3d0fa261d00e10a48004db3805cc353eefd723b4bb707\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"cedf7bb5ac8a890817369ec7fb11d58c9cdb90611b790cd525195701f5be5d93\"" Mar 6 03:03:12.271219 containerd[1974]: time="2026-03-06T03:03:12.271127705Z" level=info msg="StartContainer for \"cedf7bb5ac8a890817369ec7fb11d58c9cdb90611b790cd525195701f5be5d93\"" Mar 6 03:03:12.280791 containerd[1974]: time="2026-03-06T03:03:12.280720338Z" level=info msg="connecting to shim cedf7bb5ac8a890817369ec7fb11d58c9cdb90611b790cd525195701f5be5d93" address="unix:///run/containerd/s/559948fd284016f3220100263c325dc634616fff57ec5819fcc67506b1125bd7" protocol=ttrpc version=3 Mar 6 03:03:12.426570 systemd[1]: Started cri-containerd-cedf7bb5ac8a890817369ec7fb11d58c9cdb90611b790cd525195701f5be5d93.scope - libcontainer container cedf7bb5ac8a890817369ec7fb11d58c9cdb90611b790cd525195701f5be5d93. Mar 6 03:03:12.475464 containerd[1974]: time="2026-03-06T03:03:12.475401316Z" level=error msg="Failed to destroy network for sandbox \"07aba37d874193625b2c14902aa2a80393ba627e183d068422722937bf4974fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:12.485677 systemd[1]: run-netns-cni\x2df07ca4fd\x2dce12\x2d27ae\x2d8e2f\x2d4542e5ac992f.mount: Deactivated successfully. 
Mar 6 03:03:12.487683 containerd[1974]: time="2026-03-06T03:03:12.487267850Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5ffpf,Uid:da0d6522-7f73-44d2-822c-74bfaeb1853a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"07aba37d874193625b2c14902aa2a80393ba627e183d068422722937bf4974fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:12.490267 kubelet[3516]: E0306 03:03:12.488642 3516 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07aba37d874193625b2c14902aa2a80393ba627e183d068422722937bf4974fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:12.490267 kubelet[3516]: E0306 03:03:12.489833 3516 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07aba37d874193625b2c14902aa2a80393ba627e183d068422722937bf4974fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-5ffpf" Mar 6 03:03:12.490267 kubelet[3516]: E0306 03:03:12.490084 3516 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07aba37d874193625b2c14902aa2a80393ba627e183d068422722937bf4974fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-5ffpf" 
Mar 6 03:03:12.491841 kubelet[3516]: E0306 03:03:12.491612 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-5ffpf_kube-system(da0d6522-7f73-44d2-822c-74bfaeb1853a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-5ffpf_kube-system(da0d6522-7f73-44d2-822c-74bfaeb1853a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"07aba37d874193625b2c14902aa2a80393ba627e183d068422722937bf4974fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-5ffpf" podUID="da0d6522-7f73-44d2-822c-74bfaeb1853a" Mar 6 03:03:12.503264 containerd[1974]: time="2026-03-06T03:03:12.503127183Z" level=error msg="Failed to destroy network for sandbox \"716aae7f904ec553e1031b9ab0cd9ee1cdcabbc5b48246c3fb1a9c260fcc078e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:12.508180 containerd[1974]: time="2026-03-06T03:03:12.508092024Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f7cf46454-n5whp,Uid:efe7d89f-43e9-412f-b0f0-eaaf818145e0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"716aae7f904ec553e1031b9ab0cd9ee1cdcabbc5b48246c3fb1a9c260fcc078e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:12.509080 systemd[1]: run-netns-cni\x2dbba8f984\x2dbba4\x2d00e2\x2d10bc\x2d4f1672d55ea3.mount: Deactivated successfully. 
Mar 6 03:03:12.535020 containerd[1974]: time="2026-03-06T03:03:12.534901253Z" level=error msg="Failed to destroy network for sandbox \"1c6bb8acf298bc520f9578ad8bd1878cd4c72e85a81d4ad081e26c1ab16936c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:12.537096 kubelet[3516]: E0306 03:03:12.535849 3516 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"716aae7f904ec553e1031b9ab0cd9ee1cdcabbc5b48246c3fb1a9c260fcc078e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:12.537096 kubelet[3516]: E0306 03:03:12.535917 3516 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"716aae7f904ec553e1031b9ab0cd9ee1cdcabbc5b48246c3fb1a9c260fcc078e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f7cf46454-n5whp" Mar 6 03:03:12.537096 kubelet[3516]: E0306 03:03:12.535942 3516 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"716aae7f904ec553e1031b9ab0cd9ee1cdcabbc5b48246c3fb1a9c260fcc078e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f7cf46454-n5whp" Mar 6 03:03:12.537371 kubelet[3516]: E0306 03:03:12.536015 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-f7cf46454-n5whp_calico-system(efe7d89f-43e9-412f-b0f0-eaaf818145e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-f7cf46454-n5whp_calico-system(efe7d89f-43e9-412f-b0f0-eaaf818145e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"716aae7f904ec553e1031b9ab0cd9ee1cdcabbc5b48246c3fb1a9c260fcc078e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f7cf46454-n5whp" podUID="efe7d89f-43e9-412f-b0f0-eaaf818145e0" Mar 6 03:03:12.537939 containerd[1974]: time="2026-03-06T03:03:12.537791561Z" level=error msg="Failed to destroy network for sandbox \"a812bc4441ca42aef309e4fb4ec9632f220b5cbfa14b0c0c7a0017cc483bcb37\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:12.541129 systemd[1]: run-netns-cni\x2d96be8c25\x2d7e59\x2d0be2\x2d2c4d\x2d08134406432c.mount: Deactivated successfully. Mar 6 03:03:12.546235 systemd[1]: run-netns-cni\x2d39155c83\x2d2716\x2d211a\x2d1a21\x2d8727cc2b5e00.mount: Deactivated successfully. 
Mar 6 03:03:12.547259 containerd[1974]: time="2026-03-06T03:03:12.544834163Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5989c68dd5-vzsrv,Uid:5334d146-111c-4115-be1f-bc5585aaa496,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a812bc4441ca42aef309e4fb4ec9632f220b5cbfa14b0c0c7a0017cc483bcb37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:12.547259 containerd[1974]: time="2026-03-06T03:03:12.545029470Z" level=error msg="Failed to destroy network for sandbox \"7f7df7463c21d5f1d3fb6ba022b36227fc14b584da3512ec6994f3f19fc13cda\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:12.551911 kubelet[3516]: E0306 03:03:12.551602 3516 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a812bc4441ca42aef309e4fb4ec9632f220b5cbfa14b0c0c7a0017cc483bcb37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:12.551911 kubelet[3516]: E0306 03:03:12.551667 3516 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a812bc4441ca42aef309e4fb4ec9632f220b5cbfa14b0c0c7a0017cc483bcb37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5989c68dd5-vzsrv" Mar 6 03:03:12.551911 kubelet[3516]: E0306 03:03:12.551696 3516 
kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a812bc4441ca42aef309e4fb4ec9632f220b5cbfa14b0c0c7a0017cc483bcb37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5989c68dd5-vzsrv" Mar 6 03:03:12.552548 kubelet[3516]: E0306 03:03:12.551770 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5989c68dd5-vzsrv_calico-system(5334d146-111c-4115-be1f-bc5585aaa496)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5989c68dd5-vzsrv_calico-system(5334d146-111c-4115-be1f-bc5585aaa496)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a812bc4441ca42aef309e4fb4ec9632f220b5cbfa14b0c0c7a0017cc483bcb37\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5989c68dd5-vzsrv" podUID="5334d146-111c-4115-be1f-bc5585aaa496" Mar 6 03:03:12.552638 containerd[1974]: time="2026-03-06T03:03:12.551946600Z" level=error msg="Failed to destroy network for sandbox \"ca724f26628a45a426e45beb2528c3947eea95e23c576882bb53969a023008e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:12.552873 containerd[1974]: time="2026-03-06T03:03:12.552725402Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-5dnxg,Uid:a78dc96c-a4d4-4f59-b749-46bf525f56a0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1c6bb8acf298bc520f9578ad8bd1878cd4c72e85a81d4ad081e26c1ab16936c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:12.553645 kubelet[3516]: E0306 03:03:12.553581 3516 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c6bb8acf298bc520f9578ad8bd1878cd4c72e85a81d4ad081e26c1ab16936c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:12.553709 kubelet[3516]: E0306 03:03:12.553651 3516 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c6bb8acf298bc520f9578ad8bd1878cd4c72e85a81d4ad081e26c1ab16936c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-5dnxg" Mar 6 03:03:12.553709 kubelet[3516]: E0306 03:03:12.553674 3516 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c6bb8acf298bc520f9578ad8bd1878cd4c72e85a81d4ad081e26c1ab16936c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-5dnxg" Mar 6 03:03:12.553992 kubelet[3516]: E0306 03:03:12.553843 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-5dnxg_calico-system(a78dc96c-a4d4-4f59-b749-46bf525f56a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"goldmane-cccfbd5cf-5dnxg_calico-system(a78dc96c-a4d4-4f59-b749-46bf525f56a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c6bb8acf298bc520f9578ad8bd1878cd4c72e85a81d4ad081e26c1ab16936c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-5dnxg" podUID="a78dc96c-a4d4-4f59-b749-46bf525f56a0" Mar 6 03:03:12.554425 containerd[1974]: time="2026-03-06T03:03:12.554300911Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5686db6df6-khczm,Uid:ebc5bd77-bfeb-4df8-acad-010087b23fac,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f7df7463c21d5f1d3fb6ba022b36227fc14b584da3512ec6994f3f19fc13cda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:12.554988 kubelet[3516]: E0306 03:03:12.554920 3516 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f7df7463c21d5f1d3fb6ba022b36227fc14b584da3512ec6994f3f19fc13cda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:12.554988 kubelet[3516]: E0306 03:03:12.554970 3516 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f7df7463c21d5f1d3fb6ba022b36227fc14b584da3512ec6994f3f19fc13cda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-5686db6df6-khczm" Mar 6 03:03:12.555463 kubelet[3516]: E0306 03:03:12.554991 3516 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f7df7463c21d5f1d3fb6ba022b36227fc14b584da3512ec6994f3f19fc13cda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5686db6df6-khczm" Mar 6 03:03:12.555463 kubelet[3516]: E0306 03:03:12.555336 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5686db6df6-khczm_calico-system(ebc5bd77-bfeb-4df8-acad-010087b23fac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5686db6df6-khczm_calico-system(ebc5bd77-bfeb-4df8-acad-010087b23fac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f7df7463c21d5f1d3fb6ba022b36227fc14b584da3512ec6994f3f19fc13cda\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5686db6df6-khczm" podUID="ebc5bd77-bfeb-4df8-acad-010087b23fac" Mar 6 03:03:12.555879 kubelet[3516]: E0306 03:03:12.555671 3516 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca724f26628a45a426e45beb2528c3947eea95e23c576882bb53969a023008e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:12.555879 kubelet[3516]: E0306 03:03:12.555714 3516 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ca724f26628a45a426e45beb2528c3947eea95e23c576882bb53969a023008e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-pqjtt" Mar 6 03:03:12.555879 kubelet[3516]: E0306 03:03:12.555736 3516 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca724f26628a45a426e45beb2528c3947eea95e23c576882bb53969a023008e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-pqjtt" Mar 6 03:03:12.556024 containerd[1974]: time="2026-03-06T03:03:12.555438678Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pqjtt,Uid:d7578741-0fdd-4a51-9663-c9a667059e00,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca724f26628a45a426e45beb2528c3947eea95e23c576882bb53969a023008e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:12.556645 kubelet[3516]: E0306 03:03:12.556174 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-pqjtt_kube-system(d7578741-0fdd-4a51-9663-c9a667059e00)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-pqjtt_kube-system(d7578741-0fdd-4a51-9663-c9a667059e00)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca724f26628a45a426e45beb2528c3947eea95e23c576882bb53969a023008e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-pqjtt" podUID="d7578741-0fdd-4a51-9663-c9a667059e00" Mar 6 03:03:12.563959 containerd[1974]: time="2026-03-06T03:03:12.563892727Z" level=error msg="Failed to destroy network for sandbox \"71f2ec8fef8a40562f24b3931256352d79bb7124caadd3a15badc8b3f7c39b5b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:12.565282 containerd[1974]: time="2026-03-06T03:03:12.565203967Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5989c68dd5-n5d4q,Uid:6ae00c22-7641-4163-a70e-2653ea8e078a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"71f2ec8fef8a40562f24b3931256352d79bb7124caadd3a15badc8b3f7c39b5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:12.565826 kubelet[3516]: E0306 03:03:12.565701 3516 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71f2ec8fef8a40562f24b3931256352d79bb7124caadd3a15badc8b3f7c39b5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:12.566038 kubelet[3516]: E0306 03:03:12.565798 3516 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71f2ec8fef8a40562f24b3931256352d79bb7124caadd3a15badc8b3f7c39b5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5989c68dd5-n5d4q" Mar 6 03:03:12.566038 kubelet[3516]: E0306 03:03:12.565934 3516 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71f2ec8fef8a40562f24b3931256352d79bb7124caadd3a15badc8b3f7c39b5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5989c68dd5-n5d4q" Mar 6 03:03:12.567217 kubelet[3516]: E0306 03:03:12.567007 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5989c68dd5-n5d4q_calico-system(6ae00c22-7641-4163-a70e-2653ea8e078a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5989c68dd5-n5d4q_calico-system(6ae00c22-7641-4163-a70e-2653ea8e078a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"71f2ec8fef8a40562f24b3931256352d79bb7124caadd3a15badc8b3f7c39b5b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5989c68dd5-n5d4q" podUID="6ae00c22-7641-4163-a70e-2653ea8e078a" Mar 6 03:03:12.633533 containerd[1974]: time="2026-03-06T03:03:12.633472900Z" level=info msg="StartContainer for \"cedf7bb5ac8a890817369ec7fb11d58c9cdb90611b790cd525195701f5be5d93\" returns successfully" Mar 6 03:03:12.881784 systemd[1]: Created slice kubepods-besteffort-pod3b6ac159_4573_487d_821b_a2953dcf4b25.slice - libcontainer container kubepods-besteffort-pod3b6ac159_4573_487d_821b_a2953dcf4b25.slice. 
Mar 6 03:03:12.886449 containerd[1974]: time="2026-03-06T03:03:12.886405680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5wnzw,Uid:3b6ac159-4573-487d-821b-a2953dcf4b25,Namespace:calico-system,Attempt:0,}" Mar 6 03:03:12.945996 containerd[1974]: time="2026-03-06T03:03:12.945941103Z" level=error msg="Failed to destroy network for sandbox \"6336ee9c330d0a2e4a5dfb6447c58fc8d82d69dee4e9eab32ffd955c41d6ba54\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:12.950806 containerd[1974]: time="2026-03-06T03:03:12.950750718Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5wnzw,Uid:3b6ac159-4573-487d-821b-a2953dcf4b25,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6336ee9c330d0a2e4a5dfb6447c58fc8d82d69dee4e9eab32ffd955c41d6ba54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:12.951468 kubelet[3516]: E0306 03:03:12.951219 3516 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6336ee9c330d0a2e4a5dfb6447c58fc8d82d69dee4e9eab32ffd955c41d6ba54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:12.951468 kubelet[3516]: E0306 03:03:12.951301 3516 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6336ee9c330d0a2e4a5dfb6447c58fc8d82d69dee4e9eab32ffd955c41d6ba54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5wnzw" Mar 6 03:03:12.951468 kubelet[3516]: E0306 03:03:12.951339 3516 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6336ee9c330d0a2e4a5dfb6447c58fc8d82d69dee4e9eab32ffd955c41d6ba54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5wnzw" Mar 6 03:03:12.951652 kubelet[3516]: E0306 03:03:12.951420 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5wnzw_calico-system(3b6ac159-4573-487d-821b-a2953dcf4b25)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5wnzw_calico-system(3b6ac159-4573-487d-821b-a2953dcf4b25)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6336ee9c330d0a2e4a5dfb6447c58fc8d82d69dee4e9eab32ffd955c41d6ba54\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5wnzw" podUID="3b6ac159-4573-487d-821b-a2953dcf4b25" Mar 6 03:03:13.175802 kubelet[3516]: I0306 03:03:13.175654 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-pb8t4" podStartSLOduration=3.516678513 podStartE2EDuration="30.175636323s" podCreationTimestamp="2026-03-06 03:02:43 +0000 UTC" firstStartedPulling="2026-03-06 03:02:43.603822778 +0000 UTC m=+22.871889255" lastFinishedPulling="2026-03-06 03:03:10.26278059 +0000 UTC m=+49.530847065" observedRunningTime="2026-03-06 03:03:13.175002057 +0000 UTC m=+52.443068556" watchObservedRunningTime="2026-03-06 03:03:13.175636323 
+0000 UTC m=+52.443702820" Mar 6 03:03:13.434799 systemd[1]: run-netns-cni\x2dc16f6194\x2d9c2f\x2d18db\x2d2282\x2d69a2fbd38398.mount: Deactivated successfully. Mar 6 03:03:13.435348 systemd[1]: run-netns-cni\x2d7a3db7bf\x2d5340\x2dd46c\x2da63b\x2de9e07c14afd3.mount: Deactivated successfully. Mar 6 03:03:13.435438 systemd[1]: run-netns-cni\x2d79316684\x2d6f45\x2dce03\x2d12c9\x2db8048deaeaaa.mount: Deactivated successfully. Mar 6 03:03:13.917153 kubelet[3516]: I0306 03:03:13.917061 3516 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ebc5bd77-bfeb-4df8-acad-010087b23fac-nginx-config\") pod \"ebc5bd77-bfeb-4df8-acad-010087b23fac\" (UID: \"ebc5bd77-bfeb-4df8-acad-010087b23fac\") " Mar 6 03:03:13.918688 kubelet[3516]: I0306 03:03:13.918211 3516 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ebc5bd77-bfeb-4df8-acad-010087b23fac-whisker-backend-key-pair\") pod \"ebc5bd77-bfeb-4df8-acad-010087b23fac\" (UID: \"ebc5bd77-bfeb-4df8-acad-010087b23fac\") " Mar 6 03:03:13.918688 kubelet[3516]: I0306 03:03:13.918254 3516 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebc5bd77-bfeb-4df8-acad-010087b23fac-whisker-ca-bundle\") pod \"ebc5bd77-bfeb-4df8-acad-010087b23fac\" (UID: \"ebc5bd77-bfeb-4df8-acad-010087b23fac\") " Mar 6 03:03:13.918688 kubelet[3516]: I0306 03:03:13.918289 3516 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cgz9\" (UniqueName: \"kubernetes.io/projected/ebc5bd77-bfeb-4df8-acad-010087b23fac-kube-api-access-9cgz9\") pod \"ebc5bd77-bfeb-4df8-acad-010087b23fac\" (UID: \"ebc5bd77-bfeb-4df8-acad-010087b23fac\") " Mar 6 03:03:13.921468 kubelet[3516]: I0306 03:03:13.918118 3516 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/ebc5bd77-bfeb-4df8-acad-010087b23fac-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "ebc5bd77-bfeb-4df8-acad-010087b23fac" (UID: "ebc5bd77-bfeb-4df8-acad-010087b23fac"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 6 03:03:13.921468 kubelet[3516]: I0306 03:03:13.921216 3516 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebc5bd77-bfeb-4df8-acad-010087b23fac-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ebc5bd77-bfeb-4df8-acad-010087b23fac" (UID: "ebc5bd77-bfeb-4df8-acad-010087b23fac"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 6 03:03:13.926275 kubelet[3516]: I0306 03:03:13.925986 3516 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebc5bd77-bfeb-4df8-acad-010087b23fac-kube-api-access-9cgz9" (OuterVolumeSpecName: "kube-api-access-9cgz9") pod "ebc5bd77-bfeb-4df8-acad-010087b23fac" (UID: "ebc5bd77-bfeb-4df8-acad-010087b23fac"). InnerVolumeSpecName "kube-api-access-9cgz9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 6 03:03:13.931160 kubelet[3516]: I0306 03:03:13.930820 3516 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebc5bd77-bfeb-4df8-acad-010087b23fac-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ebc5bd77-bfeb-4df8-acad-010087b23fac" (UID: "ebc5bd77-bfeb-4df8-acad-010087b23fac"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 6 03:03:13.932807 systemd[1]: var-lib-kubelet-pods-ebc5bd77\x2dbfeb\x2d4df8\x2dacad\x2d010087b23fac-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9cgz9.mount: Deactivated successfully. 
Mar 6 03:03:13.932959 systemd[1]: var-lib-kubelet-pods-ebc5bd77\x2dbfeb\x2d4df8\x2dacad\x2d010087b23fac-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 6 03:03:14.019553 kubelet[3516]: I0306 03:03:14.019510 3516 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebc5bd77-bfeb-4df8-acad-010087b23fac-whisker-ca-bundle\") on node \"ip-172-31-19-55\" DevicePath \"\"" Mar 6 03:03:14.019855 kubelet[3516]: I0306 03:03:14.019791 3516 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9cgz9\" (UniqueName: \"kubernetes.io/projected/ebc5bd77-bfeb-4df8-acad-010087b23fac-kube-api-access-9cgz9\") on node \"ip-172-31-19-55\" DevicePath \"\"" Mar 6 03:03:14.020047 kubelet[3516]: I0306 03:03:14.020025 3516 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ebc5bd77-bfeb-4df8-acad-010087b23fac-nginx-config\") on node \"ip-172-31-19-55\" DevicePath \"\"" Mar 6 03:03:14.022193 kubelet[3516]: I0306 03:03:14.022168 3516 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ebc5bd77-bfeb-4df8-acad-010087b23fac-whisker-backend-key-pair\") on node \"ip-172-31-19-55\" DevicePath \"\"" Mar 6 03:03:14.161565 systemd[1]: Removed slice kubepods-besteffort-podebc5bd77_bfeb_4df8_acad_010087b23fac.slice - libcontainer container kubepods-besteffort-podebc5bd77_bfeb_4df8_acad_010087b23fac.slice. Mar 6 03:03:14.266201 systemd[1]: Created slice kubepods-besteffort-pod9d7a7e83_0e57_4bac_b9a8_af80336af1d8.slice - libcontainer container kubepods-besteffort-pod9d7a7e83_0e57_4bac_b9a8_af80336af1d8.slice. 
Mar 6 03:03:14.324949 kubelet[3516]: I0306 03:03:14.324903 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9d7a7e83-0e57-4bac-b9a8-af80336af1d8-whisker-backend-key-pair\") pod \"whisker-6c7985cc6f-dmngk\" (UID: \"9d7a7e83-0e57-4bac-b9a8-af80336af1d8\") " pod="calico-system/whisker-6c7985cc6f-dmngk" Mar 6 03:03:14.324949 kubelet[3516]: I0306 03:03:14.324953 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/9d7a7e83-0e57-4bac-b9a8-af80336af1d8-nginx-config\") pod \"whisker-6c7985cc6f-dmngk\" (UID: \"9d7a7e83-0e57-4bac-b9a8-af80336af1d8\") " pod="calico-system/whisker-6c7985cc6f-dmngk" Mar 6 03:03:14.325200 kubelet[3516]: I0306 03:03:14.324981 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d7a7e83-0e57-4bac-b9a8-af80336af1d8-whisker-ca-bundle\") pod \"whisker-6c7985cc6f-dmngk\" (UID: \"9d7a7e83-0e57-4bac-b9a8-af80336af1d8\") " pod="calico-system/whisker-6c7985cc6f-dmngk" Mar 6 03:03:14.325200 kubelet[3516]: I0306 03:03:14.325021 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdq2v\" (UniqueName: \"kubernetes.io/projected/9d7a7e83-0e57-4bac-b9a8-af80336af1d8-kube-api-access-vdq2v\") pod \"whisker-6c7985cc6f-dmngk\" (UID: \"9d7a7e83-0e57-4bac-b9a8-af80336af1d8\") " pod="calico-system/whisker-6c7985cc6f-dmngk" Mar 6 03:03:14.574947 containerd[1974]: time="2026-03-06T03:03:14.574833551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c7985cc6f-dmngk,Uid:9d7a7e83-0e57-4bac-b9a8-af80336af1d8,Namespace:calico-system,Attempt:0,}" Mar 6 03:03:14.831309 systemd-networkd[1587]: calidf7f832f654: Link UP Mar 6 03:03:14.832290 systemd-networkd[1587]: 
calidf7f832f654: Gained carrier Mar 6 03:03:14.851948 (udev-worker)[4645]: Network interface NamePolicy= disabled on kernel command line. Mar 6 03:03:14.874523 containerd[1974]: 2026-03-06 03:03:14.604 [ERROR][4623] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 03:03:14.874523 containerd[1974]: 2026-03-06 03:03:14.666 [INFO][4623] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--55-k8s-whisker--6c7985cc6f--dmngk-eth0 whisker-6c7985cc6f- calico-system 9d7a7e83-0e57-4bac-b9a8-af80336af1d8 915 0 2026-03-06 03:03:14 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6c7985cc6f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-19-55 whisker-6c7985cc6f-dmngk eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calidf7f832f654 [] [] }} ContainerID="08dc6079e3c957b9822c6728b0aa97c3095d5a582884ccb1d17e6db8caaa40c6" Namespace="calico-system" Pod="whisker-6c7985cc6f-dmngk" WorkloadEndpoint="ip--172--31--19--55-k8s-whisker--6c7985cc6f--dmngk-" Mar 6 03:03:14.874523 containerd[1974]: 2026-03-06 03:03:14.666 [INFO][4623] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="08dc6079e3c957b9822c6728b0aa97c3095d5a582884ccb1d17e6db8caaa40c6" Namespace="calico-system" Pod="whisker-6c7985cc6f-dmngk" WorkloadEndpoint="ip--172--31--19--55-k8s-whisker--6c7985cc6f--dmngk-eth0" Mar 6 03:03:14.874523 containerd[1974]: 2026-03-06 03:03:14.708 [INFO][4635] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="08dc6079e3c957b9822c6728b0aa97c3095d5a582884ccb1d17e6db8caaa40c6" HandleID="k8s-pod-network.08dc6079e3c957b9822c6728b0aa97c3095d5a582884ccb1d17e6db8caaa40c6" 
Workload="ip--172--31--19--55-k8s-whisker--6c7985cc6f--dmngk-eth0" Mar 6 03:03:14.874852 containerd[1974]: 2026-03-06 03:03:14.717 [INFO][4635] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="08dc6079e3c957b9822c6728b0aa97c3095d5a582884ccb1d17e6db8caaa40c6" HandleID="k8s-pod-network.08dc6079e3c957b9822c6728b0aa97c3095d5a582884ccb1d17e6db8caaa40c6" Workload="ip--172--31--19--55-k8s-whisker--6c7985cc6f--dmngk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000380340), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-55", "pod":"whisker-6c7985cc6f-dmngk", "timestamp":"2026-03-06 03:03:14.708275432 +0000 UTC"}, Hostname:"ip-172-31-19-55", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000188840)} Mar 6 03:03:14.874852 containerd[1974]: 2026-03-06 03:03:14.717 [INFO][4635] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 03:03:14.874852 containerd[1974]: 2026-03-06 03:03:14.717 [INFO][4635] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 03:03:14.874852 containerd[1974]: 2026-03-06 03:03:14.717 [INFO][4635] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-55' Mar 6 03:03:14.874852 containerd[1974]: 2026-03-06 03:03:14.720 [INFO][4635] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.08dc6079e3c957b9822c6728b0aa97c3095d5a582884ccb1d17e6db8caaa40c6" host="ip-172-31-19-55" Mar 6 03:03:14.874852 containerd[1974]: 2026-03-06 03:03:14.726 [INFO][4635] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-19-55" Mar 6 03:03:14.874852 containerd[1974]: 2026-03-06 03:03:14.731 [INFO][4635] ipam/ipam.go 526: Trying affinity for 192.168.109.64/26 host="ip-172-31-19-55" Mar 6 03:03:14.874852 containerd[1974]: 2026-03-06 03:03:14.733 [INFO][4635] ipam/ipam.go 160: Attempting to load block cidr=192.168.109.64/26 host="ip-172-31-19-55" Mar 6 03:03:14.874852 containerd[1974]: 2026-03-06 03:03:14.735 [INFO][4635] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.109.64/26 host="ip-172-31-19-55" Mar 6 03:03:14.875250 containerd[1974]: 2026-03-06 03:03:14.735 [INFO][4635] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.109.64/26 handle="k8s-pod-network.08dc6079e3c957b9822c6728b0aa97c3095d5a582884ccb1d17e6db8caaa40c6" host="ip-172-31-19-55" Mar 6 03:03:14.875250 containerd[1974]: 2026-03-06 03:03:14.737 [INFO][4635] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.08dc6079e3c957b9822c6728b0aa97c3095d5a582884ccb1d17e6db8caaa40c6 Mar 6 03:03:14.875250 containerd[1974]: 2026-03-06 03:03:14.742 [INFO][4635] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.109.64/26 handle="k8s-pod-network.08dc6079e3c957b9822c6728b0aa97c3095d5a582884ccb1d17e6db8caaa40c6" host="ip-172-31-19-55" Mar 6 03:03:14.875250 containerd[1974]: 2026-03-06 03:03:14.751 [INFO][4635] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.109.65/26] block=192.168.109.64/26 
handle="k8s-pod-network.08dc6079e3c957b9822c6728b0aa97c3095d5a582884ccb1d17e6db8caaa40c6" host="ip-172-31-19-55" Mar 6 03:03:14.875250 containerd[1974]: 2026-03-06 03:03:14.751 [INFO][4635] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.109.65/26] handle="k8s-pod-network.08dc6079e3c957b9822c6728b0aa97c3095d5a582884ccb1d17e6db8caaa40c6" host="ip-172-31-19-55" Mar 6 03:03:14.875250 containerd[1974]: 2026-03-06 03:03:14.751 [INFO][4635] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 03:03:14.875250 containerd[1974]: 2026-03-06 03:03:14.751 [INFO][4635] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.109.65/26] IPv6=[] ContainerID="08dc6079e3c957b9822c6728b0aa97c3095d5a582884ccb1d17e6db8caaa40c6" HandleID="k8s-pod-network.08dc6079e3c957b9822c6728b0aa97c3095d5a582884ccb1d17e6db8caaa40c6" Workload="ip--172--31--19--55-k8s-whisker--6c7985cc6f--dmngk-eth0" Mar 6 03:03:14.875503 containerd[1974]: 2026-03-06 03:03:14.755 [INFO][4623] cni-plugin/k8s.go 418: Populated endpoint ContainerID="08dc6079e3c957b9822c6728b0aa97c3095d5a582884ccb1d17e6db8caaa40c6" Namespace="calico-system" Pod="whisker-6c7985cc6f-dmngk" WorkloadEndpoint="ip--172--31--19--55-k8s-whisker--6c7985cc6f--dmngk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--55-k8s-whisker--6c7985cc6f--dmngk-eth0", GenerateName:"whisker-6c7985cc6f-", Namespace:"calico-system", SelfLink:"", UID:"9d7a7e83-0e57-4bac-b9a8-af80336af1d8", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 3, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c7985cc6f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-55", ContainerID:"", Pod:"whisker-6c7985cc6f-dmngk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.109.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidf7f832f654", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:03:14.875503 containerd[1974]: 2026-03-06 03:03:14.755 [INFO][4623] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.65/32] ContainerID="08dc6079e3c957b9822c6728b0aa97c3095d5a582884ccb1d17e6db8caaa40c6" Namespace="calico-system" Pod="whisker-6c7985cc6f-dmngk" WorkloadEndpoint="ip--172--31--19--55-k8s-whisker--6c7985cc6f--dmngk-eth0" Mar 6 03:03:14.875641 containerd[1974]: 2026-03-06 03:03:14.755 [INFO][4623] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidf7f832f654 ContainerID="08dc6079e3c957b9822c6728b0aa97c3095d5a582884ccb1d17e6db8caaa40c6" Namespace="calico-system" Pod="whisker-6c7985cc6f-dmngk" WorkloadEndpoint="ip--172--31--19--55-k8s-whisker--6c7985cc6f--dmngk-eth0" Mar 6 03:03:14.875641 containerd[1974]: 2026-03-06 03:03:14.835 [INFO][4623] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="08dc6079e3c957b9822c6728b0aa97c3095d5a582884ccb1d17e6db8caaa40c6" Namespace="calico-system" Pod="whisker-6c7985cc6f-dmngk" WorkloadEndpoint="ip--172--31--19--55-k8s-whisker--6c7985cc6f--dmngk-eth0" Mar 6 03:03:14.875715 containerd[1974]: 2026-03-06 03:03:14.835 [INFO][4623] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="08dc6079e3c957b9822c6728b0aa97c3095d5a582884ccb1d17e6db8caaa40c6" Namespace="calico-system" 
Pod="whisker-6c7985cc6f-dmngk" WorkloadEndpoint="ip--172--31--19--55-k8s-whisker--6c7985cc6f--dmngk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--55-k8s-whisker--6c7985cc6f--dmngk-eth0", GenerateName:"whisker-6c7985cc6f-", Namespace:"calico-system", SelfLink:"", UID:"9d7a7e83-0e57-4bac-b9a8-af80336af1d8", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 3, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c7985cc6f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-55", ContainerID:"08dc6079e3c957b9822c6728b0aa97c3095d5a582884ccb1d17e6db8caaa40c6", Pod:"whisker-6c7985cc6f-dmngk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.109.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidf7f832f654", MAC:"4a:d4:f0:95:8b:1f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:03:14.875805 containerd[1974]: 2026-03-06 03:03:14.866 [INFO][4623] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="08dc6079e3c957b9822c6728b0aa97c3095d5a582884ccb1d17e6db8caaa40c6" Namespace="calico-system" Pod="whisker-6c7985cc6f-dmngk" WorkloadEndpoint="ip--172--31--19--55-k8s-whisker--6c7985cc6f--dmngk-eth0" Mar 6 03:03:14.902790 kubelet[3516]: I0306 03:03:14.902160 3516 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebc5bd77-bfeb-4df8-acad-010087b23fac" path="/var/lib/kubelet/pods/ebc5bd77-bfeb-4df8-acad-010087b23fac/volumes" Mar 6 03:03:14.979130 containerd[1974]: time="2026-03-06T03:03:14.979056945Z" level=info msg="connecting to shim 08dc6079e3c957b9822c6728b0aa97c3095d5a582884ccb1d17e6db8caaa40c6" address="unix:///run/containerd/s/f1d3b27ed22f853578c002033b8ceb3bbce6a5cce4fa18f7022554180f77dee0" namespace=k8s.io protocol=ttrpc version=3 Mar 6 03:03:15.028443 systemd[1]: Started cri-containerd-08dc6079e3c957b9822c6728b0aa97c3095d5a582884ccb1d17e6db8caaa40c6.scope - libcontainer container 08dc6079e3c957b9822c6728b0aa97c3095d5a582884ccb1d17e6db8caaa40c6. Mar 6 03:03:15.092188 containerd[1974]: time="2026-03-06T03:03:15.091954983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c7985cc6f-dmngk,Uid:9d7a7e83-0e57-4bac-b9a8-af80336af1d8,Namespace:calico-system,Attempt:0,} returns sandbox id \"08dc6079e3c957b9822c6728b0aa97c3095d5a582884ccb1d17e6db8caaa40c6\"" Mar 6 03:03:15.094306 containerd[1974]: time="2026-03-06T03:03:15.094275339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 6 03:03:15.945068 systemd-networkd[1587]: calidf7f832f654: Gained IPv6LL Mar 6 03:03:17.377756 systemd-networkd[1587]: vxlan.calico: Link UP Mar 6 03:03:17.377766 systemd-networkd[1587]: vxlan.calico: Gained carrier Mar 6 03:03:17.451684 (udev-worker)[4885]: Network interface NamePolicy= disabled on kernel command line. Mar 6 03:03:17.463847 (udev-worker)[4644]: Network interface NamePolicy= disabled on kernel command line. 
Mar 6 03:03:17.802471 containerd[1974]: time="2026-03-06T03:03:17.800621644Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 6 03:03:17.877464 containerd[1974]: time="2026-03-06T03:03:17.877408631Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 2.750831854s" Mar 6 03:03:17.877670 containerd[1974]: time="2026-03-06T03:03:17.877648121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 6 03:03:17.877970 containerd[1974]: time="2026-03-06T03:03:17.877948506Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:17.937122 containerd[1974]: time="2026-03-06T03:03:17.936051768Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:17.946123 containerd[1974]: time="2026-03-06T03:03:17.944744138Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:18.167256 containerd[1974]: time="2026-03-06T03:03:18.167213009Z" level=info msg="CreateContainer within sandbox \"08dc6079e3c957b9822c6728b0aa97c3095d5a582884ccb1d17e6db8caaa40c6\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 6 03:03:18.183171 containerd[1974]: time="2026-03-06T03:03:18.182828201Z" level=info 
msg="Container 8d8ff40f4b57ceea8377549b7536b09ff6bfe9eab86779a771220eefd66905c5: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:03:18.199046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4221887207.mount: Deactivated successfully. Mar 6 03:03:18.225337 containerd[1974]: time="2026-03-06T03:03:18.225279505Z" level=info msg="CreateContainer within sandbox \"08dc6079e3c957b9822c6728b0aa97c3095d5a582884ccb1d17e6db8caaa40c6\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"8d8ff40f4b57ceea8377549b7536b09ff6bfe9eab86779a771220eefd66905c5\"" Mar 6 03:03:18.227523 containerd[1974]: time="2026-03-06T03:03:18.227491805Z" level=info msg="StartContainer for \"8d8ff40f4b57ceea8377549b7536b09ff6bfe9eab86779a771220eefd66905c5\"" Mar 6 03:03:18.231904 containerd[1974]: time="2026-03-06T03:03:18.231860322Z" level=info msg="connecting to shim 8d8ff40f4b57ceea8377549b7536b09ff6bfe9eab86779a771220eefd66905c5" address="unix:///run/containerd/s/f1d3b27ed22f853578c002033b8ceb3bbce6a5cce4fa18f7022554180f77dee0" protocol=ttrpc version=3 Mar 6 03:03:18.480145 systemd[1]: Started cri-containerd-8d8ff40f4b57ceea8377549b7536b09ff6bfe9eab86779a771220eefd66905c5.scope - libcontainer container 8d8ff40f4b57ceea8377549b7536b09ff6bfe9eab86779a771220eefd66905c5. Mar 6 03:03:18.619871 containerd[1974]: time="2026-03-06T03:03:18.619787876Z" level=info msg="StartContainer for \"8d8ff40f4b57ceea8377549b7536b09ff6bfe9eab86779a771220eefd66905c5\" returns successfully" Mar 6 03:03:18.637434 containerd[1974]: time="2026-03-06T03:03:18.637395453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 6 03:03:19.398273 systemd-networkd[1587]: vxlan.calico: Gained IPv6LL Mar 6 03:03:20.660469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1071270492.mount: Deactivated successfully. 
Mar 6 03:03:20.680098 containerd[1974]: time="2026-03-06T03:03:20.680046947Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:20.681396 containerd[1974]: time="2026-03-06T03:03:20.681254010Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 6 03:03:20.682474 containerd[1974]: time="2026-03-06T03:03:20.682441386Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:20.684711 containerd[1974]: time="2026-03-06T03:03:20.684672968Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:20.685756 containerd[1974]: time="2026-03-06T03:03:20.685321757Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.047877124s" Mar 6 03:03:20.685756 containerd[1974]: time="2026-03-06T03:03:20.685359527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 6 03:03:20.690581 containerd[1974]: time="2026-03-06T03:03:20.690543649Z" level=info msg="CreateContainer within sandbox \"08dc6079e3c957b9822c6728b0aa97c3095d5a582884ccb1d17e6db8caaa40c6\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 6 03:03:20.713738 
containerd[1974]: time="2026-03-06T03:03:20.713681388Z" level=info msg="Container 0e1087da63f487a379be6f233bd04930d01e281c429ae475f1ba102b55bdb8f9: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:03:20.721948 containerd[1974]: time="2026-03-06T03:03:20.721898298Z" level=info msg="CreateContainer within sandbox \"08dc6079e3c957b9822c6728b0aa97c3095d5a582884ccb1d17e6db8caaa40c6\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"0e1087da63f487a379be6f233bd04930d01e281c429ae475f1ba102b55bdb8f9\"" Mar 6 03:03:20.722559 containerd[1974]: time="2026-03-06T03:03:20.722515979Z" level=info msg="StartContainer for \"0e1087da63f487a379be6f233bd04930d01e281c429ae475f1ba102b55bdb8f9\"" Mar 6 03:03:20.724607 containerd[1974]: time="2026-03-06T03:03:20.724571600Z" level=info msg="connecting to shim 0e1087da63f487a379be6f233bd04930d01e281c429ae475f1ba102b55bdb8f9" address="unix:///run/containerd/s/f1d3b27ed22f853578c002033b8ceb3bbce6a5cce4fa18f7022554180f77dee0" protocol=ttrpc version=3 Mar 6 03:03:20.754392 systemd[1]: Started cri-containerd-0e1087da63f487a379be6f233bd04930d01e281c429ae475f1ba102b55bdb8f9.scope - libcontainer container 0e1087da63f487a379be6f233bd04930d01e281c429ae475f1ba102b55bdb8f9. 
Mar 6 03:03:20.812574 containerd[1974]: time="2026-03-06T03:03:20.812534897Z" level=info msg="StartContainer for \"0e1087da63f487a379be6f233bd04930d01e281c429ae475f1ba102b55bdb8f9\" returns successfully" Mar 6 03:03:21.207644 kubelet[3516]: I0306 03:03:21.205428 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6c7985cc6f-dmngk" podStartSLOduration=1.609966053 podStartE2EDuration="7.202642294s" podCreationTimestamp="2026-03-06 03:03:14 +0000 UTC" firstStartedPulling="2026-03-06 03:03:15.093880821 +0000 UTC m=+54.361947297" lastFinishedPulling="2026-03-06 03:03:20.686557046 +0000 UTC m=+59.954623538" observedRunningTime="2026-03-06 03:03:21.2022735 +0000 UTC m=+60.470339998" watchObservedRunningTime="2026-03-06 03:03:21.202642294 +0000 UTC m=+60.470708793" Mar 6 03:03:22.294355 ntpd[1949]: Listen normally on 6 vxlan.calico 192.168.109.64:123 Mar 6 03:03:22.294452 ntpd[1949]: Listen normally on 7 calidf7f832f654 [fe80::ecee:eeff:feee:eeee%4]:123 Mar 6 03:03:22.295966 ntpd[1949]: 6 Mar 03:03:22 ntpd[1949]: Listen normally on 6 vxlan.calico 192.168.109.64:123 Mar 6 03:03:22.295966 ntpd[1949]: 6 Mar 03:03:22 ntpd[1949]: Listen normally on 7 calidf7f832f654 [fe80::ecee:eeff:feee:eeee%4]:123 Mar 6 03:03:22.295966 ntpd[1949]: 6 Mar 03:03:22 ntpd[1949]: Listen normally on 8 vxlan.calico [fe80::6401:8dff:feaa:cf53%5]:123 Mar 6 03:03:22.294485 ntpd[1949]: Listen normally on 8 vxlan.calico [fe80::6401:8dff:feaa:cf53%5]:123 Mar 6 03:03:22.881931 containerd[1974]: time="2026-03-06T03:03:22.881611434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5ffpf,Uid:da0d6522-7f73-44d2-822c-74bfaeb1853a,Namespace:kube-system,Attempt:0,}" Mar 6 03:03:23.344544 systemd-networkd[1587]: cali7ef0b620530: Link UP Mar 6 03:03:23.345980 systemd-networkd[1587]: cali7ef0b620530: Gained carrier Mar 6 03:03:23.348835 (udev-worker)[5051]: Network interface NamePolicy= disabled on kernel command line. 
Mar 6 03:03:23.372403 containerd[1974]: 2026-03-06 03:03:23.045 [INFO][5015] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--55-k8s-coredns--66bc5c9577--5ffpf-eth0 coredns-66bc5c9577- kube-system da0d6522-7f73-44d2-822c-74bfaeb1853a 858 0 2026-03-06 03:02:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-19-55 coredns-66bc5c9577-5ffpf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7ef0b620530 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="f902de0071d509f55296f4b1709ffb0f312809c2b3053bded73c11d62e7839ce" Namespace="kube-system" Pod="coredns-66bc5c9577-5ffpf" WorkloadEndpoint="ip--172--31--19--55-k8s-coredns--66bc5c9577--5ffpf-" Mar 6 03:03:23.372403 containerd[1974]: 2026-03-06 03:03:23.049 [INFO][5015] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f902de0071d509f55296f4b1709ffb0f312809c2b3053bded73c11d62e7839ce" Namespace="kube-system" Pod="coredns-66bc5c9577-5ffpf" WorkloadEndpoint="ip--172--31--19--55-k8s-coredns--66bc5c9577--5ffpf-eth0" Mar 6 03:03:23.372403 containerd[1974]: 2026-03-06 03:03:23.271 [INFO][5027] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f902de0071d509f55296f4b1709ffb0f312809c2b3053bded73c11d62e7839ce" HandleID="k8s-pod-network.f902de0071d509f55296f4b1709ffb0f312809c2b3053bded73c11d62e7839ce" Workload="ip--172--31--19--55-k8s-coredns--66bc5c9577--5ffpf-eth0" Mar 6 03:03:23.372975 containerd[1974]: 2026-03-06 03:03:23.289 [INFO][5027] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f902de0071d509f55296f4b1709ffb0f312809c2b3053bded73c11d62e7839ce" HandleID="k8s-pod-network.f902de0071d509f55296f4b1709ffb0f312809c2b3053bded73c11d62e7839ce" 
Workload="ip--172--31--19--55-k8s-coredns--66bc5c9577--5ffpf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002bfde0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-19-55", "pod":"coredns-66bc5c9577-5ffpf", "timestamp":"2026-03-06 03:03:23.271782261 +0000 UTC"}, Hostname:"ip-172-31-19-55", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00051b600)} Mar 6 03:03:23.372975 containerd[1974]: 2026-03-06 03:03:23.289 [INFO][5027] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 03:03:23.372975 containerd[1974]: 2026-03-06 03:03:23.289 [INFO][5027] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 03:03:23.372975 containerd[1974]: 2026-03-06 03:03:23.292 [INFO][5027] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-55' Mar 6 03:03:23.372975 containerd[1974]: 2026-03-06 03:03:23.297 [INFO][5027] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f902de0071d509f55296f4b1709ffb0f312809c2b3053bded73c11d62e7839ce" host="ip-172-31-19-55" Mar 6 03:03:23.372975 containerd[1974]: 2026-03-06 03:03:23.305 [INFO][5027] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-19-55" Mar 6 03:03:23.372975 containerd[1974]: 2026-03-06 03:03:23.312 [INFO][5027] ipam/ipam.go 526: Trying affinity for 192.168.109.64/26 host="ip-172-31-19-55" Mar 6 03:03:23.372975 containerd[1974]: 2026-03-06 03:03:23.315 [INFO][5027] ipam/ipam.go 160: Attempting to load block cidr=192.168.109.64/26 host="ip-172-31-19-55" Mar 6 03:03:23.372975 containerd[1974]: 2026-03-06 03:03:23.317 [INFO][5027] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.109.64/26 host="ip-172-31-19-55" Mar 6 03:03:23.373456 containerd[1974]: 2026-03-06 03:03:23.317 
[INFO][5027] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.109.64/26 handle="k8s-pod-network.f902de0071d509f55296f4b1709ffb0f312809c2b3053bded73c11d62e7839ce" host="ip-172-31-19-55" Mar 6 03:03:23.373456 containerd[1974]: 2026-03-06 03:03:23.319 [INFO][5027] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f902de0071d509f55296f4b1709ffb0f312809c2b3053bded73c11d62e7839ce Mar 6 03:03:23.373456 containerd[1974]: 2026-03-06 03:03:23.326 [INFO][5027] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.109.64/26 handle="k8s-pod-network.f902de0071d509f55296f4b1709ffb0f312809c2b3053bded73c11d62e7839ce" host="ip-172-31-19-55" Mar 6 03:03:23.373456 containerd[1974]: 2026-03-06 03:03:23.334 [INFO][5027] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.109.66/26] block=192.168.109.64/26 handle="k8s-pod-network.f902de0071d509f55296f4b1709ffb0f312809c2b3053bded73c11d62e7839ce" host="ip-172-31-19-55" Mar 6 03:03:23.373456 containerd[1974]: 2026-03-06 03:03:23.334 [INFO][5027] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.109.66/26] handle="k8s-pod-network.f902de0071d509f55296f4b1709ffb0f312809c2b3053bded73c11d62e7839ce" host="ip-172-31-19-55" Mar 6 03:03:23.373456 containerd[1974]: 2026-03-06 03:03:23.334 [INFO][5027] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 6 03:03:23.373456 containerd[1974]: 2026-03-06 03:03:23.334 [INFO][5027] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.109.66/26] IPv6=[] ContainerID="f902de0071d509f55296f4b1709ffb0f312809c2b3053bded73c11d62e7839ce" HandleID="k8s-pod-network.f902de0071d509f55296f4b1709ffb0f312809c2b3053bded73c11d62e7839ce" Workload="ip--172--31--19--55-k8s-coredns--66bc5c9577--5ffpf-eth0" Mar 6 03:03:23.374043 containerd[1974]: 2026-03-06 03:03:23.337 [INFO][5015] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f902de0071d509f55296f4b1709ffb0f312809c2b3053bded73c11d62e7839ce" Namespace="kube-system" Pod="coredns-66bc5c9577-5ffpf" WorkloadEndpoint="ip--172--31--19--55-k8s-coredns--66bc5c9577--5ffpf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--55-k8s-coredns--66bc5c9577--5ffpf-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"da0d6522-7f73-44d2-822c-74bfaeb1853a", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 2, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-55", ContainerID:"", Pod:"coredns-66bc5c9577-5ffpf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7ef0b620530", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:03:23.374043 containerd[1974]: 2026-03-06 03:03:23.337 [INFO][5015] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.66/32] ContainerID="f902de0071d509f55296f4b1709ffb0f312809c2b3053bded73c11d62e7839ce" Namespace="kube-system" Pod="coredns-66bc5c9577-5ffpf" WorkloadEndpoint="ip--172--31--19--55-k8s-coredns--66bc5c9577--5ffpf-eth0" Mar 6 03:03:23.374043 containerd[1974]: 2026-03-06 03:03:23.337 [INFO][5015] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7ef0b620530 ContainerID="f902de0071d509f55296f4b1709ffb0f312809c2b3053bded73c11d62e7839ce" Namespace="kube-system" Pod="coredns-66bc5c9577-5ffpf" WorkloadEndpoint="ip--172--31--19--55-k8s-coredns--66bc5c9577--5ffpf-eth0" Mar 6 03:03:23.374043 containerd[1974]: 2026-03-06 03:03:23.346 [INFO][5015] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f902de0071d509f55296f4b1709ffb0f312809c2b3053bded73c11d62e7839ce" Namespace="kube-system" Pod="coredns-66bc5c9577-5ffpf" WorkloadEndpoint="ip--172--31--19--55-k8s-coredns--66bc5c9577--5ffpf-eth0" Mar 6 03:03:23.374043 containerd[1974]: 2026-03-06 03:03:23.347 [INFO][5015] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f902de0071d509f55296f4b1709ffb0f312809c2b3053bded73c11d62e7839ce" Namespace="kube-system" Pod="coredns-66bc5c9577-5ffpf" WorkloadEndpoint="ip--172--31--19--55-k8s-coredns--66bc5c9577--5ffpf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--55-k8s-coredns--66bc5c9577--5ffpf-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"da0d6522-7f73-44d2-822c-74bfaeb1853a", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 2, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-55", ContainerID:"f902de0071d509f55296f4b1709ffb0f312809c2b3053bded73c11d62e7839ce", Pod:"coredns-66bc5c9577-5ffpf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7ef0b620530", MAC:"ce:1b:80:da:6c:03", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:03:23.374043 containerd[1974]: 2026-03-06 03:03:23.367 [INFO][5015] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f902de0071d509f55296f4b1709ffb0f312809c2b3053bded73c11d62e7839ce" Namespace="kube-system" Pod="coredns-66bc5c9577-5ffpf" WorkloadEndpoint="ip--172--31--19--55-k8s-coredns--66bc5c9577--5ffpf-eth0" Mar 6 03:03:23.450031 containerd[1974]: time="2026-03-06T03:03:23.449331689Z" level=info msg="connecting to shim f902de0071d509f55296f4b1709ffb0f312809c2b3053bded73c11d62e7839ce" address="unix:///run/containerd/s/412ad838efea4494ab546156388e7c9cf41f8c356f1c6e2847932274efd3d6e0" namespace=k8s.io protocol=ttrpc version=3 Mar 6 03:03:23.496570 systemd[1]: Started cri-containerd-f902de0071d509f55296f4b1709ffb0f312809c2b3053bded73c11d62e7839ce.scope - libcontainer container f902de0071d509f55296f4b1709ffb0f312809c2b3053bded73c11d62e7839ce. 
Mar 6 03:03:23.570001 containerd[1974]: time="2026-03-06T03:03:23.569943196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5ffpf,Uid:da0d6522-7f73-44d2-822c-74bfaeb1853a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f902de0071d509f55296f4b1709ffb0f312809c2b3053bded73c11d62e7839ce\"" Mar 6 03:03:23.576115 containerd[1974]: time="2026-03-06T03:03:23.576060142Z" level=info msg="CreateContainer within sandbox \"f902de0071d509f55296f4b1709ffb0f312809c2b3053bded73c11d62e7839ce\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 6 03:03:23.612124 containerd[1974]: time="2026-03-06T03:03:23.609311428Z" level=info msg="Container d0b4ba42835b466c512631d258f76c4b2e057a83e47587d5296aa137a1bf1aa2: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:03:23.614000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4057646185.mount: Deactivated successfully. Mar 6 03:03:23.618992 containerd[1974]: time="2026-03-06T03:03:23.618945343Z" level=info msg="CreateContainer within sandbox \"f902de0071d509f55296f4b1709ffb0f312809c2b3053bded73c11d62e7839ce\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d0b4ba42835b466c512631d258f76c4b2e057a83e47587d5296aa137a1bf1aa2\"" Mar 6 03:03:23.620128 containerd[1974]: time="2026-03-06T03:03:23.619615832Z" level=info msg="StartContainer for \"d0b4ba42835b466c512631d258f76c4b2e057a83e47587d5296aa137a1bf1aa2\"" Mar 6 03:03:23.620955 containerd[1974]: time="2026-03-06T03:03:23.620913404Z" level=info msg="connecting to shim d0b4ba42835b466c512631d258f76c4b2e057a83e47587d5296aa137a1bf1aa2" address="unix:///run/containerd/s/412ad838efea4494ab546156388e7c9cf41f8c356f1c6e2847932274efd3d6e0" protocol=ttrpc version=3 Mar 6 03:03:23.647440 systemd[1]: Started cri-containerd-d0b4ba42835b466c512631d258f76c4b2e057a83e47587d5296aa137a1bf1aa2.scope - libcontainer container d0b4ba42835b466c512631d258f76c4b2e057a83e47587d5296aa137a1bf1aa2. 
Mar 6 03:03:23.685206 containerd[1974]: time="2026-03-06T03:03:23.685166138Z" level=info msg="StartContainer for \"d0b4ba42835b466c512631d258f76c4b2e057a83e47587d5296aa137a1bf1aa2\" returns successfully" Mar 6 03:03:23.877509 containerd[1974]: time="2026-03-06T03:03:23.877397183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-5dnxg,Uid:a78dc96c-a4d4-4f59-b749-46bf525f56a0,Namespace:calico-system,Attempt:0,}" Mar 6 03:03:23.894681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3347909215.mount: Deactivated successfully. Mar 6 03:03:24.015582 systemd-networkd[1587]: cali1351078169d: Link UP Mar 6 03:03:24.016553 (udev-worker)[5053]: Network interface NamePolicy= disabled on kernel command line. Mar 6 03:03:24.017268 systemd-networkd[1587]: cali1351078169d: Gained carrier Mar 6 03:03:24.041386 containerd[1974]: 2026-03-06 03:03:23.929 [INFO][5145] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--55-k8s-goldmane--cccfbd5cf--5dnxg-eth0 goldmane-cccfbd5cf- calico-system a78dc96c-a4d4-4f59-b749-46bf525f56a0 848 0 2026-03-06 03:02:42 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-19-55 goldmane-cccfbd5cf-5dnxg eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1351078169d [] [] }} ContainerID="296f3c44208bdb5483d487ade6375dd519ca32a64b34993ca7dac50bf6f62ae0" Namespace="calico-system" Pod="goldmane-cccfbd5cf-5dnxg" WorkloadEndpoint="ip--172--31--19--55-k8s-goldmane--cccfbd5cf--5dnxg-" Mar 6 03:03:24.041386 containerd[1974]: 2026-03-06 03:03:23.929 [INFO][5145] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="296f3c44208bdb5483d487ade6375dd519ca32a64b34993ca7dac50bf6f62ae0" Namespace="calico-system" 
Pod="goldmane-cccfbd5cf-5dnxg" WorkloadEndpoint="ip--172--31--19--55-k8s-goldmane--cccfbd5cf--5dnxg-eth0" Mar 6 03:03:24.041386 containerd[1974]: 2026-03-06 03:03:23.961 [INFO][5156] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="296f3c44208bdb5483d487ade6375dd519ca32a64b34993ca7dac50bf6f62ae0" HandleID="k8s-pod-network.296f3c44208bdb5483d487ade6375dd519ca32a64b34993ca7dac50bf6f62ae0" Workload="ip--172--31--19--55-k8s-goldmane--cccfbd5cf--5dnxg-eth0" Mar 6 03:03:24.041386 containerd[1974]: 2026-03-06 03:03:23.968 [INFO][5156] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="296f3c44208bdb5483d487ade6375dd519ca32a64b34993ca7dac50bf6f62ae0" HandleID="k8s-pod-network.296f3c44208bdb5483d487ade6375dd519ca32a64b34993ca7dac50bf6f62ae0" Workload="ip--172--31--19--55-k8s-goldmane--cccfbd5cf--5dnxg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef510), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-55", "pod":"goldmane-cccfbd5cf-5dnxg", "timestamp":"2026-03-06 03:03:23.961121443 +0000 UTC"}, Hostname:"ip-172-31-19-55", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003e9080)} Mar 6 03:03:24.041386 containerd[1974]: 2026-03-06 03:03:23.968 [INFO][5156] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 03:03:24.041386 containerd[1974]: 2026-03-06 03:03:23.969 [INFO][5156] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 03:03:24.041386 containerd[1974]: 2026-03-06 03:03:23.969 [INFO][5156] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-55' Mar 6 03:03:24.041386 containerd[1974]: 2026-03-06 03:03:23.974 [INFO][5156] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.296f3c44208bdb5483d487ade6375dd519ca32a64b34993ca7dac50bf6f62ae0" host="ip-172-31-19-55" Mar 6 03:03:24.041386 containerd[1974]: 2026-03-06 03:03:23.982 [INFO][5156] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-19-55" Mar 6 03:03:24.041386 containerd[1974]: 2026-03-06 03:03:23.988 [INFO][5156] ipam/ipam.go 526: Trying affinity for 192.168.109.64/26 host="ip-172-31-19-55" Mar 6 03:03:24.041386 containerd[1974]: 2026-03-06 03:03:23.990 [INFO][5156] ipam/ipam.go 160: Attempting to load block cidr=192.168.109.64/26 host="ip-172-31-19-55" Mar 6 03:03:24.041386 containerd[1974]: 2026-03-06 03:03:23.992 [INFO][5156] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.109.64/26 host="ip-172-31-19-55" Mar 6 03:03:24.041386 containerd[1974]: 2026-03-06 03:03:23.993 [INFO][5156] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.109.64/26 handle="k8s-pod-network.296f3c44208bdb5483d487ade6375dd519ca32a64b34993ca7dac50bf6f62ae0" host="ip-172-31-19-55" Mar 6 03:03:24.041386 containerd[1974]: 2026-03-06 03:03:23.994 [INFO][5156] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.296f3c44208bdb5483d487ade6375dd519ca32a64b34993ca7dac50bf6f62ae0 Mar 6 03:03:24.041386 containerd[1974]: 2026-03-06 03:03:23.999 [INFO][5156] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.109.64/26 handle="k8s-pod-network.296f3c44208bdb5483d487ade6375dd519ca32a64b34993ca7dac50bf6f62ae0" host="ip-172-31-19-55" Mar 6 03:03:24.041386 containerd[1974]: 2026-03-06 03:03:24.008 [INFO][5156] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.109.67/26] block=192.168.109.64/26 
handle="k8s-pod-network.296f3c44208bdb5483d487ade6375dd519ca32a64b34993ca7dac50bf6f62ae0" host="ip-172-31-19-55" Mar 6 03:03:24.041386 containerd[1974]: 2026-03-06 03:03:24.008 [INFO][5156] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.109.67/26] handle="k8s-pod-network.296f3c44208bdb5483d487ade6375dd519ca32a64b34993ca7dac50bf6f62ae0" host="ip-172-31-19-55" Mar 6 03:03:24.041386 containerd[1974]: 2026-03-06 03:03:24.009 [INFO][5156] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 03:03:24.041386 containerd[1974]: 2026-03-06 03:03:24.009 [INFO][5156] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.109.67/26] IPv6=[] ContainerID="296f3c44208bdb5483d487ade6375dd519ca32a64b34993ca7dac50bf6f62ae0" HandleID="k8s-pod-network.296f3c44208bdb5483d487ade6375dd519ca32a64b34993ca7dac50bf6f62ae0" Workload="ip--172--31--19--55-k8s-goldmane--cccfbd5cf--5dnxg-eth0" Mar 6 03:03:24.044628 containerd[1974]: 2026-03-06 03:03:24.012 [INFO][5145] cni-plugin/k8s.go 418: Populated endpoint ContainerID="296f3c44208bdb5483d487ade6375dd519ca32a64b34993ca7dac50bf6f62ae0" Namespace="calico-system" Pod="goldmane-cccfbd5cf-5dnxg" WorkloadEndpoint="ip--172--31--19--55-k8s-goldmane--cccfbd5cf--5dnxg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--55-k8s-goldmane--cccfbd5cf--5dnxg-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"a78dc96c-a4d4-4f59-b749-46bf525f56a0", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 2, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-55", ContainerID:"", Pod:"goldmane-cccfbd5cf-5dnxg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.109.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1351078169d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:03:24.044628 containerd[1974]: 2026-03-06 03:03:24.012 [INFO][5145] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.67/32] ContainerID="296f3c44208bdb5483d487ade6375dd519ca32a64b34993ca7dac50bf6f62ae0" Namespace="calico-system" Pod="goldmane-cccfbd5cf-5dnxg" WorkloadEndpoint="ip--172--31--19--55-k8s-goldmane--cccfbd5cf--5dnxg-eth0" Mar 6 03:03:24.044628 containerd[1974]: 2026-03-06 03:03:24.012 [INFO][5145] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1351078169d ContainerID="296f3c44208bdb5483d487ade6375dd519ca32a64b34993ca7dac50bf6f62ae0" Namespace="calico-system" Pod="goldmane-cccfbd5cf-5dnxg" WorkloadEndpoint="ip--172--31--19--55-k8s-goldmane--cccfbd5cf--5dnxg-eth0" Mar 6 03:03:24.044628 containerd[1974]: 2026-03-06 03:03:24.015 [INFO][5145] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="296f3c44208bdb5483d487ade6375dd519ca32a64b34993ca7dac50bf6f62ae0" Namespace="calico-system" Pod="goldmane-cccfbd5cf-5dnxg" WorkloadEndpoint="ip--172--31--19--55-k8s-goldmane--cccfbd5cf--5dnxg-eth0" Mar 6 03:03:24.044628 containerd[1974]: 2026-03-06 03:03:24.016 [INFO][5145] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="296f3c44208bdb5483d487ade6375dd519ca32a64b34993ca7dac50bf6f62ae0" Namespace="calico-system" 
Pod="goldmane-cccfbd5cf-5dnxg" WorkloadEndpoint="ip--172--31--19--55-k8s-goldmane--cccfbd5cf--5dnxg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--55-k8s-goldmane--cccfbd5cf--5dnxg-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"a78dc96c-a4d4-4f59-b749-46bf525f56a0", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 2, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-55", ContainerID:"296f3c44208bdb5483d487ade6375dd519ca32a64b34993ca7dac50bf6f62ae0", Pod:"goldmane-cccfbd5cf-5dnxg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.109.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1351078169d", MAC:"56:95:bf:2b:3e:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:03:24.044628 containerd[1974]: 2026-03-06 03:03:24.034 [INFO][5145] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="296f3c44208bdb5483d487ade6375dd519ca32a64b34993ca7dac50bf6f62ae0" Namespace="calico-system" Pod="goldmane-cccfbd5cf-5dnxg" WorkloadEndpoint="ip--172--31--19--55-k8s-goldmane--cccfbd5cf--5dnxg-eth0" Mar 6 03:03:24.080233 containerd[1974]: 
time="2026-03-06T03:03:24.080179807Z" level=info msg="connecting to shim 296f3c44208bdb5483d487ade6375dd519ca32a64b34993ca7dac50bf6f62ae0" address="unix:///run/containerd/s/792658635613af17a414cc238f1ec9a23d0f8900557103f560043647a87202bf" namespace=k8s.io protocol=ttrpc version=3 Mar 6 03:03:24.119368 systemd[1]: Started cri-containerd-296f3c44208bdb5483d487ade6375dd519ca32a64b34993ca7dac50bf6f62ae0.scope - libcontainer container 296f3c44208bdb5483d487ade6375dd519ca32a64b34993ca7dac50bf6f62ae0. Mar 6 03:03:24.218920 containerd[1974]: time="2026-03-06T03:03:24.218848328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-5dnxg,Uid:a78dc96c-a4d4-4f59-b749-46bf525f56a0,Namespace:calico-system,Attempt:0,} returns sandbox id \"296f3c44208bdb5483d487ade6375dd519ca32a64b34993ca7dac50bf6f62ae0\"" Mar 6 03:03:24.221530 containerd[1974]: time="2026-03-06T03:03:24.221493965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 6 03:03:24.580484 systemd-networkd[1587]: cali7ef0b620530: Gained IPv6LL Mar 6 03:03:25.876963 containerd[1974]: time="2026-03-06T03:03:25.876908756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5989c68dd5-n5d4q,Uid:6ae00c22-7641-4163-a70e-2653ea8e078a,Namespace:calico-system,Attempt:0,}" Mar 6 03:03:25.880981 containerd[1974]: time="2026-03-06T03:03:25.879002197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f7cf46454-n5whp,Uid:efe7d89f-43e9-412f-b0f0-eaaf818145e0,Namespace:calico-system,Attempt:0,}" Mar 6 03:03:26.053990 systemd-networkd[1587]: cali1351078169d: Gained IPv6LL Mar 6 03:03:26.072066 systemd-networkd[1587]: cali13ab6802d6d: Link UP Mar 6 03:03:26.074111 systemd-networkd[1587]: cali13ab6802d6d: Gained carrier Mar 6 03:03:26.097642 kubelet[3516]: I0306 03:03:26.096879 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-5ffpf" podStartSLOduration=59.096855458 
podStartE2EDuration="59.096855458s" podCreationTimestamp="2026-03-06 03:02:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 03:03:24.232632727 +0000 UTC m=+63.500699225" watchObservedRunningTime="2026-03-06 03:03:26.096855458 +0000 UTC m=+65.364921956" Mar 6 03:03:26.102403 containerd[1974]: 2026-03-06 03:03:25.953 [INFO][5240] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--55-k8s-calico--kube--controllers--f7cf46454--n5whp-eth0 calico-kube-controllers-f7cf46454- calico-system efe7d89f-43e9-412f-b0f0-eaaf818145e0 851 0 2026-03-06 03:02:43 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:f7cf46454 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-19-55 calico-kube-controllers-f7cf46454-n5whp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali13ab6802d6d [] [] }} ContainerID="5203c494d9aae8f7e32a959cb23ddab6d7ce97edf696e4c7a588470a5d7b5b26" Namespace="calico-system" Pod="calico-kube-controllers-f7cf46454-n5whp" WorkloadEndpoint="ip--172--31--19--55-k8s-calico--kube--controllers--f7cf46454--n5whp-" Mar 6 03:03:26.102403 containerd[1974]: 2026-03-06 03:03:25.953 [INFO][5240] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5203c494d9aae8f7e32a959cb23ddab6d7ce97edf696e4c7a588470a5d7b5b26" Namespace="calico-system" Pod="calico-kube-controllers-f7cf46454-n5whp" WorkloadEndpoint="ip--172--31--19--55-k8s-calico--kube--controllers--f7cf46454--n5whp-eth0" Mar 6 03:03:26.102403 containerd[1974]: 2026-03-06 03:03:26.008 [INFO][5262] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="5203c494d9aae8f7e32a959cb23ddab6d7ce97edf696e4c7a588470a5d7b5b26" HandleID="k8s-pod-network.5203c494d9aae8f7e32a959cb23ddab6d7ce97edf696e4c7a588470a5d7b5b26" Workload="ip--172--31--19--55-k8s-calico--kube--controllers--f7cf46454--n5whp-eth0" Mar 6 03:03:26.102403 containerd[1974]: 2026-03-06 03:03:26.018 [INFO][5262] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5203c494d9aae8f7e32a959cb23ddab6d7ce97edf696e4c7a588470a5d7b5b26" HandleID="k8s-pod-network.5203c494d9aae8f7e32a959cb23ddab6d7ce97edf696e4c7a588470a5d7b5b26" Workload="ip--172--31--19--55-k8s-calico--kube--controllers--f7cf46454--n5whp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002771a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-55", "pod":"calico-kube-controllers-f7cf46454-n5whp", "timestamp":"2026-03-06 03:03:26.008628794 +0000 UTC"}, Hostname:"ip-172-31-19-55", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000122dc0)} Mar 6 03:03:26.102403 containerd[1974]: 2026-03-06 03:03:26.018 [INFO][5262] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 03:03:26.102403 containerd[1974]: 2026-03-06 03:03:26.018 [INFO][5262] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 03:03:26.102403 containerd[1974]: 2026-03-06 03:03:26.018 [INFO][5262] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-55' Mar 6 03:03:26.102403 containerd[1974]: 2026-03-06 03:03:26.022 [INFO][5262] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5203c494d9aae8f7e32a959cb23ddab6d7ce97edf696e4c7a588470a5d7b5b26" host="ip-172-31-19-55" Mar 6 03:03:26.102403 containerd[1974]: 2026-03-06 03:03:26.030 [INFO][5262] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-19-55" Mar 6 03:03:26.102403 containerd[1974]: 2026-03-06 03:03:26.036 [INFO][5262] ipam/ipam.go 526: Trying affinity for 192.168.109.64/26 host="ip-172-31-19-55" Mar 6 03:03:26.102403 containerd[1974]: 2026-03-06 03:03:26.039 [INFO][5262] ipam/ipam.go 160: Attempting to load block cidr=192.168.109.64/26 host="ip-172-31-19-55" Mar 6 03:03:26.102403 containerd[1974]: 2026-03-06 03:03:26.042 [INFO][5262] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.109.64/26 host="ip-172-31-19-55" Mar 6 03:03:26.102403 containerd[1974]: 2026-03-06 03:03:26.042 [INFO][5262] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.109.64/26 handle="k8s-pod-network.5203c494d9aae8f7e32a959cb23ddab6d7ce97edf696e4c7a588470a5d7b5b26" host="ip-172-31-19-55" Mar 6 03:03:26.102403 containerd[1974]: 2026-03-06 03:03:26.043 [INFO][5262] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5203c494d9aae8f7e32a959cb23ddab6d7ce97edf696e4c7a588470a5d7b5b26 Mar 6 03:03:26.102403 containerd[1974]: 2026-03-06 03:03:26.049 [INFO][5262] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.109.64/26 handle="k8s-pod-network.5203c494d9aae8f7e32a959cb23ddab6d7ce97edf696e4c7a588470a5d7b5b26" host="ip-172-31-19-55" Mar 6 03:03:26.102403 containerd[1974]: 2026-03-06 03:03:26.060 [INFO][5262] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.109.68/26] block=192.168.109.64/26 
handle="k8s-pod-network.5203c494d9aae8f7e32a959cb23ddab6d7ce97edf696e4c7a588470a5d7b5b26" host="ip-172-31-19-55" Mar 6 03:03:26.102403 containerd[1974]: 2026-03-06 03:03:26.061 [INFO][5262] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.109.68/26] handle="k8s-pod-network.5203c494d9aae8f7e32a959cb23ddab6d7ce97edf696e4c7a588470a5d7b5b26" host="ip-172-31-19-55" Mar 6 03:03:26.102403 containerd[1974]: 2026-03-06 03:03:26.061 [INFO][5262] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 03:03:26.102403 containerd[1974]: 2026-03-06 03:03:26.061 [INFO][5262] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.109.68/26] IPv6=[] ContainerID="5203c494d9aae8f7e32a959cb23ddab6d7ce97edf696e4c7a588470a5d7b5b26" HandleID="k8s-pod-network.5203c494d9aae8f7e32a959cb23ddab6d7ce97edf696e4c7a588470a5d7b5b26" Workload="ip--172--31--19--55-k8s-calico--kube--controllers--f7cf46454--n5whp-eth0" Mar 6 03:03:26.104504 containerd[1974]: 2026-03-06 03:03:26.066 [INFO][5240] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5203c494d9aae8f7e32a959cb23ddab6d7ce97edf696e4c7a588470a5d7b5b26" Namespace="calico-system" Pod="calico-kube-controllers-f7cf46454-n5whp" WorkloadEndpoint="ip--172--31--19--55-k8s-calico--kube--controllers--f7cf46454--n5whp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--55-k8s-calico--kube--controllers--f7cf46454--n5whp-eth0", GenerateName:"calico-kube-controllers-f7cf46454-", Namespace:"calico-system", SelfLink:"", UID:"efe7d89f-43e9-412f-b0f0-eaaf818145e0", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 2, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f7cf46454", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-55", ContainerID:"", Pod:"calico-kube-controllers-f7cf46454-n5whp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.109.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali13ab6802d6d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:03:26.104504 containerd[1974]: 2026-03-06 03:03:26.066 [INFO][5240] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.68/32] ContainerID="5203c494d9aae8f7e32a959cb23ddab6d7ce97edf696e4c7a588470a5d7b5b26" Namespace="calico-system" Pod="calico-kube-controllers-f7cf46454-n5whp" WorkloadEndpoint="ip--172--31--19--55-k8s-calico--kube--controllers--f7cf46454--n5whp-eth0" Mar 6 03:03:26.104504 containerd[1974]: 2026-03-06 03:03:26.066 [INFO][5240] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali13ab6802d6d ContainerID="5203c494d9aae8f7e32a959cb23ddab6d7ce97edf696e4c7a588470a5d7b5b26" Namespace="calico-system" Pod="calico-kube-controllers-f7cf46454-n5whp" WorkloadEndpoint="ip--172--31--19--55-k8s-calico--kube--controllers--f7cf46454--n5whp-eth0" Mar 6 03:03:26.104504 containerd[1974]: 2026-03-06 03:03:26.075 [INFO][5240] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5203c494d9aae8f7e32a959cb23ddab6d7ce97edf696e4c7a588470a5d7b5b26" Namespace="calico-system" Pod="calico-kube-controllers-f7cf46454-n5whp" 
WorkloadEndpoint="ip--172--31--19--55-k8s-calico--kube--controllers--f7cf46454--n5whp-eth0" Mar 6 03:03:26.104504 containerd[1974]: 2026-03-06 03:03:26.076 [INFO][5240] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5203c494d9aae8f7e32a959cb23ddab6d7ce97edf696e4c7a588470a5d7b5b26" Namespace="calico-system" Pod="calico-kube-controllers-f7cf46454-n5whp" WorkloadEndpoint="ip--172--31--19--55-k8s-calico--kube--controllers--f7cf46454--n5whp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--55-k8s-calico--kube--controllers--f7cf46454--n5whp-eth0", GenerateName:"calico-kube-controllers-f7cf46454-", Namespace:"calico-system", SelfLink:"", UID:"efe7d89f-43e9-412f-b0f0-eaaf818145e0", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 2, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f7cf46454", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-55", ContainerID:"5203c494d9aae8f7e32a959cb23ddab6d7ce97edf696e4c7a588470a5d7b5b26", Pod:"calico-kube-controllers-f7cf46454-n5whp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.109.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali13ab6802d6d", MAC:"d6:91:98:97:d8:37", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:03:26.104504 containerd[1974]: 2026-03-06 03:03:26.095 [INFO][5240] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5203c494d9aae8f7e32a959cb23ddab6d7ce97edf696e4c7a588470a5d7b5b26" Namespace="calico-system" Pod="calico-kube-controllers-f7cf46454-n5whp" WorkloadEndpoint="ip--172--31--19--55-k8s-calico--kube--controllers--f7cf46454--n5whp-eth0" Mar 6 03:03:26.149755 containerd[1974]: time="2026-03-06T03:03:26.149633009Z" level=info msg="connecting to shim 5203c494d9aae8f7e32a959cb23ddab6d7ce97edf696e4c7a588470a5d7b5b26" address="unix:///run/containerd/s/d945d9a7716af2cafcfa54958a3e34f99d0b94da8676bf459255f6569afab9aa" namespace=k8s.io protocol=ttrpc version=3 Mar 6 03:03:26.212427 systemd[1]: Started cri-containerd-5203c494d9aae8f7e32a959cb23ddab6d7ce97edf696e4c7a588470a5d7b5b26.scope - libcontainer container 5203c494d9aae8f7e32a959cb23ddab6d7ce97edf696e4c7a588470a5d7b5b26. 
Mar 6 03:03:26.229756 systemd-networkd[1587]: calibfce7d28505: Link UP Mar 6 03:03:26.230735 systemd-networkd[1587]: calibfce7d28505: Gained carrier Mar 6 03:03:26.258622 containerd[1974]: 2026-03-06 03:03:25.962 [INFO][5239] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--55-k8s-calico--apiserver--5989c68dd5--n5d4q-eth0 calico-apiserver-5989c68dd5- calico-system 6ae00c22-7641-4163-a70e-2653ea8e078a 855 0 2026-03-06 03:02:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5989c68dd5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-55 calico-apiserver-5989c68dd5-n5d4q eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calibfce7d28505 [] [] }} ContainerID="28917db1dc4fdc84a91f47f3cd125a86bd85e3ade8fce6d399752d6236af50c0" Namespace="calico-system" Pod="calico-apiserver-5989c68dd5-n5d4q" WorkloadEndpoint="ip--172--31--19--55-k8s-calico--apiserver--5989c68dd5--n5d4q-" Mar 6 03:03:26.258622 containerd[1974]: 2026-03-06 03:03:25.962 [INFO][5239] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="28917db1dc4fdc84a91f47f3cd125a86bd85e3ade8fce6d399752d6236af50c0" Namespace="calico-system" Pod="calico-apiserver-5989c68dd5-n5d4q" WorkloadEndpoint="ip--172--31--19--55-k8s-calico--apiserver--5989c68dd5--n5d4q-eth0" Mar 6 03:03:26.258622 containerd[1974]: 2026-03-06 03:03:26.009 [INFO][5267] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28917db1dc4fdc84a91f47f3cd125a86bd85e3ade8fce6d399752d6236af50c0" HandleID="k8s-pod-network.28917db1dc4fdc84a91f47f3cd125a86bd85e3ade8fce6d399752d6236af50c0" Workload="ip--172--31--19--55-k8s-calico--apiserver--5989c68dd5--n5d4q-eth0" Mar 6 03:03:26.258622 containerd[1974]: 2026-03-06 03:03:26.020 [INFO][5267] 
ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="28917db1dc4fdc84a91f47f3cd125a86bd85e3ade8fce6d399752d6236af50c0" HandleID="k8s-pod-network.28917db1dc4fdc84a91f47f3cd125a86bd85e3ade8fce6d399752d6236af50c0" Workload="ip--172--31--19--55-k8s-calico--apiserver--5989c68dd5--n5d4q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277a10), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-55", "pod":"calico-apiserver-5989c68dd5-n5d4q", "timestamp":"2026-03-06 03:03:26.009681556 +0000 UTC"}, Hostname:"ip-172-31-19-55", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00038f1e0)} Mar 6 03:03:26.258622 containerd[1974]: 2026-03-06 03:03:26.020 [INFO][5267] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 03:03:26.258622 containerd[1974]: 2026-03-06 03:03:26.061 [INFO][5267] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 03:03:26.258622 containerd[1974]: 2026-03-06 03:03:26.061 [INFO][5267] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-55' Mar 6 03:03:26.258622 containerd[1974]: 2026-03-06 03:03:26.125 [INFO][5267] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.28917db1dc4fdc84a91f47f3cd125a86bd85e3ade8fce6d399752d6236af50c0" host="ip-172-31-19-55" Mar 6 03:03:26.258622 containerd[1974]: 2026-03-06 03:03:26.139 [INFO][5267] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-19-55" Mar 6 03:03:26.258622 containerd[1974]: 2026-03-06 03:03:26.154 [INFO][5267] ipam/ipam.go 526: Trying affinity for 192.168.109.64/26 host="ip-172-31-19-55" Mar 6 03:03:26.258622 containerd[1974]: 2026-03-06 03:03:26.160 [INFO][5267] ipam/ipam.go 160: Attempting to load block cidr=192.168.109.64/26 host="ip-172-31-19-55" Mar 6 03:03:26.258622 containerd[1974]: 2026-03-06 03:03:26.171 [INFO][5267] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.109.64/26 host="ip-172-31-19-55" Mar 6 03:03:26.258622 containerd[1974]: 2026-03-06 03:03:26.176 [INFO][5267] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.109.64/26 handle="k8s-pod-network.28917db1dc4fdc84a91f47f3cd125a86bd85e3ade8fce6d399752d6236af50c0" host="ip-172-31-19-55" Mar 6 03:03:26.258622 containerd[1974]: 2026-03-06 03:03:26.179 [INFO][5267] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.28917db1dc4fdc84a91f47f3cd125a86bd85e3ade8fce6d399752d6236af50c0 Mar 6 03:03:26.258622 containerd[1974]: 2026-03-06 03:03:26.189 [INFO][5267] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.109.64/26 handle="k8s-pod-network.28917db1dc4fdc84a91f47f3cd125a86bd85e3ade8fce6d399752d6236af50c0" host="ip-172-31-19-55" Mar 6 03:03:26.258622 containerd[1974]: 2026-03-06 03:03:26.204 [INFO][5267] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.109.69/26] block=192.168.109.64/26 
handle="k8s-pod-network.28917db1dc4fdc84a91f47f3cd125a86bd85e3ade8fce6d399752d6236af50c0" host="ip-172-31-19-55" Mar 6 03:03:26.258622 containerd[1974]: 2026-03-06 03:03:26.204 [INFO][5267] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.109.69/26] handle="k8s-pod-network.28917db1dc4fdc84a91f47f3cd125a86bd85e3ade8fce6d399752d6236af50c0" host="ip-172-31-19-55" Mar 6 03:03:26.258622 containerd[1974]: 2026-03-06 03:03:26.206 [INFO][5267] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 03:03:26.258622 containerd[1974]: 2026-03-06 03:03:26.206 [INFO][5267] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.109.69/26] IPv6=[] ContainerID="28917db1dc4fdc84a91f47f3cd125a86bd85e3ade8fce6d399752d6236af50c0" HandleID="k8s-pod-network.28917db1dc4fdc84a91f47f3cd125a86bd85e3ade8fce6d399752d6236af50c0" Workload="ip--172--31--19--55-k8s-calico--apiserver--5989c68dd5--n5d4q-eth0" Mar 6 03:03:26.259899 containerd[1974]: 2026-03-06 03:03:26.223 [INFO][5239] cni-plugin/k8s.go 418: Populated endpoint ContainerID="28917db1dc4fdc84a91f47f3cd125a86bd85e3ade8fce6d399752d6236af50c0" Namespace="calico-system" Pod="calico-apiserver-5989c68dd5-n5d4q" WorkloadEndpoint="ip--172--31--19--55-k8s-calico--apiserver--5989c68dd5--n5d4q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--55-k8s-calico--apiserver--5989c68dd5--n5d4q-eth0", GenerateName:"calico-apiserver-5989c68dd5-", Namespace:"calico-system", SelfLink:"", UID:"6ae00c22-7641-4163-a70e-2653ea8e078a", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 2, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5989c68dd5", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-55", ContainerID:"", Pod:"calico-apiserver-5989c68dd5-n5d4q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibfce7d28505", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:03:26.259899 containerd[1974]: 2026-03-06 03:03:26.223 [INFO][5239] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.69/32] ContainerID="28917db1dc4fdc84a91f47f3cd125a86bd85e3ade8fce6d399752d6236af50c0" Namespace="calico-system" Pod="calico-apiserver-5989c68dd5-n5d4q" WorkloadEndpoint="ip--172--31--19--55-k8s-calico--apiserver--5989c68dd5--n5d4q-eth0" Mar 6 03:03:26.259899 containerd[1974]: 2026-03-06 03:03:26.223 [INFO][5239] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibfce7d28505 ContainerID="28917db1dc4fdc84a91f47f3cd125a86bd85e3ade8fce6d399752d6236af50c0" Namespace="calico-system" Pod="calico-apiserver-5989c68dd5-n5d4q" WorkloadEndpoint="ip--172--31--19--55-k8s-calico--apiserver--5989c68dd5--n5d4q-eth0" Mar 6 03:03:26.259899 containerd[1974]: 2026-03-06 03:03:26.231 [INFO][5239] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28917db1dc4fdc84a91f47f3cd125a86bd85e3ade8fce6d399752d6236af50c0" Namespace="calico-system" Pod="calico-apiserver-5989c68dd5-n5d4q" WorkloadEndpoint="ip--172--31--19--55-k8s-calico--apiserver--5989c68dd5--n5d4q-eth0" Mar 6 03:03:26.259899 containerd[1974]: 2026-03-06 03:03:26.232 [INFO][5239] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="28917db1dc4fdc84a91f47f3cd125a86bd85e3ade8fce6d399752d6236af50c0" Namespace="calico-system" Pod="calico-apiserver-5989c68dd5-n5d4q" WorkloadEndpoint="ip--172--31--19--55-k8s-calico--apiserver--5989c68dd5--n5d4q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--55-k8s-calico--apiserver--5989c68dd5--n5d4q-eth0", GenerateName:"calico-apiserver-5989c68dd5-", Namespace:"calico-system", SelfLink:"", UID:"6ae00c22-7641-4163-a70e-2653ea8e078a", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 2, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5989c68dd5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-55", ContainerID:"28917db1dc4fdc84a91f47f3cd125a86bd85e3ade8fce6d399752d6236af50c0", Pod:"calico-apiserver-5989c68dd5-n5d4q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibfce7d28505", MAC:"a6:cc:a9:42:dd:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:03:26.259899 containerd[1974]: 2026-03-06 03:03:26.252 [INFO][5239] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="28917db1dc4fdc84a91f47f3cd125a86bd85e3ade8fce6d399752d6236af50c0" Namespace="calico-system" Pod="calico-apiserver-5989c68dd5-n5d4q" WorkloadEndpoint="ip--172--31--19--55-k8s-calico--apiserver--5989c68dd5--n5d4q-eth0" Mar 6 03:03:26.305589 containerd[1974]: time="2026-03-06T03:03:26.305460147Z" level=info msg="connecting to shim 28917db1dc4fdc84a91f47f3cd125a86bd85e3ade8fce6d399752d6236af50c0" address="unix:///run/containerd/s/5e9f1fc5374bda941232d466cfe41419067f437e108f022bae37615ec3ec0337" namespace=k8s.io protocol=ttrpc version=3 Mar 6 03:03:26.368588 systemd[1]: Started cri-containerd-28917db1dc4fdc84a91f47f3cd125a86bd85e3ade8fce6d399752d6236af50c0.scope - libcontainer container 28917db1dc4fdc84a91f47f3cd125a86bd85e3ade8fce6d399752d6236af50c0. Mar 6 03:03:26.403202 containerd[1974]: time="2026-03-06T03:03:26.401327171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f7cf46454-n5whp,Uid:efe7d89f-43e9-412f-b0f0-eaaf818145e0,Namespace:calico-system,Attempt:0,} returns sandbox id \"5203c494d9aae8f7e32a959cb23ddab6d7ce97edf696e4c7a588470a5d7b5b26\"" Mar 6 03:03:26.457530 containerd[1974]: time="2026-03-06T03:03:26.457456170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5989c68dd5-n5d4q,Uid:6ae00c22-7641-4163-a70e-2653ea8e078a,Namespace:calico-system,Attempt:0,} returns sandbox id \"28917db1dc4fdc84a91f47f3cd125a86bd85e3ade8fce6d399752d6236af50c0\"" Mar 6 03:03:26.881611 containerd[1974]: time="2026-03-06T03:03:26.881231530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5989c68dd5-vzsrv,Uid:5334d146-111c-4115-be1f-bc5585aaa496,Namespace:calico-system,Attempt:0,}" Mar 6 03:03:26.908779 containerd[1974]: time="2026-03-06T03:03:26.906224673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pqjtt,Uid:d7578741-0fdd-4a51-9663-c9a667059e00,Namespace:kube-system,Attempt:0,}" Mar 6 03:03:27.323488 
systemd-networkd[1587]: cali012d4a3ad42: Link UP Mar 6 03:03:27.323707 systemd-networkd[1587]: cali012d4a3ad42: Gained carrier Mar 6 03:03:27.367997 containerd[1974]: 2026-03-06 03:03:27.123 [INFO][5418] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--55-k8s-coredns--66bc5c9577--pqjtt-eth0 coredns-66bc5c9577- kube-system d7578741-0fdd-4a51-9663-c9a667059e00 860 0 2026-03-06 03:02:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-19-55 coredns-66bc5c9577-pqjtt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali012d4a3ad42 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="6d27a5f8b7bb877c43e6e681c08e936c2739a3bbef9a9302d7c9e1bcb631c6fb" Namespace="kube-system" Pod="coredns-66bc5c9577-pqjtt" WorkloadEndpoint="ip--172--31--19--55-k8s-coredns--66bc5c9577--pqjtt-" Mar 6 03:03:27.367997 containerd[1974]: 2026-03-06 03:03:27.123 [INFO][5418] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6d27a5f8b7bb877c43e6e681c08e936c2739a3bbef9a9302d7c9e1bcb631c6fb" Namespace="kube-system" Pod="coredns-66bc5c9577-pqjtt" WorkloadEndpoint="ip--172--31--19--55-k8s-coredns--66bc5c9577--pqjtt-eth0" Mar 6 03:03:27.367997 containerd[1974]: 2026-03-06 03:03:27.206 [INFO][5440] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d27a5f8b7bb877c43e6e681c08e936c2739a3bbef9a9302d7c9e1bcb631c6fb" HandleID="k8s-pod-network.6d27a5f8b7bb877c43e6e681c08e936c2739a3bbef9a9302d7c9e1bcb631c6fb" Workload="ip--172--31--19--55-k8s-coredns--66bc5c9577--pqjtt-eth0" Mar 6 03:03:27.367997 containerd[1974]: 2026-03-06 03:03:27.234 [INFO][5440] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="6d27a5f8b7bb877c43e6e681c08e936c2739a3bbef9a9302d7c9e1bcb631c6fb" HandleID="k8s-pod-network.6d27a5f8b7bb877c43e6e681c08e936c2739a3bbef9a9302d7c9e1bcb631c6fb" Workload="ip--172--31--19--55-k8s-coredns--66bc5c9577--pqjtt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037a160), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-19-55", "pod":"coredns-66bc5c9577-pqjtt", "timestamp":"2026-03-06 03:03:27.206267044 +0000 UTC"}, Hostname:"ip-172-31-19-55", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003cc580)} Mar 6 03:03:27.367997 containerd[1974]: 2026-03-06 03:03:27.234 [INFO][5440] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 03:03:27.367997 containerd[1974]: 2026-03-06 03:03:27.234 [INFO][5440] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 03:03:27.367997 containerd[1974]: 2026-03-06 03:03:27.235 [INFO][5440] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-55' Mar 6 03:03:27.367997 containerd[1974]: 2026-03-06 03:03:27.243 [INFO][5440] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6d27a5f8b7bb877c43e6e681c08e936c2739a3bbef9a9302d7c9e1bcb631c6fb" host="ip-172-31-19-55" Mar 6 03:03:27.367997 containerd[1974]: 2026-03-06 03:03:27.255 [INFO][5440] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-19-55" Mar 6 03:03:27.367997 containerd[1974]: 2026-03-06 03:03:27.268 [INFO][5440] ipam/ipam.go 526: Trying affinity for 192.168.109.64/26 host="ip-172-31-19-55" Mar 6 03:03:27.367997 containerd[1974]: 2026-03-06 03:03:27.272 [INFO][5440] ipam/ipam.go 160: Attempting to load block cidr=192.168.109.64/26 host="ip-172-31-19-55" Mar 6 03:03:27.367997 containerd[1974]: 2026-03-06 03:03:27.278 [INFO][5440] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.109.64/26 host="ip-172-31-19-55" Mar 6 03:03:27.367997 containerd[1974]: 2026-03-06 03:03:27.278 [INFO][5440] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.109.64/26 handle="k8s-pod-network.6d27a5f8b7bb877c43e6e681c08e936c2739a3bbef9a9302d7c9e1bcb631c6fb" host="ip-172-31-19-55" Mar 6 03:03:27.367997 containerd[1974]: 2026-03-06 03:03:27.282 [INFO][5440] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6d27a5f8b7bb877c43e6e681c08e936c2739a3bbef9a9302d7c9e1bcb631c6fb Mar 6 03:03:27.367997 containerd[1974]: 2026-03-06 03:03:27.292 [INFO][5440] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.109.64/26 handle="k8s-pod-network.6d27a5f8b7bb877c43e6e681c08e936c2739a3bbef9a9302d7c9e1bcb631c6fb" host="ip-172-31-19-55" Mar 6 03:03:27.367997 containerd[1974]: 2026-03-06 03:03:27.312 [INFO][5440] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.109.70/26] block=192.168.109.64/26 
handle="k8s-pod-network.6d27a5f8b7bb877c43e6e681c08e936c2739a3bbef9a9302d7c9e1bcb631c6fb" host="ip-172-31-19-55" Mar 6 03:03:27.367997 containerd[1974]: 2026-03-06 03:03:27.312 [INFO][5440] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.109.70/26] handle="k8s-pod-network.6d27a5f8b7bb877c43e6e681c08e936c2739a3bbef9a9302d7c9e1bcb631c6fb" host="ip-172-31-19-55" Mar 6 03:03:27.367997 containerd[1974]: 2026-03-06 03:03:27.312 [INFO][5440] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 03:03:27.367997 containerd[1974]: 2026-03-06 03:03:27.312 [INFO][5440] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.109.70/26] IPv6=[] ContainerID="6d27a5f8b7bb877c43e6e681c08e936c2739a3bbef9a9302d7c9e1bcb631c6fb" HandleID="k8s-pod-network.6d27a5f8b7bb877c43e6e681c08e936c2739a3bbef9a9302d7c9e1bcb631c6fb" Workload="ip--172--31--19--55-k8s-coredns--66bc5c9577--pqjtt-eth0" Mar 6 03:03:27.369061 containerd[1974]: 2026-03-06 03:03:27.318 [INFO][5418] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6d27a5f8b7bb877c43e6e681c08e936c2739a3bbef9a9302d7c9e1bcb631c6fb" Namespace="kube-system" Pod="coredns-66bc5c9577-pqjtt" WorkloadEndpoint="ip--172--31--19--55-k8s-coredns--66bc5c9577--pqjtt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--55-k8s-coredns--66bc5c9577--pqjtt-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d7578741-0fdd-4a51-9663-c9a667059e00", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 2, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-55", ContainerID:"", Pod:"coredns-66bc5c9577-pqjtt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali012d4a3ad42", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:03:27.369061 containerd[1974]: 2026-03-06 03:03:27.318 [INFO][5418] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.70/32] ContainerID="6d27a5f8b7bb877c43e6e681c08e936c2739a3bbef9a9302d7c9e1bcb631c6fb" Namespace="kube-system" Pod="coredns-66bc5c9577-pqjtt" WorkloadEndpoint="ip--172--31--19--55-k8s-coredns--66bc5c9577--pqjtt-eth0" Mar 6 03:03:27.369061 containerd[1974]: 2026-03-06 03:03:27.319 [INFO][5418] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali012d4a3ad42 ContainerID="6d27a5f8b7bb877c43e6e681c08e936c2739a3bbef9a9302d7c9e1bcb631c6fb" Namespace="kube-system" Pod="coredns-66bc5c9577-pqjtt" 
WorkloadEndpoint="ip--172--31--19--55-k8s-coredns--66bc5c9577--pqjtt-eth0" Mar 6 03:03:27.369061 containerd[1974]: 2026-03-06 03:03:27.322 [INFO][5418] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d27a5f8b7bb877c43e6e681c08e936c2739a3bbef9a9302d7c9e1bcb631c6fb" Namespace="kube-system" Pod="coredns-66bc5c9577-pqjtt" WorkloadEndpoint="ip--172--31--19--55-k8s-coredns--66bc5c9577--pqjtt-eth0" Mar 6 03:03:27.369061 containerd[1974]: 2026-03-06 03:03:27.325 [INFO][5418] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6d27a5f8b7bb877c43e6e681c08e936c2739a3bbef9a9302d7c9e1bcb631c6fb" Namespace="kube-system" Pod="coredns-66bc5c9577-pqjtt" WorkloadEndpoint="ip--172--31--19--55-k8s-coredns--66bc5c9577--pqjtt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--55-k8s-coredns--66bc5c9577--pqjtt-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d7578741-0fdd-4a51-9663-c9a667059e00", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 2, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-55", ContainerID:"6d27a5f8b7bb877c43e6e681c08e936c2739a3bbef9a9302d7c9e1bcb631c6fb", Pod:"coredns-66bc5c9577-pqjtt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali012d4a3ad42", MAC:"86:5c:70:ea:e2:8c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:03:27.369061 containerd[1974]: 2026-03-06 03:03:27.358 [INFO][5418] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6d27a5f8b7bb877c43e6e681c08e936c2739a3bbef9a9302d7c9e1bcb631c6fb" Namespace="kube-system" Pod="coredns-66bc5c9577-pqjtt" WorkloadEndpoint="ip--172--31--19--55-k8s-coredns--66bc5c9577--pqjtt-eth0" Mar 6 03:03:27.466304 systemd-networkd[1587]: cali1df8e0d50a0: Link UP Mar 6 03:03:27.466532 systemd-networkd[1587]: cali1df8e0d50a0: Gained carrier Mar 6 03:03:27.478021 containerd[1974]: time="2026-03-06T03:03:27.477962545Z" level=info msg="connecting to shim 6d27a5f8b7bb877c43e6e681c08e936c2739a3bbef9a9302d7c9e1bcb631c6fb" address="unix:///run/containerd/s/ec01a298e8dad87cf3ea1e6c810771a3ba7435fc1dc5baacffe6f6beb6929d66" namespace=k8s.io protocol=ttrpc version=3 Mar 6 03:03:27.507470 containerd[1974]: 2026-03-06 03:03:27.087 [INFO][5408] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ip--172--31--19--55-k8s-calico--apiserver--5989c68dd5--vzsrv-eth0 calico-apiserver-5989c68dd5- calico-system 5334d146-111c-4115-be1f-bc5585aaa496 859 0 2026-03-06 03:02:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5989c68dd5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-55 calico-apiserver-5989c68dd5-vzsrv eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali1df8e0d50a0 [] [] }} ContainerID="262db90161460bf8cd20349695ce6c4cca53af75e1d7fb302ae69e88606d7fc9" Namespace="calico-system" Pod="calico-apiserver-5989c68dd5-vzsrv" WorkloadEndpoint="ip--172--31--19--55-k8s-calico--apiserver--5989c68dd5--vzsrv-" Mar 6 03:03:27.507470 containerd[1974]: 2026-03-06 03:03:27.087 [INFO][5408] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="262db90161460bf8cd20349695ce6c4cca53af75e1d7fb302ae69e88606d7fc9" Namespace="calico-system" Pod="calico-apiserver-5989c68dd5-vzsrv" WorkloadEndpoint="ip--172--31--19--55-k8s-calico--apiserver--5989c68dd5--vzsrv-eth0" Mar 6 03:03:27.507470 containerd[1974]: 2026-03-06 03:03:27.225 [INFO][5435] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="262db90161460bf8cd20349695ce6c4cca53af75e1d7fb302ae69e88606d7fc9" HandleID="k8s-pod-network.262db90161460bf8cd20349695ce6c4cca53af75e1d7fb302ae69e88606d7fc9" Workload="ip--172--31--19--55-k8s-calico--apiserver--5989c68dd5--vzsrv-eth0" Mar 6 03:03:27.507470 containerd[1974]: 2026-03-06 03:03:27.263 [INFO][5435] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="262db90161460bf8cd20349695ce6c4cca53af75e1d7fb302ae69e88606d7fc9" HandleID="k8s-pod-network.262db90161460bf8cd20349695ce6c4cca53af75e1d7fb302ae69e88606d7fc9" Workload="ip--172--31--19--55-k8s-calico--apiserver--5989c68dd5--vzsrv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, 
Num6:0, HandleID:(*string)(0xc0003809a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-55", "pod":"calico-apiserver-5989c68dd5-vzsrv", "timestamp":"2026-03-06 03:03:27.225818965 +0000 UTC"}, Hostname:"ip-172-31-19-55", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003c0000)} Mar 6 03:03:27.507470 containerd[1974]: 2026-03-06 03:03:27.263 [INFO][5435] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 03:03:27.507470 containerd[1974]: 2026-03-06 03:03:27.314 [INFO][5435] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 03:03:27.507470 containerd[1974]: 2026-03-06 03:03:27.314 [INFO][5435] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-55' Mar 6 03:03:27.507470 containerd[1974]: 2026-03-06 03:03:27.347 [INFO][5435] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.262db90161460bf8cd20349695ce6c4cca53af75e1d7fb302ae69e88606d7fc9" host="ip-172-31-19-55" Mar 6 03:03:27.507470 containerd[1974]: 2026-03-06 03:03:27.372 [INFO][5435] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-19-55" Mar 6 03:03:27.507470 containerd[1974]: 2026-03-06 03:03:27.383 [INFO][5435] ipam/ipam.go 526: Trying affinity for 192.168.109.64/26 host="ip-172-31-19-55" Mar 6 03:03:27.507470 containerd[1974]: 2026-03-06 03:03:27.388 [INFO][5435] ipam/ipam.go 160: Attempting to load block cidr=192.168.109.64/26 host="ip-172-31-19-55" Mar 6 03:03:27.507470 containerd[1974]: 2026-03-06 03:03:27.394 [INFO][5435] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.109.64/26 host="ip-172-31-19-55" Mar 6 03:03:27.507470 containerd[1974]: 2026-03-06 03:03:27.395 [INFO][5435] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.109.64/26 
handle="k8s-pod-network.262db90161460bf8cd20349695ce6c4cca53af75e1d7fb302ae69e88606d7fc9" host="ip-172-31-19-55" Mar 6 03:03:27.507470 containerd[1974]: 2026-03-06 03:03:27.400 [INFO][5435] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.262db90161460bf8cd20349695ce6c4cca53af75e1d7fb302ae69e88606d7fc9 Mar 6 03:03:27.507470 containerd[1974]: 2026-03-06 03:03:27.411 [INFO][5435] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.109.64/26 handle="k8s-pod-network.262db90161460bf8cd20349695ce6c4cca53af75e1d7fb302ae69e88606d7fc9" host="ip-172-31-19-55" Mar 6 03:03:27.507470 containerd[1974]: 2026-03-06 03:03:27.450 [INFO][5435] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.109.71/26] block=192.168.109.64/26 handle="k8s-pod-network.262db90161460bf8cd20349695ce6c4cca53af75e1d7fb302ae69e88606d7fc9" host="ip-172-31-19-55" Mar 6 03:03:27.507470 containerd[1974]: 2026-03-06 03:03:27.450 [INFO][5435] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.109.71/26] handle="k8s-pod-network.262db90161460bf8cd20349695ce6c4cca53af75e1d7fb302ae69e88606d7fc9" host="ip-172-31-19-55" Mar 6 03:03:27.507470 containerd[1974]: 2026-03-06 03:03:27.450 [INFO][5435] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 6 03:03:27.507470 containerd[1974]: 2026-03-06 03:03:27.450 [INFO][5435] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.109.71/26] IPv6=[] ContainerID="262db90161460bf8cd20349695ce6c4cca53af75e1d7fb302ae69e88606d7fc9" HandleID="k8s-pod-network.262db90161460bf8cd20349695ce6c4cca53af75e1d7fb302ae69e88606d7fc9" Workload="ip--172--31--19--55-k8s-calico--apiserver--5989c68dd5--vzsrv-eth0" Mar 6 03:03:27.511815 containerd[1974]: 2026-03-06 03:03:27.460 [INFO][5408] cni-plugin/k8s.go 418: Populated endpoint ContainerID="262db90161460bf8cd20349695ce6c4cca53af75e1d7fb302ae69e88606d7fc9" Namespace="calico-system" Pod="calico-apiserver-5989c68dd5-vzsrv" WorkloadEndpoint="ip--172--31--19--55-k8s-calico--apiserver--5989c68dd5--vzsrv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--55-k8s-calico--apiserver--5989c68dd5--vzsrv-eth0", GenerateName:"calico-apiserver-5989c68dd5-", Namespace:"calico-system", SelfLink:"", UID:"5334d146-111c-4115-be1f-bc5585aaa496", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 2, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5989c68dd5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-55", ContainerID:"", Pod:"calico-apiserver-5989c68dd5-vzsrv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.71/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1df8e0d50a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:03:27.511815 containerd[1974]: 2026-03-06 03:03:27.460 [INFO][5408] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.71/32] ContainerID="262db90161460bf8cd20349695ce6c4cca53af75e1d7fb302ae69e88606d7fc9" Namespace="calico-system" Pod="calico-apiserver-5989c68dd5-vzsrv" WorkloadEndpoint="ip--172--31--19--55-k8s-calico--apiserver--5989c68dd5--vzsrv-eth0" Mar 6 03:03:27.511815 containerd[1974]: 2026-03-06 03:03:27.461 [INFO][5408] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1df8e0d50a0 ContainerID="262db90161460bf8cd20349695ce6c4cca53af75e1d7fb302ae69e88606d7fc9" Namespace="calico-system" Pod="calico-apiserver-5989c68dd5-vzsrv" WorkloadEndpoint="ip--172--31--19--55-k8s-calico--apiserver--5989c68dd5--vzsrv-eth0" Mar 6 03:03:27.511815 containerd[1974]: 2026-03-06 03:03:27.463 [INFO][5408] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="262db90161460bf8cd20349695ce6c4cca53af75e1d7fb302ae69e88606d7fc9" Namespace="calico-system" Pod="calico-apiserver-5989c68dd5-vzsrv" WorkloadEndpoint="ip--172--31--19--55-k8s-calico--apiserver--5989c68dd5--vzsrv-eth0" Mar 6 03:03:27.511815 containerd[1974]: 2026-03-06 03:03:27.464 [INFO][5408] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="262db90161460bf8cd20349695ce6c4cca53af75e1d7fb302ae69e88606d7fc9" Namespace="calico-system" Pod="calico-apiserver-5989c68dd5-vzsrv" WorkloadEndpoint="ip--172--31--19--55-k8s-calico--apiserver--5989c68dd5--vzsrv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--55-k8s-calico--apiserver--5989c68dd5--vzsrv-eth0", GenerateName:"calico-apiserver-5989c68dd5-", Namespace:"calico-system", SelfLink:"", UID:"5334d146-111c-4115-be1f-bc5585aaa496", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 2, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5989c68dd5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-55", ContainerID:"262db90161460bf8cd20349695ce6c4cca53af75e1d7fb302ae69e88606d7fc9", Pod:"calico-apiserver-5989c68dd5-vzsrv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1df8e0d50a0", MAC:"9e:67:90:5e:7d:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:03:27.511815 containerd[1974]: 2026-03-06 03:03:27.494 [INFO][5408] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="262db90161460bf8cd20349695ce6c4cca53af75e1d7fb302ae69e88606d7fc9" Namespace="calico-system" Pod="calico-apiserver-5989c68dd5-vzsrv" WorkloadEndpoint="ip--172--31--19--55-k8s-calico--apiserver--5989c68dd5--vzsrv-eth0" Mar 6 03:03:27.525747 systemd-networkd[1587]: calibfce7d28505: Gained IPv6LL Mar 6 03:03:27.554763 systemd[1]: Started 
cri-containerd-6d27a5f8b7bb877c43e6e681c08e936c2739a3bbef9a9302d7c9e1bcb631c6fb.scope - libcontainer container 6d27a5f8b7bb877c43e6e681c08e936c2739a3bbef9a9302d7c9e1bcb631c6fb. Mar 6 03:03:27.683554 containerd[1974]: time="2026-03-06T03:03:27.683422918Z" level=info msg="connecting to shim 262db90161460bf8cd20349695ce6c4cca53af75e1d7fb302ae69e88606d7fc9" address="unix:///run/containerd/s/801922bcf894718f724e9a524a72d774f5d59ea74d1c51839868a713ceb5f4fa" namespace=k8s.io protocol=ttrpc version=3 Mar 6 03:03:27.717085 systemd-networkd[1587]: cali13ab6802d6d: Gained IPv6LL Mar 6 03:03:27.724039 containerd[1974]: time="2026-03-06T03:03:27.723886064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pqjtt,Uid:d7578741-0fdd-4a51-9663-c9a667059e00,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d27a5f8b7bb877c43e6e681c08e936c2739a3bbef9a9302d7c9e1bcb631c6fb\"" Mar 6 03:03:27.741407 containerd[1974]: time="2026-03-06T03:03:27.740301537Z" level=info msg="CreateContainer within sandbox \"6d27a5f8b7bb877c43e6e681c08e936c2739a3bbef9a9302d7c9e1bcb631c6fb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 6 03:03:27.771939 systemd[1]: Started cri-containerd-262db90161460bf8cd20349695ce6c4cca53af75e1d7fb302ae69e88606d7fc9.scope - libcontainer container 262db90161460bf8cd20349695ce6c4cca53af75e1d7fb302ae69e88606d7fc9. 
Mar 6 03:03:27.813870 containerd[1974]: time="2026-03-06T03:03:27.813798407Z" level=info msg="Container 0b96a6f94fcbe94a02e12886cd4e95e0024058807a2c9a3d7a41d0589dc6c611: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:03:27.840264 containerd[1974]: time="2026-03-06T03:03:27.840150893Z" level=info msg="CreateContainer within sandbox \"6d27a5f8b7bb877c43e6e681c08e936c2739a3bbef9a9302d7c9e1bcb631c6fb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0b96a6f94fcbe94a02e12886cd4e95e0024058807a2c9a3d7a41d0589dc6c611\"" Mar 6 03:03:27.844795 containerd[1974]: time="2026-03-06T03:03:27.844714245Z" level=info msg="StartContainer for \"0b96a6f94fcbe94a02e12886cd4e95e0024058807a2c9a3d7a41d0589dc6c611\"" Mar 6 03:03:27.845973 containerd[1974]: time="2026-03-06T03:03:27.845936366Z" level=info msg="connecting to shim 0b96a6f94fcbe94a02e12886cd4e95e0024058807a2c9a3d7a41d0589dc6c611" address="unix:///run/containerd/s/ec01a298e8dad87cf3ea1e6c810771a3ba7435fc1dc5baacffe6f6beb6929d66" protocol=ttrpc version=3 Mar 6 03:03:27.906127 containerd[1974]: time="2026-03-06T03:03:27.904405595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5989c68dd5-vzsrv,Uid:5334d146-111c-4115-be1f-bc5585aaa496,Namespace:calico-system,Attempt:0,} returns sandbox id \"262db90161460bf8cd20349695ce6c4cca53af75e1d7fb302ae69e88606d7fc9\"" Mar 6 03:03:27.923612 containerd[1974]: time="2026-03-06T03:03:27.923238059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5wnzw,Uid:3b6ac159-4573-487d-821b-a2953dcf4b25,Namespace:calico-system,Attempt:0,}" Mar 6 03:03:27.926771 systemd[1]: Started cri-containerd-0b96a6f94fcbe94a02e12886cd4e95e0024058807a2c9a3d7a41d0589dc6c611.scope - libcontainer container 0b96a6f94fcbe94a02e12886cd4e95e0024058807a2c9a3d7a41d0589dc6c611. 
Mar 6 03:03:28.046500 containerd[1974]: time="2026-03-06T03:03:28.046398706Z" level=info msg="StartContainer for \"0b96a6f94fcbe94a02e12886cd4e95e0024058807a2c9a3d7a41d0589dc6c611\" returns successfully" Mar 6 03:03:28.330918 systemd-networkd[1587]: calia38d8a90369: Link UP Mar 6 03:03:28.339724 systemd-networkd[1587]: calia38d8a90369: Gained carrier Mar 6 03:03:28.363335 kubelet[3516]: I0306 03:03:28.363258 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-pqjtt" podStartSLOduration=61.363236784 podStartE2EDuration="1m1.363236784s" podCreationTimestamp="2026-03-06 03:02:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 03:03:28.317284981 +0000 UTC m=+67.585351480" watchObservedRunningTime="2026-03-06 03:03:28.363236784 +0000 UTC m=+67.631303283" Mar 6 03:03:28.379222 containerd[1974]: 2026-03-06 03:03:28.069 [INFO][5591] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--55-k8s-csi--node--driver--5wnzw-eth0 csi-node-driver- calico-system 3b6ac159-4573-487d-821b-a2953dcf4b25 688 0 2026-03-06 03:02:43 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-19-55 csi-node-driver-5wnzw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia38d8a90369 [] [] }} ContainerID="177fb7d9c0107febf07f1793901a75b5b1c306e928d3f2c252f588b0974ab09e" Namespace="calico-system" Pod="csi-node-driver-5wnzw" WorkloadEndpoint="ip--172--31--19--55-k8s-csi--node--driver--5wnzw-" Mar 6 03:03:28.379222 containerd[1974]: 2026-03-06 03:03:28.070 [INFO][5591] cni-plugin/k8s.go 74: 
Extracted identifiers for CmdAddK8s ContainerID="177fb7d9c0107febf07f1793901a75b5b1c306e928d3f2c252f588b0974ab09e" Namespace="calico-system" Pod="csi-node-driver-5wnzw" WorkloadEndpoint="ip--172--31--19--55-k8s-csi--node--driver--5wnzw-eth0" Mar 6 03:03:28.379222 containerd[1974]: 2026-03-06 03:03:28.183 [INFO][5621] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="177fb7d9c0107febf07f1793901a75b5b1c306e928d3f2c252f588b0974ab09e" HandleID="k8s-pod-network.177fb7d9c0107febf07f1793901a75b5b1c306e928d3f2c252f588b0974ab09e" Workload="ip--172--31--19--55-k8s-csi--node--driver--5wnzw-eth0" Mar 6 03:03:28.379222 containerd[1974]: 2026-03-06 03:03:28.198 [INFO][5621] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="177fb7d9c0107febf07f1793901a75b5b1c306e928d3f2c252f588b0974ab09e" HandleID="k8s-pod-network.177fb7d9c0107febf07f1793901a75b5b1c306e928d3f2c252f588b0974ab09e" Workload="ip--172--31--19--55-k8s-csi--node--driver--5wnzw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fb7d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-55", "pod":"csi-node-driver-5wnzw", "timestamp":"2026-03-06 03:03:28.183382096 +0000 UTC"}, Hostname:"ip-172-31-19-55", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000eb1e0)} Mar 6 03:03:28.379222 containerd[1974]: 2026-03-06 03:03:28.198 [INFO][5621] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 03:03:28.379222 containerd[1974]: 2026-03-06 03:03:28.198 [INFO][5621] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 03:03:28.379222 containerd[1974]: 2026-03-06 03:03:28.198 [INFO][5621] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-55' Mar 6 03:03:28.379222 containerd[1974]: 2026-03-06 03:03:28.202 [INFO][5621] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.177fb7d9c0107febf07f1793901a75b5b1c306e928d3f2c252f588b0974ab09e" host="ip-172-31-19-55" Mar 6 03:03:28.379222 containerd[1974]: 2026-03-06 03:03:28.213 [INFO][5621] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-19-55" Mar 6 03:03:28.379222 containerd[1974]: 2026-03-06 03:03:28.227 [INFO][5621] ipam/ipam.go 526: Trying affinity for 192.168.109.64/26 host="ip-172-31-19-55" Mar 6 03:03:28.379222 containerd[1974]: 2026-03-06 03:03:28.232 [INFO][5621] ipam/ipam.go 160: Attempting to load block cidr=192.168.109.64/26 host="ip-172-31-19-55" Mar 6 03:03:28.379222 containerd[1974]: 2026-03-06 03:03:28.235 [INFO][5621] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.109.64/26 host="ip-172-31-19-55" Mar 6 03:03:28.379222 containerd[1974]: 2026-03-06 03:03:28.237 [INFO][5621] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.109.64/26 handle="k8s-pod-network.177fb7d9c0107febf07f1793901a75b5b1c306e928d3f2c252f588b0974ab09e" host="ip-172-31-19-55" Mar 6 03:03:28.379222 containerd[1974]: 2026-03-06 03:03:28.247 [INFO][5621] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.177fb7d9c0107febf07f1793901a75b5b1c306e928d3f2c252f588b0974ab09e Mar 6 03:03:28.379222 containerd[1974]: 2026-03-06 03:03:28.277 [INFO][5621] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.109.64/26 handle="k8s-pod-network.177fb7d9c0107febf07f1793901a75b5b1c306e928d3f2c252f588b0974ab09e" host="ip-172-31-19-55" Mar 6 03:03:28.379222 containerd[1974]: 2026-03-06 03:03:28.301 [INFO][5621] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.109.72/26] block=192.168.109.64/26 
handle="k8s-pod-network.177fb7d9c0107febf07f1793901a75b5b1c306e928d3f2c252f588b0974ab09e" host="ip-172-31-19-55" Mar 6 03:03:28.379222 containerd[1974]: 2026-03-06 03:03:28.302 [INFO][5621] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.109.72/26] handle="k8s-pod-network.177fb7d9c0107febf07f1793901a75b5b1c306e928d3f2c252f588b0974ab09e" host="ip-172-31-19-55" Mar 6 03:03:28.379222 containerd[1974]: 2026-03-06 03:03:28.303 [INFO][5621] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 03:03:28.379222 containerd[1974]: 2026-03-06 03:03:28.304 [INFO][5621] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.109.72/26] IPv6=[] ContainerID="177fb7d9c0107febf07f1793901a75b5b1c306e928d3f2c252f588b0974ab09e" HandleID="k8s-pod-network.177fb7d9c0107febf07f1793901a75b5b1c306e928d3f2c252f588b0974ab09e" Workload="ip--172--31--19--55-k8s-csi--node--driver--5wnzw-eth0" Mar 6 03:03:28.382145 containerd[1974]: 2026-03-06 03:03:28.312 [INFO][5591] cni-plugin/k8s.go 418: Populated endpoint ContainerID="177fb7d9c0107febf07f1793901a75b5b1c306e928d3f2c252f588b0974ab09e" Namespace="calico-system" Pod="csi-node-driver-5wnzw" WorkloadEndpoint="ip--172--31--19--55-k8s-csi--node--driver--5wnzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--55-k8s-csi--node--driver--5wnzw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3b6ac159-4573-487d-821b-a2953dcf4b25", ResourceVersion:"688", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 2, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-55", ContainerID:"", Pod:"csi-node-driver-5wnzw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.109.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia38d8a90369", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:03:28.382145 containerd[1974]: 2026-03-06 03:03:28.312 [INFO][5591] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.72/32] ContainerID="177fb7d9c0107febf07f1793901a75b5b1c306e928d3f2c252f588b0974ab09e" Namespace="calico-system" Pod="csi-node-driver-5wnzw" WorkloadEndpoint="ip--172--31--19--55-k8s-csi--node--driver--5wnzw-eth0" Mar 6 03:03:28.382145 containerd[1974]: 2026-03-06 03:03:28.312 [INFO][5591] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia38d8a90369 ContainerID="177fb7d9c0107febf07f1793901a75b5b1c306e928d3f2c252f588b0974ab09e" Namespace="calico-system" Pod="csi-node-driver-5wnzw" WorkloadEndpoint="ip--172--31--19--55-k8s-csi--node--driver--5wnzw-eth0" Mar 6 03:03:28.382145 containerd[1974]: 2026-03-06 03:03:28.339 [INFO][5591] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="177fb7d9c0107febf07f1793901a75b5b1c306e928d3f2c252f588b0974ab09e" Namespace="calico-system" Pod="csi-node-driver-5wnzw" WorkloadEndpoint="ip--172--31--19--55-k8s-csi--node--driver--5wnzw-eth0" Mar 6 03:03:28.382145 containerd[1974]: 2026-03-06 03:03:28.346 [INFO][5591] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="177fb7d9c0107febf07f1793901a75b5b1c306e928d3f2c252f588b0974ab09e" Namespace="calico-system" Pod="csi-node-driver-5wnzw" WorkloadEndpoint="ip--172--31--19--55-k8s-csi--node--driver--5wnzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--55-k8s-csi--node--driver--5wnzw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3b6ac159-4573-487d-821b-a2953dcf4b25", ResourceVersion:"688", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 2, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-55", ContainerID:"177fb7d9c0107febf07f1793901a75b5b1c306e928d3f2c252f588b0974ab09e", Pod:"csi-node-driver-5wnzw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.109.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia38d8a90369", MAC:"e6:58:20:de:3c:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:03:28.382145 containerd[1974]: 2026-03-06 03:03:28.370 [INFO][5591] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="177fb7d9c0107febf07f1793901a75b5b1c306e928d3f2c252f588b0974ab09e" 
Namespace="calico-system" Pod="csi-node-driver-5wnzw" WorkloadEndpoint="ip--172--31--19--55-k8s-csi--node--driver--5wnzw-eth0" Mar 6 03:03:28.458828 containerd[1974]: time="2026-03-06T03:03:28.458775588Z" level=info msg="connecting to shim 177fb7d9c0107febf07f1793901a75b5b1c306e928d3f2c252f588b0974ab09e" address="unix:///run/containerd/s/86a4153b5b036c146725b7035851ac09c22fd7523022c3c0e3786bd351d41d38" namespace=k8s.io protocol=ttrpc version=3 Mar 6 03:03:28.511623 systemd[1]: Started cri-containerd-177fb7d9c0107febf07f1793901a75b5b1c306e928d3f2c252f588b0974ab09e.scope - libcontainer container 177fb7d9c0107febf07f1793901a75b5b1c306e928d3f2c252f588b0974ab09e. Mar 6 03:03:28.612495 systemd-networkd[1587]: cali1df8e0d50a0: Gained IPv6LL Mar 6 03:03:28.612833 systemd-networkd[1587]: cali012d4a3ad42: Gained IPv6LL Mar 6 03:03:28.671691 containerd[1974]: time="2026-03-06T03:03:28.671578151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5wnzw,Uid:3b6ac159-4573-487d-821b-a2953dcf4b25,Namespace:calico-system,Attempt:0,} returns sandbox id \"177fb7d9c0107febf07f1793901a75b5b1c306e928d3f2c252f588b0974ab09e\"" Mar 6 03:03:28.902191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2261574199.mount: Deactivated successfully. 
Mar 6 03:03:29.556530 containerd[1974]: time="2026-03-06T03:03:29.556468728Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:29.558360 containerd[1974]: time="2026-03-06T03:03:29.558224205Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 6 03:03:29.560961 containerd[1974]: time="2026-03-06T03:03:29.560919623Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:29.564846 containerd[1974]: time="2026-03-06T03:03:29.564515688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:29.565402 containerd[1974]: time="2026-03-06T03:03:29.565368219Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 5.343828227s" Mar 6 03:03:29.565499 containerd[1974]: time="2026-03-06T03:03:29.565410451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 6 03:03:29.566796 containerd[1974]: time="2026-03-06T03:03:29.566720496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 6 03:03:29.573775 containerd[1974]: time="2026-03-06T03:03:29.573736064Z" level=info msg="CreateContainer within sandbox 
\"296f3c44208bdb5483d487ade6375dd519ca32a64b34993ca7dac50bf6f62ae0\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 6 03:03:29.699131 containerd[1974]: time="2026-03-06T03:03:29.698257414Z" level=info msg="Container c41584c2cf16fd41a6e348e73d2136ae8739a659679c909763c5f66c18470373: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:03:29.715762 containerd[1974]: time="2026-03-06T03:03:29.715711873Z" level=info msg="CreateContainer within sandbox \"296f3c44208bdb5483d487ade6375dd519ca32a64b34993ca7dac50bf6f62ae0\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"c41584c2cf16fd41a6e348e73d2136ae8739a659679c909763c5f66c18470373\"" Mar 6 03:03:29.717322 containerd[1974]: time="2026-03-06T03:03:29.717285925Z" level=info msg="StartContainer for \"c41584c2cf16fd41a6e348e73d2136ae8739a659679c909763c5f66c18470373\"" Mar 6 03:03:29.719277 containerd[1974]: time="2026-03-06T03:03:29.719240170Z" level=info msg="connecting to shim c41584c2cf16fd41a6e348e73d2136ae8739a659679c909763c5f66c18470373" address="unix:///run/containerd/s/792658635613af17a414cc238f1ec9a23d0f8900557103f560043647a87202bf" protocol=ttrpc version=3 Mar 6 03:03:29.764405 systemd-networkd[1587]: calia38d8a90369: Gained IPv6LL Mar 6 03:03:29.768882 systemd[1]: Started cri-containerd-c41584c2cf16fd41a6e348e73d2136ae8739a659679c909763c5f66c18470373.scope - libcontainer container c41584c2cf16fd41a6e348e73d2136ae8739a659679c909763c5f66c18470373. Mar 6 03:03:29.904064 containerd[1974]: time="2026-03-06T03:03:29.904024942Z" level=info msg="StartContainer for \"c41584c2cf16fd41a6e348e73d2136ae8739a659679c909763c5f66c18470373\" returns successfully" Mar 6 03:03:30.320127 systemd[1]: Started sshd@10-172.31.19.55:22-68.220.241.50:38126.service - OpenSSH per-connection server daemon (68.220.241.50:38126). 
Mar 6 03:03:30.887597 sshd[5776]: Accepted publickey for core from 68.220.241.50 port 38126 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY Mar 6 03:03:30.891754 sshd-session[5776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 03:03:30.898337 systemd-logind[1959]: New session 10 of user core. Mar 6 03:03:30.904455 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 6 03:03:31.489133 kubelet[3516]: I0306 03:03:31.488410 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-5dnxg" podStartSLOduration=44.142610968 podStartE2EDuration="49.488382765s" podCreationTimestamp="2026-03-06 03:02:42 +0000 UTC" firstStartedPulling="2026-03-06 03:03:24.220720814 +0000 UTC m=+63.488787303" lastFinishedPulling="2026-03-06 03:03:29.566492608 +0000 UTC m=+68.834559100" observedRunningTime="2026-03-06 03:03:30.32150862 +0000 UTC m=+69.589575119" watchObservedRunningTime="2026-03-06 03:03:31.488382765 +0000 UTC m=+70.756449275" Mar 6 03:03:32.227128 sshd[5799]: Connection closed by 68.220.241.50 port 38126 Mar 6 03:03:32.228271 sshd-session[5776]: pam_unix(sshd:session): session closed for user core Mar 6 03:03:32.243484 systemd[1]: sshd@10-172.31.19.55:22-68.220.241.50:38126.service: Deactivated successfully. Mar 6 03:03:32.250716 systemd[1]: session-10.scope: Deactivated successfully. Mar 6 03:03:32.254244 systemd-logind[1959]: Session 10 logged out. Waiting for processes to exit. Mar 6 03:03:32.256566 systemd-logind[1959]: Removed session 10. 
Mar 6 03:03:32.294451 ntpd[1949]: Listen normally on 9 cali7ef0b620530 [fe80::ecee:eeff:feee:eeee%8]:123 Mar 6 03:03:32.294512 ntpd[1949]: Listen normally on 10 cali1351078169d [fe80::ecee:eeff:feee:eeee%9]:123 Mar 6 03:03:32.294538 ntpd[1949]: Listen normally on 11 cali13ab6802d6d [fe80::ecee:eeff:feee:eeee%10]:123 Mar 6 03:03:32.294564 ntpd[1949]: Listen normally on 12 calibfce7d28505 [fe80::ecee:eeff:feee:eeee%11]:123 Mar 6 03:03:32.294589 ntpd[1949]: Listen normally on 13 cali012d4a3ad42 [fe80::ecee:eeff:feee:eeee%12]:123 Mar 6 03:03:32.294617 ntpd[1949]: Listen normally on 14 cali1df8e0d50a0 [fe80::ecee:eeff:feee:eeee%13]:123 Mar 6 03:03:32.294642 ntpd[1949]: Listen normally on 15 calia38d8a90369 [fe80::ecee:eeff:feee:eeee%14]:123 Mar 6 03:03:33.380666 containerd[1974]: time="2026-03-06T03:03:33.380606517Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:33.383189 containerd[1974]: time="2026-03-06T03:03:33.383143452Z" level=info
msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 6 03:03:33.385407 containerd[1974]: time="2026-03-06T03:03:33.385342054Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:33.388862 containerd[1974]: time="2026-03-06T03:03:33.388793607Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:33.389716 containerd[1974]: time="2026-03-06T03:03:33.389479255Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 3.822687269s" Mar 6 03:03:33.389716 containerd[1974]: time="2026-03-06T03:03:33.389519295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 6 03:03:33.391481 containerd[1974]: time="2026-03-06T03:03:33.391256000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 6 03:03:33.509733 containerd[1974]: time="2026-03-06T03:03:33.509685328Z" level=info msg="CreateContainer within sandbox \"5203c494d9aae8f7e32a959cb23ddab6d7ce97edf696e4c7a588470a5d7b5b26\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 6 03:03:33.538074 containerd[1974]: time="2026-03-06T03:03:33.537089884Z" level=info msg="Container e51bcfbf3d80a5f6bb34b67527d593f608d3b3180c39dbbdd785ab7cebc94213: CDI devices from 
CRI Config.CDIDevices: []" Mar 6 03:03:33.546566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3085946284.mount: Deactivated successfully. Mar 6 03:03:33.577657 containerd[1974]: time="2026-03-06T03:03:33.577612273Z" level=info msg="CreateContainer within sandbox \"5203c494d9aae8f7e32a959cb23ddab6d7ce97edf696e4c7a588470a5d7b5b26\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e51bcfbf3d80a5f6bb34b67527d593f608d3b3180c39dbbdd785ab7cebc94213\"" Mar 6 03:03:33.578760 containerd[1974]: time="2026-03-06T03:03:33.578711269Z" level=info msg="StartContainer for \"e51bcfbf3d80a5f6bb34b67527d593f608d3b3180c39dbbdd785ab7cebc94213\"" Mar 6 03:03:33.580547 containerd[1974]: time="2026-03-06T03:03:33.580508008Z" level=info msg="connecting to shim e51bcfbf3d80a5f6bb34b67527d593f608d3b3180c39dbbdd785ab7cebc94213" address="unix:///run/containerd/s/d945d9a7716af2cafcfa54958a3e34f99d0b94da8676bf459255f6569afab9aa" protocol=ttrpc version=3 Mar 6 03:03:33.654343 systemd[1]: Started cri-containerd-e51bcfbf3d80a5f6bb34b67527d593f608d3b3180c39dbbdd785ab7cebc94213.scope - libcontainer container e51bcfbf3d80a5f6bb34b67527d593f608d3b3180c39dbbdd785ab7cebc94213. 
Mar 6 03:03:33.767469 containerd[1974]: time="2026-03-06T03:03:33.763814083Z" level=info msg="StartContainer for \"e51bcfbf3d80a5f6bb34b67527d593f608d3b3180c39dbbdd785ab7cebc94213\" returns successfully" Mar 6 03:03:34.327803 kubelet[3516]: I0306 03:03:34.327740 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-f7cf46454-n5whp" podStartSLOduration=44.343517625 podStartE2EDuration="51.327721361s" podCreationTimestamp="2026-03-06 03:02:43 +0000 UTC" firstStartedPulling="2026-03-06 03:03:26.406684664 +0000 UTC m=+65.674751155" lastFinishedPulling="2026-03-06 03:03:33.390888411 +0000 UTC m=+72.658954891" observedRunningTime="2026-03-06 03:03:34.327180774 +0000 UTC m=+73.595247272" watchObservedRunningTime="2026-03-06 03:03:34.327721361 +0000 UTC m=+73.595787857" Mar 6 03:03:36.665385 containerd[1974]: time="2026-03-06T03:03:36.665335390Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:36.666555 containerd[1974]: time="2026-03-06T03:03:36.666520033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 6 03:03:36.667783 containerd[1974]: time="2026-03-06T03:03:36.667728325Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:36.670610 containerd[1974]: time="2026-03-06T03:03:36.670575780Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:36.671378 containerd[1974]: time="2026-03-06T03:03:36.671341315Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id 
\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.280055667s" Mar 6 03:03:36.671378 containerd[1974]: time="2026-03-06T03:03:36.671375354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 6 03:03:36.719354 containerd[1974]: time="2026-03-06T03:03:36.719222213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 6 03:03:36.740289 containerd[1974]: time="2026-03-06T03:03:36.740084620Z" level=info msg="CreateContainer within sandbox \"28917db1dc4fdc84a91f47f3cd125a86bd85e3ade8fce6d399752d6236af50c0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 6 03:03:36.759154 containerd[1974]: time="2026-03-06T03:03:36.757918786Z" level=info msg="Container 9926db28e298899c5790852e50f5fedc18c1116843fc5903545c24f88674392c: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:03:36.767946 containerd[1974]: time="2026-03-06T03:03:36.767895707Z" level=info msg="CreateContainer within sandbox \"28917db1dc4fdc84a91f47f3cd125a86bd85e3ade8fce6d399752d6236af50c0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9926db28e298899c5790852e50f5fedc18c1116843fc5903545c24f88674392c\"" Mar 6 03:03:36.768895 containerd[1974]: time="2026-03-06T03:03:36.768871118Z" level=info msg="StartContainer for \"9926db28e298899c5790852e50f5fedc18c1116843fc5903545c24f88674392c\"" Mar 6 03:03:36.770514 containerd[1974]: time="2026-03-06T03:03:36.770488180Z" level=info msg="connecting to shim 9926db28e298899c5790852e50f5fedc18c1116843fc5903545c24f88674392c" address="unix:///run/containerd/s/5e9f1fc5374bda941232d466cfe41419067f437e108f022bae37615ec3ec0337" protocol=ttrpc version=3 Mar 6 
03:03:36.806353 systemd[1]: Started cri-containerd-9926db28e298899c5790852e50f5fedc18c1116843fc5903545c24f88674392c.scope - libcontainer container 9926db28e298899c5790852e50f5fedc18c1116843fc5903545c24f88674392c. Mar 6 03:03:36.874217 containerd[1974]: time="2026-03-06T03:03:36.874096039Z" level=info msg="StartContainer for \"9926db28e298899c5790852e50f5fedc18c1116843fc5903545c24f88674392c\" returns successfully" Mar 6 03:03:37.062206 containerd[1974]: time="2026-03-06T03:03:37.061719582Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:37.064127 containerd[1974]: time="2026-03-06T03:03:37.064083719Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 6 03:03:37.067035 containerd[1974]: time="2026-03-06T03:03:37.066981440Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 347.685053ms" Mar 6 03:03:37.067035 containerd[1974]: time="2026-03-06T03:03:37.067018777Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 6 03:03:37.069300 containerd[1974]: time="2026-03-06T03:03:37.068634243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 6 03:03:37.079273 containerd[1974]: time="2026-03-06T03:03:37.079231378Z" level=info msg="CreateContainer within sandbox \"262db90161460bf8cd20349695ce6c4cca53af75e1d7fb302ae69e88606d7fc9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 6 03:03:37.244006 containerd[1974]: 
time="2026-03-06T03:03:37.221188336Z" level=info msg="Container 0335c9b8fce210ef9b7c9af45560f9a047e69bb68a87b7171bcb839f41ebf1ed: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:03:37.257844 containerd[1974]: time="2026-03-06T03:03:37.257799997Z" level=info msg="CreateContainer within sandbox \"262db90161460bf8cd20349695ce6c4cca53af75e1d7fb302ae69e88606d7fc9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0335c9b8fce210ef9b7c9af45560f9a047e69bb68a87b7171bcb839f41ebf1ed\"" Mar 6 03:03:37.259979 containerd[1974]: time="2026-03-06T03:03:37.259939018Z" level=info msg="StartContainer for \"0335c9b8fce210ef9b7c9af45560f9a047e69bb68a87b7171bcb839f41ebf1ed\"" Mar 6 03:03:37.261917 containerd[1974]: time="2026-03-06T03:03:37.261878653Z" level=info msg="connecting to shim 0335c9b8fce210ef9b7c9af45560f9a047e69bb68a87b7171bcb839f41ebf1ed" address="unix:///run/containerd/s/801922bcf894718f724e9a524a72d774f5d59ea74d1c51839868a713ceb5f4fa" protocol=ttrpc version=3 Mar 6 03:03:37.300379 systemd[1]: Started cri-containerd-0335c9b8fce210ef9b7c9af45560f9a047e69bb68a87b7171bcb839f41ebf1ed.scope - libcontainer container 0335c9b8fce210ef9b7c9af45560f9a047e69bb68a87b7171bcb839f41ebf1ed. Mar 6 03:03:37.325443 systemd[1]: Started sshd@11-172.31.19.55:22-68.220.241.50:40260.service - OpenSSH per-connection server daemon (68.220.241.50:40260). 
Mar 6 03:03:37.524346 containerd[1974]: time="2026-03-06T03:03:37.524305691Z" level=info msg="StartContainer for \"0335c9b8fce210ef9b7c9af45560f9a047e69bb68a87b7171bcb839f41ebf1ed\" returns successfully" Mar 6 03:03:37.555876 kubelet[3516]: I0306 03:03:37.555802 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5989c68dd5-n5d4q" podStartSLOduration=45.296206811 podStartE2EDuration="55.555775036s" podCreationTimestamp="2026-03-06 03:02:42 +0000 UTC" firstStartedPulling="2026-03-06 03:03:26.459423832 +0000 UTC m=+65.727490308" lastFinishedPulling="2026-03-06 03:03:36.718992032 +0000 UTC m=+75.987058533" observedRunningTime="2026-03-06 03:03:37.550486837 +0000 UTC m=+76.818553328" watchObservedRunningTime="2026-03-06 03:03:37.555775036 +0000 UTC m=+76.823841534" Mar 6 03:03:37.927604 sshd[5976]: Accepted publickey for core from 68.220.241.50 port 40260 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY Mar 6 03:03:37.933267 sshd-session[5976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 03:03:37.944176 systemd-logind[1959]: New session 11 of user core. Mar 6 03:03:37.949328 systemd[1]: Started session-11.scope - Session 11 of User core. 
Mar 6 03:03:38.580383 kubelet[3516]: I0306 03:03:38.580307 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5989c68dd5-vzsrv" podStartSLOduration=47.428712151 podStartE2EDuration="56.580160097s" podCreationTimestamp="2026-03-06 03:02:42 +0000 UTC" firstStartedPulling="2026-03-06 03:03:27.916629051 +0000 UTC m=+67.184695532" lastFinishedPulling="2026-03-06 03:03:37.068076988 +0000 UTC m=+76.336143478" observedRunningTime="2026-03-06 03:03:38.580033073 +0000 UTC m=+77.848099563" watchObservedRunningTime="2026-03-06 03:03:38.580160097 +0000 UTC m=+77.848226596" Mar 6 03:03:39.585315 sshd[5996]: Connection closed by 68.220.241.50 port 40260 Mar 6 03:03:39.586076 sshd-session[5976]: pam_unix(sshd:session): session closed for user core Mar 6 03:03:39.611845 systemd[1]: sshd@11-172.31.19.55:22-68.220.241.50:40260.service: Deactivated successfully. Mar 6 03:03:39.617295 systemd[1]: session-11.scope: Deactivated successfully. Mar 6 03:03:39.621605 systemd-logind[1959]: Session 11 logged out. Waiting for processes to exit. Mar 6 03:03:39.625307 systemd-logind[1959]: Removed session 11. 
Mar 6 03:03:39.644501 containerd[1974]: time="2026-03-06T03:03:39.644453028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:39.649590 containerd[1974]: time="2026-03-06T03:03:39.649523491Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 6 03:03:39.650164 kubelet[3516]: I0306 03:03:39.649852 3516 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 6 03:03:39.653123 containerd[1974]: time="2026-03-06T03:03:39.651940056Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:39.659571 containerd[1974]: time="2026-03-06T03:03:39.659528025Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:39.662929 containerd[1974]: time="2026-03-06T03:03:39.662484341Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 2.593818426s" Mar 6 03:03:39.663182 containerd[1974]: time="2026-03-06T03:03:39.663027261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 6 03:03:39.758024 containerd[1974]: time="2026-03-06T03:03:39.757976507Z" level=info msg="CreateContainer within sandbox \"177fb7d9c0107febf07f1793901a75b5b1c306e928d3f2c252f588b0974ab09e\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 6 03:03:39.863132 containerd[1974]: time="2026-03-06T03:03:39.862222363Z" level=info msg="Container 29e134350830eedeb35beab845d29d3e0a34a71fa8e3ad52084187aad799ceb0: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:03:39.905456 containerd[1974]: time="2026-03-06T03:03:39.905392260Z" level=info msg="CreateContainer within sandbox \"177fb7d9c0107febf07f1793901a75b5b1c306e928d3f2c252f588b0974ab09e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"29e134350830eedeb35beab845d29d3e0a34a71fa8e3ad52084187aad799ceb0\"" Mar 6 03:03:39.909326 containerd[1974]: time="2026-03-06T03:03:39.909043271Z" level=info msg="StartContainer for \"29e134350830eedeb35beab845d29d3e0a34a71fa8e3ad52084187aad799ceb0\"" Mar 6 03:03:39.915046 containerd[1974]: time="2026-03-06T03:03:39.914997154Z" level=info msg="connecting to shim 29e134350830eedeb35beab845d29d3e0a34a71fa8e3ad52084187aad799ceb0" address="unix:///run/containerd/s/86a4153b5b036c146725b7035851ac09c22fd7523022c3c0e3786bd351d41d38" protocol=ttrpc version=3 Mar 6 03:03:39.975334 systemd[1]: Started cri-containerd-29e134350830eedeb35beab845d29d3e0a34a71fa8e3ad52084187aad799ceb0.scope - libcontainer container 29e134350830eedeb35beab845d29d3e0a34a71fa8e3ad52084187aad799ceb0. 
Mar 6 03:03:40.092838 containerd[1974]: time="2026-03-06T03:03:40.092790809Z" level=info msg="StartContainer for \"29e134350830eedeb35beab845d29d3e0a34a71fa8e3ad52084187aad799ceb0\" returns successfully" Mar 6 03:03:40.110083 containerd[1974]: time="2026-03-06T03:03:40.110035634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 6 03:03:41.951869 containerd[1974]: time="2026-03-06T03:03:41.951807322Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:41.954411 containerd[1974]: time="2026-03-06T03:03:41.954354749Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 6 03:03:41.957261 containerd[1974]: time="2026-03-06T03:03:41.957028266Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:41.962194 containerd[1974]: time="2026-03-06T03:03:41.961561622Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:41.962507 containerd[1974]: time="2026-03-06T03:03:41.962466427Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.852376304s" Mar 6 03:03:41.962634 containerd[1974]: time="2026-03-06T03:03:41.962612850Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 6 03:03:41.991804 containerd[1974]: time="2026-03-06T03:03:41.991753985Z" level=info msg="CreateContainer within sandbox \"177fb7d9c0107febf07f1793901a75b5b1c306e928d3f2c252f588b0974ab09e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 6 03:03:42.009601 containerd[1974]: time="2026-03-06T03:03:42.009535441Z" level=info msg="Container dc8ba10b4035116b9bddf490f1c83186778726d1f99ff86159f5aec0047a01dc: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:03:42.033066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount757967397.mount: Deactivated successfully. Mar 6 03:03:42.055077 containerd[1974]: time="2026-03-06T03:03:42.055020194Z" level=info msg="CreateContainer within sandbox \"177fb7d9c0107febf07f1793901a75b5b1c306e928d3f2c252f588b0974ab09e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"dc8ba10b4035116b9bddf490f1c83186778726d1f99ff86159f5aec0047a01dc\"" Mar 6 03:03:42.058121 containerd[1974]: time="2026-03-06T03:03:42.057541872Z" level=info msg="StartContainer for \"dc8ba10b4035116b9bddf490f1c83186778726d1f99ff86159f5aec0047a01dc\"" Mar 6 03:03:42.068423 containerd[1974]: time="2026-03-06T03:03:42.068374048Z" level=info msg="connecting to shim dc8ba10b4035116b9bddf490f1c83186778726d1f99ff86159f5aec0047a01dc" address="unix:///run/containerd/s/86a4153b5b036c146725b7035851ac09c22fd7523022c3c0e3786bd351d41d38" protocol=ttrpc version=3 Mar 6 03:03:42.104541 systemd[1]: Started cri-containerd-dc8ba10b4035116b9bddf490f1c83186778726d1f99ff86159f5aec0047a01dc.scope - libcontainer container dc8ba10b4035116b9bddf490f1c83186778726d1f99ff86159f5aec0047a01dc. 
Mar 6 03:03:42.249814 containerd[1974]: time="2026-03-06T03:03:42.248573881Z" level=info msg="StartContainer for \"dc8ba10b4035116b9bddf490f1c83186778726d1f99ff86159f5aec0047a01dc\" returns successfully" Mar 6 03:03:42.755138 kubelet[3516]: I0306 03:03:42.754717 3516 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 6 03:03:42.790009 kubelet[3516]: I0306 03:03:42.789778 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-5wnzw" podStartSLOduration=46.491992586 podStartE2EDuration="59.789741322s" podCreationTimestamp="2026-03-06 03:02:43 +0000 UTC" firstStartedPulling="2026-03-06 03:03:28.674794243 +0000 UTC m=+67.942860735" lastFinishedPulling="2026-03-06 03:03:41.972542995 +0000 UTC m=+81.240609471" observedRunningTime="2026-03-06 03:03:42.683932293 +0000 UTC m=+81.951998794" watchObservedRunningTime="2026-03-06 03:03:42.789741322 +0000 UTC m=+82.057807821" Mar 6 03:03:43.183448 kubelet[3516]: I0306 03:03:43.181772 3516 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 6 03:03:43.183616 kubelet[3516]: I0306 03:03:43.183471 3516 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 6 03:03:44.686531 systemd[1]: Started sshd@12-172.31.19.55:22-68.220.241.50:54156.service - OpenSSH per-connection server daemon (68.220.241.50:54156). Mar 6 03:03:45.265679 sshd[6110]: Accepted publickey for core from 68.220.241.50 port 54156 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY Mar 6 03:03:45.269853 sshd-session[6110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 03:03:45.277528 systemd-logind[1959]: New session 12 of user core. Mar 6 03:03:45.284399 systemd[1]: Started session-12.scope - Session 12 of User core. 
Mar 6 03:03:46.263040 sshd[6116]: Connection closed by 68.220.241.50 port 54156 Mar 6 03:03:46.265364 sshd-session[6110]: pam_unix(sshd:session): session closed for user core Mar 6 03:03:46.269692 systemd[1]: sshd@12-172.31.19.55:22-68.220.241.50:54156.service: Deactivated successfully. Mar 6 03:03:46.273054 systemd[1]: session-12.scope: Deactivated successfully. Mar 6 03:03:46.275139 systemd-logind[1959]: Session 12 logged out. Waiting for processes to exit. Mar 6 03:03:46.276953 systemd-logind[1959]: Removed session 12. Mar 6 03:03:51.356749 systemd[1]: Started sshd@13-172.31.19.55:22-68.220.241.50:54166.service - OpenSSH per-connection server daemon (68.220.241.50:54166). Mar 6 03:03:51.898781 sshd[6198]: Accepted publickey for core from 68.220.241.50 port 54166 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY Mar 6 03:03:51.901990 sshd-session[6198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 03:03:51.911061 systemd-logind[1959]: New session 13 of user core. Mar 6 03:03:51.918400 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 6 03:03:52.703447 sshd[6202]: Connection closed by 68.220.241.50 port 54166 Mar 6 03:03:52.704460 sshd-session[6198]: pam_unix(sshd:session): session closed for user core Mar 6 03:03:52.710057 systemd[1]: sshd@13-172.31.19.55:22-68.220.241.50:54166.service: Deactivated successfully. Mar 6 03:03:52.714005 systemd[1]: session-13.scope: Deactivated successfully. Mar 6 03:03:52.715009 systemd-logind[1959]: Session 13 logged out. Waiting for processes to exit. Mar 6 03:03:52.716696 systemd-logind[1959]: Removed session 13. Mar 6 03:03:52.794969 systemd[1]: Started sshd@14-172.31.19.55:22-68.220.241.50:47614.service - OpenSSH per-connection server daemon (68.220.241.50:47614). 
Mar 6 03:03:53.255525 sshd[6215]: Accepted publickey for core from 68.220.241.50 port 47614 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY Mar 6 03:03:53.257237 sshd-session[6215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 03:03:53.263245 systemd-logind[1959]: New session 14 of user core. Mar 6 03:03:53.270319 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 6 03:03:53.677018 sshd[6218]: Connection closed by 68.220.241.50 port 47614 Mar 6 03:03:53.677526 sshd-session[6215]: pam_unix(sshd:session): session closed for user core Mar 6 03:03:53.688386 systemd-logind[1959]: Session 14 logged out. Waiting for processes to exit. Mar 6 03:03:53.689310 systemd[1]: sshd@14-172.31.19.55:22-68.220.241.50:47614.service: Deactivated successfully. Mar 6 03:03:53.691993 systemd[1]: session-14.scope: Deactivated successfully. Mar 6 03:03:53.698159 systemd-logind[1959]: Removed session 14. Mar 6 03:03:53.765497 systemd[1]: Started sshd@15-172.31.19.55:22-68.220.241.50:47624.service - OpenSSH per-connection server daemon (68.220.241.50:47624). Mar 6 03:03:54.212590 sshd[6229]: Accepted publickey for core from 68.220.241.50 port 47624 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY Mar 6 03:03:54.214239 sshd-session[6229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 03:03:54.220549 systemd-logind[1959]: New session 15 of user core. Mar 6 03:03:54.226359 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 6 03:03:54.546947 sshd[6232]: Connection closed by 68.220.241.50 port 47624 Mar 6 03:03:54.547528 sshd-session[6229]: pam_unix(sshd:session): session closed for user core Mar 6 03:03:54.553802 systemd[1]: sshd@15-172.31.19.55:22-68.220.241.50:47624.service: Deactivated successfully. Mar 6 03:03:54.556505 systemd[1]: session-15.scope: Deactivated successfully. Mar 6 03:03:54.558016 systemd-logind[1959]: Session 15 logged out. 
Waiting for processes to exit. Mar 6 03:03:54.560205 systemd-logind[1959]: Removed session 15. Mar 6 03:03:59.636986 systemd[1]: Started sshd@16-172.31.19.55:22-68.220.241.50:47630.service - OpenSSH per-connection server daemon (68.220.241.50:47630). Mar 6 03:04:00.173006 sshd[6260]: Accepted publickey for core from 68.220.241.50 port 47630 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY Mar 6 03:04:00.185552 sshd-session[6260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 03:04:00.193607 systemd-logind[1959]: New session 16 of user core. Mar 6 03:04:00.203970 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 6 03:04:00.695009 sshd[6264]: Connection closed by 68.220.241.50 port 47630 Mar 6 03:04:00.696017 sshd-session[6260]: pam_unix(sshd:session): session closed for user core Mar 6 03:04:00.702640 systemd[1]: sshd@16-172.31.19.55:22-68.220.241.50:47630.service: Deactivated successfully. Mar 6 03:04:00.708287 systemd[1]: session-16.scope: Deactivated successfully. Mar 6 03:04:00.710065 systemd-logind[1959]: Session 16 logged out. Waiting for processes to exit. Mar 6 03:04:00.712839 systemd-logind[1959]: Removed session 16. Mar 6 03:04:00.792340 systemd[1]: Started sshd@17-172.31.19.55:22-68.220.241.50:47634.service - OpenSSH per-connection server daemon (68.220.241.50:47634). Mar 6 03:04:01.251238 sshd[6276]: Accepted publickey for core from 68.220.241.50 port 47634 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY Mar 6 03:04:01.254466 sshd-session[6276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 03:04:01.262550 systemd-logind[1959]: New session 17 of user core. Mar 6 03:04:01.276925 systemd[1]: Started session-17.scope - Session 17 of User core. 
Mar 6 03:04:02.919252 sshd[6279]: Connection closed by 68.220.241.50 port 47634
Mar 6 03:04:02.930350 sshd-session[6276]: pam_unix(sshd:session): session closed for user core
Mar 6 03:04:02.951518 systemd[1]: sshd@17-172.31.19.55:22-68.220.241.50:47634.service: Deactivated successfully.
Mar 6 03:04:02.958404 systemd[1]: session-17.scope: Deactivated successfully.
Mar 6 03:04:02.959502 systemd-logind[1959]: Session 17 logged out. Waiting for processes to exit.
Mar 6 03:04:02.964030 systemd-logind[1959]: Removed session 17.
Mar 6 03:04:03.014688 systemd[1]: Started sshd@18-172.31.19.55:22-68.220.241.50:46600.service - OpenSSH per-connection server daemon (68.220.241.50:46600).
Mar 6 03:04:03.546005 sshd[6312]: Accepted publickey for core from 68.220.241.50 port 46600 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY
Mar 6 03:04:03.547519 sshd-session[6312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:04:03.554997 systemd-logind[1959]: New session 18 of user core.
Mar 6 03:04:03.559347 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 6 03:04:04.781768 sshd[6315]: Connection closed by 68.220.241.50 port 46600
Mar 6 03:04:04.782361 sshd-session[6312]: pam_unix(sshd:session): session closed for user core
Mar 6 03:04:04.789177 systemd-logind[1959]: Session 18 logged out. Waiting for processes to exit.
Mar 6 03:04:04.790050 systemd[1]: sshd@18-172.31.19.55:22-68.220.241.50:46600.service: Deactivated successfully.
Mar 6 03:04:04.793612 systemd[1]: session-18.scope: Deactivated successfully.
Mar 6 03:04:04.795583 systemd-logind[1959]: Removed session 18.
Mar 6 03:04:04.871521 systemd[1]: Started sshd@19-172.31.19.55:22-68.220.241.50:46610.service - OpenSSH per-connection server daemon (68.220.241.50:46610).
Mar 6 03:04:05.319681 sshd[6365]: Accepted publickey for core from 68.220.241.50 port 46610 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY
Mar 6 03:04:05.321482 sshd-session[6365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:04:05.327757 systemd-logind[1959]: New session 19 of user core.
Mar 6 03:04:05.341366 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 6 03:04:06.293290 sshd[6368]: Connection closed by 68.220.241.50 port 46610
Mar 6 03:04:06.294662 sshd-session[6365]: pam_unix(sshd:session): session closed for user core
Mar 6 03:04:06.303614 systemd[1]: sshd@19-172.31.19.55:22-68.220.241.50:46610.service: Deactivated successfully.
Mar 6 03:04:06.306638 systemd[1]: session-19.scope: Deactivated successfully.
Mar 6 03:04:06.308411 systemd-logind[1959]: Session 19 logged out. Waiting for processes to exit.
Mar 6 03:04:06.310829 systemd-logind[1959]: Removed session 19.
Mar 6 03:04:06.387558 systemd[1]: Started sshd@20-172.31.19.55:22-68.220.241.50:46622.service - OpenSSH per-connection server daemon (68.220.241.50:46622).
Mar 6 03:04:06.862673 sshd[6380]: Accepted publickey for core from 68.220.241.50 port 46622 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY
Mar 6 03:04:06.864791 sshd-session[6380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:04:06.871219 systemd-logind[1959]: New session 20 of user core.
Mar 6 03:04:06.877441 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 6 03:04:07.293203 sshd[6383]: Connection closed by 68.220.241.50 port 46622
Mar 6 03:04:07.294461 sshd-session[6380]: pam_unix(sshd:session): session closed for user core
Mar 6 03:04:07.299571 systemd-logind[1959]: Session 20 logged out. Waiting for processes to exit.
Mar 6 03:04:07.300364 systemd[1]: sshd@20-172.31.19.55:22-68.220.241.50:46622.service: Deactivated successfully.
Mar 6 03:04:07.303079 systemd[1]: session-20.scope: Deactivated successfully.
Mar 6 03:04:07.305575 systemd-logind[1959]: Removed session 20.
Mar 6 03:04:12.391471 systemd[1]: Started sshd@21-172.31.19.55:22-68.220.241.50:55392.service - OpenSSH per-connection server daemon (68.220.241.50:55392).
Mar 6 03:04:12.848881 sshd[6403]: Accepted publickey for core from 68.220.241.50 port 55392 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY
Mar 6 03:04:12.850671 sshd-session[6403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:04:12.855707 systemd-logind[1959]: New session 21 of user core.
Mar 6 03:04:12.863436 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 6 03:04:13.182470 sshd[6406]: Connection closed by 68.220.241.50 port 55392
Mar 6 03:04:13.184626 sshd-session[6403]: pam_unix(sshd:session): session closed for user core
Mar 6 03:04:13.189626 systemd[1]: sshd@21-172.31.19.55:22-68.220.241.50:55392.service: Deactivated successfully.
Mar 6 03:04:13.189918 systemd-logind[1959]: Session 21 logged out. Waiting for processes to exit.
Mar 6 03:04:13.192622 systemd[1]: session-21.scope: Deactivated successfully.
Mar 6 03:04:13.194743 systemd-logind[1959]: Removed session 21.
Mar 6 03:04:18.273349 systemd[1]: Started sshd@22-172.31.19.55:22-68.220.241.50:55400.service - OpenSSH per-connection server daemon (68.220.241.50:55400).
Mar 6 03:04:18.788415 sshd[6443]: Accepted publickey for core from 68.220.241.50 port 55400 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY
Mar 6 03:04:18.790552 sshd-session[6443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:04:18.797083 systemd-logind[1959]: New session 22 of user core.
Mar 6 03:04:18.804348 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 6 03:04:19.514676 sshd[6446]: Connection closed by 68.220.241.50 port 55400
Mar 6 03:04:19.516284 sshd-session[6443]: pam_unix(sshd:session): session closed for user core
Mar 6 03:04:19.521077 systemd[1]: sshd@22-172.31.19.55:22-68.220.241.50:55400.service: Deactivated successfully.
Mar 6 03:04:19.524074 systemd[1]: session-22.scope: Deactivated successfully.
Mar 6 03:04:19.525548 systemd-logind[1959]: Session 22 logged out. Waiting for processes to exit.
Mar 6 03:04:19.527518 systemd-logind[1959]: Removed session 22.
Mar 6 03:04:24.614440 systemd[1]: Started sshd@23-172.31.19.55:22-68.220.241.50:39454.service - OpenSSH per-connection server daemon (68.220.241.50:39454).
Mar 6 03:04:25.144561 sshd[6483]: Accepted publickey for core from 68.220.241.50 port 39454 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY
Mar 6 03:04:25.147872 sshd-session[6483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:04:25.155861 systemd-logind[1959]: New session 23 of user core.
Mar 6 03:04:25.162319 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 6 03:04:25.898175 sshd[6486]: Connection closed by 68.220.241.50 port 39454
Mar 6 03:04:25.900334 sshd-session[6483]: pam_unix(sshd:session): session closed for user core
Mar 6 03:04:25.905790 systemd-logind[1959]: Session 23 logged out. Waiting for processes to exit.
Mar 6 03:04:25.907944 systemd[1]: sshd@23-172.31.19.55:22-68.220.241.50:39454.service: Deactivated successfully.
Mar 6 03:04:25.912061 systemd[1]: session-23.scope: Deactivated successfully.
Mar 6 03:04:25.916721 systemd-logind[1959]: Removed session 23.
Mar 6 03:04:30.993394 systemd[1]: Started sshd@24-172.31.19.55:22-68.220.241.50:39458.service - OpenSSH per-connection server daemon (68.220.241.50:39458).
Mar 6 03:04:31.493881 sshd[6501]: Accepted publickey for core from 68.220.241.50 port 39458 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY
Mar 6 03:04:31.495758 sshd-session[6501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:04:31.501208 systemd-logind[1959]: New session 24 of user core.
Mar 6 03:04:31.509318 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 6 03:04:32.516379 sshd[6529]: Connection closed by 68.220.241.50 port 39458
Mar 6 03:04:32.518297 sshd-session[6501]: pam_unix(sshd:session): session closed for user core
Mar 6 03:04:32.522689 systemd[1]: sshd@24-172.31.19.55:22-68.220.241.50:39458.service: Deactivated successfully.
Mar 6 03:04:32.526047 systemd[1]: session-24.scope: Deactivated successfully.
Mar 6 03:04:32.528055 systemd-logind[1959]: Session 24 logged out. Waiting for processes to exit.
Mar 6 03:04:32.530628 systemd-logind[1959]: Removed session 24.
Mar 6 03:04:37.613467 systemd[1]: Started sshd@25-172.31.19.55:22-68.220.241.50:54674.service - OpenSSH per-connection server daemon (68.220.241.50:54674).
Mar 6 03:04:38.103333 sshd[6564]: Accepted publickey for core from 68.220.241.50 port 54674 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY
Mar 6 03:04:38.105461 sshd-session[6564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:04:38.112632 systemd-logind[1959]: New session 25 of user core.
Mar 6 03:04:38.118324 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 6 03:04:38.575941 sshd[6567]: Connection closed by 68.220.241.50 port 54674
Mar 6 03:04:38.577375 sshd-session[6564]: pam_unix(sshd:session): session closed for user core
Mar 6 03:04:38.582221 systemd-logind[1959]: Session 25 logged out. Waiting for processes to exit.
Mar 6 03:04:38.583192 systemd[1]: sshd@25-172.31.19.55:22-68.220.241.50:54674.service: Deactivated successfully.
Mar 6 03:04:38.585743 systemd[1]: session-25.scope: Deactivated successfully.
Mar 6 03:04:38.588131 systemd-logind[1959]: Removed session 25.
Mar 6 03:04:53.858853 kubelet[3516]: E0306 03:04:53.858770 3516 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-55?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 6 03:04:54.332807 systemd[1]: cri-containerd-e57686e9eb9ee93ec79ad073b5a8a405917fc9011dbb41d698643bd5289872f5.scope: Deactivated successfully.
Mar 6 03:04:54.333202 systemd[1]: cri-containerd-e57686e9eb9ee93ec79ad073b5a8a405917fc9011dbb41d698643bd5289872f5.scope: Consumed 4.548s CPU time, 85.7M memory peak, 83.8M read from disk.
Mar 6 03:04:54.649868 systemd[1]: cri-containerd-e21d2735de35946808258c2572b0bc59f0c4e577fff268ce661169a6b6cbb1fc.scope: Deactivated successfully.
Mar 6 03:04:54.651329 systemd[1]: cri-containerd-e21d2735de35946808258c2572b0bc59f0c4e577fff268ce661169a6b6cbb1fc.scope: Consumed 6.876s CPU time, 125.2M memory peak, 62.7M read from disk.
Mar 6 03:04:54.657609 containerd[1974]: time="2026-03-06T03:04:54.634563003Z" level=info msg="received container exit event container_id:\"e57686e9eb9ee93ec79ad073b5a8a405917fc9011dbb41d698643bd5289872f5\" id:\"e57686e9eb9ee93ec79ad073b5a8a405917fc9011dbb41d698643bd5289872f5\" pid:3167 exit_status:1 exited_at:{seconds:1772766294 nanos:416925480}"
Mar 6 03:04:54.664475 containerd[1974]: time="2026-03-06T03:04:54.664427855Z" level=info msg="received container exit event container_id:\"e21d2735de35946808258c2572b0bc59f0c4e577fff268ce661169a6b6cbb1fc\" id:\"e21d2735de35946808258c2572b0bc59f0c4e577fff268ce661169a6b6cbb1fc\" pid:3842 exit_status:1 exited_at:{seconds:1772766294 nanos:660443497}"
Mar 6 03:04:54.825802 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e21d2735de35946808258c2572b0bc59f0c4e577fff268ce661169a6b6cbb1fc-rootfs.mount: Deactivated successfully.
Mar 6 03:04:54.826241 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e57686e9eb9ee93ec79ad073b5a8a405917fc9011dbb41d698643bd5289872f5-rootfs.mount: Deactivated successfully.
Mar 6 03:04:55.063378 kubelet[3516]: I0306 03:04:55.062896 3516 scope.go:117] "RemoveContainer" containerID="e21d2735de35946808258c2572b0bc59f0c4e577fff268ce661169a6b6cbb1fc"
Mar 6 03:04:55.064230 kubelet[3516]: I0306 03:04:55.063458 3516 scope.go:117] "RemoveContainer" containerID="e57686e9eb9ee93ec79ad073b5a8a405917fc9011dbb41d698643bd5289872f5"
Mar 6 03:04:55.129279 containerd[1974]: time="2026-03-06T03:04:55.129228576Z" level=info msg="CreateContainer within sandbox \"994c050aa71b9144c51edd7a98bc3fbec79b4b8c59314e37fda1f8e177f273da\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 6 03:04:55.129648 containerd[1974]: time="2026-03-06T03:04:55.129228984Z" level=info msg="CreateContainer within sandbox \"dc447761e082af1b70918c38c1020add763f1e8efb6546d8b0e8db7a7853cda1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Mar 6 03:04:55.342191 containerd[1974]: time="2026-03-06T03:04:55.338387053Z" level=info msg="Container 1b70d98b4d5b9361096ef9b0011565ed911bc36f75f7f9aa4206bb9c599d1b4a: CDI devices from CRI Config.CDIDevices: []"
Mar 6 03:04:55.345071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1275792842.mount: Deactivated successfully.
Mar 6 03:04:55.350624 containerd[1974]: time="2026-03-06T03:04:55.347477659Z" level=info msg="Container ee2e267b742d0dbb4dbd6533082a970fe9fa0b72543823eb36e7ecb7bb4edb87: CDI devices from CRI Config.CDIDevices: []"
Mar 6 03:04:55.377061 containerd[1974]: time="2026-03-06T03:04:55.377013249Z" level=info msg="CreateContainer within sandbox \"dc447761e082af1b70918c38c1020add763f1e8efb6546d8b0e8db7a7853cda1\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"ee2e267b742d0dbb4dbd6533082a970fe9fa0b72543823eb36e7ecb7bb4edb87\""
Mar 6 03:04:55.377739 containerd[1974]: time="2026-03-06T03:04:55.377705333Z" level=info msg="StartContainer for \"ee2e267b742d0dbb4dbd6533082a970fe9fa0b72543823eb36e7ecb7bb4edb87\""
Mar 6 03:04:55.378633 containerd[1974]: time="2026-03-06T03:04:55.378598328Z" level=info msg="CreateContainer within sandbox \"994c050aa71b9144c51edd7a98bc3fbec79b4b8c59314e37fda1f8e177f273da\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"1b70d98b4d5b9361096ef9b0011565ed911bc36f75f7f9aa4206bb9c599d1b4a\""
Mar 6 03:04:55.379668 containerd[1974]: time="2026-03-06T03:04:55.379636173Z" level=info msg="StartContainer for \"1b70d98b4d5b9361096ef9b0011565ed911bc36f75f7f9aa4206bb9c599d1b4a\""
Mar 6 03:04:55.388226 containerd[1974]: time="2026-03-06T03:04:55.388175112Z" level=info msg="connecting to shim ee2e267b742d0dbb4dbd6533082a970fe9fa0b72543823eb36e7ecb7bb4edb87" address="unix:///run/containerd/s/6d734ab45c3e833e3b88a265ae6ecff0d9f5330835e62fc78d64a890266bdac5" protocol=ttrpc version=3
Mar 6 03:04:55.388536 containerd[1974]: time="2026-03-06T03:04:55.388176078Z" level=info msg="connecting to shim 1b70d98b4d5b9361096ef9b0011565ed911bc36f75f7f9aa4206bb9c599d1b4a" address="unix:///run/containerd/s/58378d68ab07f5b621e250c28790c4122e0f050cd4f5f5ff258caec19af80f02" protocol=ttrpc version=3
Mar 6 03:04:55.470333 systemd[1]: Started cri-containerd-1b70d98b4d5b9361096ef9b0011565ed911bc36f75f7f9aa4206bb9c599d1b4a.scope - libcontainer container 1b70d98b4d5b9361096ef9b0011565ed911bc36f75f7f9aa4206bb9c599d1b4a.
Mar 6 03:04:55.482516 systemd[1]: Started cri-containerd-ee2e267b742d0dbb4dbd6533082a970fe9fa0b72543823eb36e7ecb7bb4edb87.scope - libcontainer container ee2e267b742d0dbb4dbd6533082a970fe9fa0b72543823eb36e7ecb7bb4edb87.
Mar 6 03:04:55.557232 containerd[1974]: time="2026-03-06T03:04:55.556503501Z" level=info msg="StartContainer for \"ee2e267b742d0dbb4dbd6533082a970fe9fa0b72543823eb36e7ecb7bb4edb87\" returns successfully"
Mar 6 03:04:55.602398 containerd[1974]: time="2026-03-06T03:04:55.601865009Z" level=info msg="StartContainer for \"1b70d98b4d5b9361096ef9b0011565ed911bc36f75f7f9aa4206bb9c599d1b4a\" returns successfully"
Mar 6 03:04:59.476296 systemd[1]: cri-containerd-ddecd7b897994eb1ce693fc12777ff8231396763ae8b5f1c3998f2a70441cdc8.scope: Deactivated successfully.
Mar 6 03:04:59.478247 systemd[1]: cri-containerd-ddecd7b897994eb1ce693fc12777ff8231396763ae8b5f1c3998f2a70441cdc8.scope: Consumed 2.438s CPU time, 35.9M memory peak, 49.5M read from disk.
Mar 6 03:04:59.481658 containerd[1974]: time="2026-03-06T03:04:59.478089954Z" level=info msg="received container exit event container_id:\"ddecd7b897994eb1ce693fc12777ff8231396763ae8b5f1c3998f2a70441cdc8\" id:\"ddecd7b897994eb1ce693fc12777ff8231396763ae8b5f1c3998f2a70441cdc8\" pid:3174 exit_status:1 exited_at:{seconds:1772766299 nanos:477699369}"
Mar 6 03:04:59.531145 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ddecd7b897994eb1ce693fc12777ff8231396763ae8b5f1c3998f2a70441cdc8-rootfs.mount: Deactivated successfully.
Mar 6 03:05:00.107269 kubelet[3516]: I0306 03:05:00.107230 3516 scope.go:117] "RemoveContainer" containerID="ddecd7b897994eb1ce693fc12777ff8231396763ae8b5f1c3998f2a70441cdc8"
Mar 6 03:05:00.123191 containerd[1974]: time="2026-03-06T03:05:00.122136342Z" level=info msg="CreateContainer within sandbox \"1ea50e87c49ae9d0d112c8e0037e13c4ca84048e880fe35e87464f208049305b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Mar 6 03:05:00.153127 containerd[1974]: time="2026-03-06T03:05:00.152289166Z" level=info msg="Container cedb4ea7a1816e784145f4bc30605789fd43c04b18d62dcf85d85592ad399349: CDI devices from CRI Config.CDIDevices: []"
Mar 6 03:05:00.169758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2654188886.mount: Deactivated successfully.
Mar 6 03:05:00.208469 containerd[1974]: time="2026-03-06T03:05:00.207329527Z" level=info msg="CreateContainer within sandbox \"1ea50e87c49ae9d0d112c8e0037e13c4ca84048e880fe35e87464f208049305b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"cedb4ea7a1816e784145f4bc30605789fd43c04b18d62dcf85d85592ad399349\""
Mar 6 03:05:00.209426 containerd[1974]: time="2026-03-06T03:05:00.209360250Z" level=info msg="StartContainer for \"cedb4ea7a1816e784145f4bc30605789fd43c04b18d62dcf85d85592ad399349\""
Mar 6 03:05:00.214335 containerd[1974]: time="2026-03-06T03:05:00.214286935Z" level=info msg="connecting to shim cedb4ea7a1816e784145f4bc30605789fd43c04b18d62dcf85d85592ad399349" address="unix:///run/containerd/s/bbb56e0d9a0f2ba25d419ad16821b5cb3eb4141507f3583ce6cb7f8425ef48e3" protocol=ttrpc version=3
Mar 6 03:05:00.243371 systemd[1]: Started cri-containerd-cedb4ea7a1816e784145f4bc30605789fd43c04b18d62dcf85d85592ad399349.scope - libcontainer container cedb4ea7a1816e784145f4bc30605789fd43c04b18d62dcf85d85592ad399349.
Mar 6 03:05:00.358552 containerd[1974]: time="2026-03-06T03:05:00.358070345Z" level=info msg="StartContainer for \"cedb4ea7a1816e784145f4bc30605789fd43c04b18d62dcf85d85592ad399349\" returns successfully"
Mar 6 03:05:03.859673 kubelet[3516]: E0306 03:05:03.859501 3516 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-55?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 6 03:05:09.651675 systemd[1]: cri-containerd-ee2e267b742d0dbb4dbd6533082a970fe9fa0b72543823eb36e7ecb7bb4edb87.scope: Deactivated successfully.
Mar 6 03:05:09.652072 systemd[1]: cri-containerd-ee2e267b742d0dbb4dbd6533082a970fe9fa0b72543823eb36e7ecb7bb4edb87.scope: Consumed 409ms CPU time, 85.2M memory peak, 49.8M read from disk.
Mar 6 03:05:09.653708 containerd[1974]: time="2026-03-06T03:05:09.653639676Z" level=info msg="received container exit event container_id:\"ee2e267b742d0dbb4dbd6533082a970fe9fa0b72543823eb36e7ecb7bb4edb87\" id:\"ee2e267b742d0dbb4dbd6533082a970fe9fa0b72543823eb36e7ecb7bb4edb87\" pid:6731 exit_status:1 exited_at:{seconds:1772766309 nanos:653328517}"
Mar 6 03:05:09.682244 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee2e267b742d0dbb4dbd6533082a970fe9fa0b72543823eb36e7ecb7bb4edb87-rootfs.mount: Deactivated successfully.
Mar 6 03:05:10.171085 kubelet[3516]: I0306 03:05:10.170955 3516 scope.go:117] "RemoveContainer" containerID="e21d2735de35946808258c2572b0bc59f0c4e577fff268ce661169a6b6cbb1fc"
Mar 6 03:05:10.172549 kubelet[3516]: I0306 03:05:10.171257 3516 scope.go:117] "RemoveContainer" containerID="ee2e267b742d0dbb4dbd6533082a970fe9fa0b72543823eb36e7ecb7bb4edb87"
Mar 6 03:05:10.175135 kubelet[3516]: E0306 03:05:10.174261 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-5588576f44-pwn58_tigera-operator(5bf10edb-fa26-401a-b3a9-f4fc5d38aadd)\"" pod="tigera-operator/tigera-operator-5588576f44-pwn58" podUID="5bf10edb-fa26-401a-b3a9-f4fc5d38aadd"
Mar 6 03:05:10.217041 containerd[1974]: time="2026-03-06T03:05:10.216975491Z" level=info msg="RemoveContainer for \"e21d2735de35946808258c2572b0bc59f0c4e577fff268ce661169a6b6cbb1fc\""
Mar 6 03:05:10.232967 containerd[1974]: time="2026-03-06T03:05:10.232908706Z" level=info msg="RemoveContainer for \"e21d2735de35946808258c2572b0bc59f0c4e577fff268ce661169a6b6cbb1fc\" returns successfully"
Mar 6 03:05:13.865534 kubelet[3516]: E0306 03:05:13.865469 3516 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-55?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"