Mar 14 00:15:59.951006 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 13 22:25:24 -00 2026
Mar 14 00:15:59.951051 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:15:59.951071 kernel: BIOS-provided physical RAM map:
Mar 14 00:15:59.951083 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 14 00:15:59.951094 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Mar 14 00:15:59.951106 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Mar 14 00:15:59.951120 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Mar 14 00:15:59.951133 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Mar 14 00:15:59.951145 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Mar 14 00:15:59.951161 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Mar 14 00:15:59.951174 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Mar 14 00:15:59.951186 kernel: NX (Execute Disable) protection: active
Mar 14 00:15:59.951199 kernel: APIC: Static calls initialized
Mar 14 00:15:59.951213 kernel: efi: EFI v2.7 by EDK II
Mar 14 00:15:59.951229 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77015518
Mar 14 00:15:59.951248 kernel: SMBIOS 2.7 present.
Mar 14 00:15:59.951262 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Mar 14 00:15:59.951276 kernel: Hypervisor detected: KVM
Mar 14 00:15:59.951291 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 14 00:15:59.951305 kernel: kvm-clock: using sched offset of 4357259338 cycles
Mar 14 00:15:59.951321 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 14 00:15:59.951336 kernel: tsc: Detected 2499.998 MHz processor
Mar 14 00:15:59.951351 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 14 00:15:59.951366 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 14 00:15:59.951382 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Mar 14 00:15:59.951401 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 14 00:15:59.951415 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 14 00:15:59.951430 kernel: Using GB pages for direct mapping
Mar 14 00:15:59.951446 kernel: Secure boot disabled
Mar 14 00:15:59.951460 kernel: ACPI: Early table checksum verification disabled
Mar 14 00:15:59.951475 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Mar 14 00:15:59.951490 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Mar 14 00:15:59.951506 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Mar 14 00:15:59.951521 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Mar 14 00:15:59.951540 kernel: ACPI: FACS 0x00000000789D0000 000040
Mar 14 00:15:59.951554 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Mar 14 00:15:59.951569 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Mar 14 00:15:59.951584 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Mar 14 00:15:59.951599 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Mar 14 00:15:59.951614 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Mar 14 00:15:59.951636 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Mar 14 00:15:59.951656 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Mar 14 00:15:59.951672 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Mar 14 00:15:59.951688 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Mar 14 00:15:59.951704 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Mar 14 00:15:59.951720 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Mar 14 00:15:59.951735 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Mar 14 00:15:59.951755 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Mar 14 00:15:59.951771 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Mar 14 00:15:59.951787 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Mar 14 00:15:59.951802 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Mar 14 00:15:59.951818 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Mar 14 00:15:59.951834 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Mar 14 00:15:59.951850 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Mar 14 00:15:59.951866 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 14 00:15:59.951903 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 14 00:15:59.951917 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Mar 14 00:15:59.951935 kernel: NUMA: Initialized distance table, cnt=1
Mar 14 00:15:59.951950 kernel: NODE_DATA(0) allocated [mem 0x7a8f0000-0x7a8f5fff]
Mar 14 00:15:59.951965 kernel: Zone ranges:
Mar 14 00:15:59.951980 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 14 00:15:59.951995 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Mar 14 00:15:59.952010 kernel: Normal empty
Mar 14 00:15:59.952025 kernel: Movable zone start for each node
Mar 14 00:15:59.952040 kernel: Early memory node ranges
Mar 14 00:15:59.952055 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 14 00:15:59.952084 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Mar 14 00:15:59.952099 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Mar 14 00:15:59.952114 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Mar 14 00:15:59.952129 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 14 00:15:59.952144 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 14 00:15:59.952159 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Mar 14 00:15:59.952175 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Mar 14 00:15:59.952190 kernel: ACPI: PM-Timer IO Port: 0xb008
Mar 14 00:15:59.952205 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 14 00:15:59.952221 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Mar 14 00:15:59.952239 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 14 00:15:59.952254 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 14 00:15:59.952269 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 14 00:15:59.952284 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 14 00:15:59.952300 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 14 00:15:59.952314 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 14 00:15:59.952329 kernel: TSC deadline timer available
Mar 14 00:15:59.952344 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 14 00:15:59.952359 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 14 00:15:59.952378 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Mar 14 00:15:59.952392 kernel: Booting paravirtualized kernel on KVM
Mar 14 00:15:59.952408 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 14 00:15:59.952423 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 14 00:15:59.952438 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Mar 14 00:15:59.952453 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Mar 14 00:15:59.952468 kernel: pcpu-alloc: [0] 0 1
Mar 14 00:15:59.952482 kernel: kvm-guest: PV spinlocks enabled
Mar 14 00:15:59.952497 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 14 00:15:59.952517 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:15:59.952532 kernel: random: crng init done
Mar 14 00:15:59.952547 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 14 00:15:59.952563 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 14 00:15:59.952578 kernel: Fallback order for Node 0: 0
Mar 14 00:15:59.952593 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Mar 14 00:15:59.952608 kernel: Policy zone: DMA32
Mar 14 00:15:59.952623 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 14 00:15:59.952642 kernel: Memory: 1874628K/2037804K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 162916K reserved, 0K cma-reserved)
Mar 14 00:15:59.952658 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 14 00:15:59.952673 kernel: Kernel/User page tables isolation: enabled
Mar 14 00:15:59.952688 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 14 00:15:59.952703 kernel: ftrace: allocated 149 pages with 4 groups
Mar 14 00:15:59.952718 kernel: Dynamic Preempt: voluntary
Mar 14 00:15:59.952733 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 14 00:15:59.952749 kernel: rcu: RCU event tracing is enabled.
Mar 14 00:15:59.952765 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 14 00:15:59.952783 kernel: Trampoline variant of Tasks RCU enabled.
Mar 14 00:15:59.952798 kernel: Rude variant of Tasks RCU enabled.
Mar 14 00:15:59.952813 kernel: Tracing variant of Tasks RCU enabled.
Mar 14 00:15:59.952828 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 14 00:15:59.952844 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 14 00:15:59.954933 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 14 00:15:59.954960 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 14 00:15:59.954997 kernel: Console: colour dummy device 80x25
Mar 14 00:15:59.955015 kernel: printk: console [tty0] enabled
Mar 14 00:15:59.955032 kernel: printk: console [ttyS0] enabled
Mar 14 00:15:59.955049 kernel: ACPI: Core revision 20230628
Mar 14 00:15:59.955067 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Mar 14 00:15:59.955087 kernel: APIC: Switch to symmetric I/O mode setup
Mar 14 00:15:59.955105 kernel: x2apic enabled
Mar 14 00:15:59.955122 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 14 00:15:59.955140 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Mar 14 00:15:59.955158 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Mar 14 00:15:59.955179 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Mar 14 00:15:59.955196 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Mar 14 00:15:59.955213 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 14 00:15:59.955230 kernel: Spectre V2 : Mitigation: Retpolines
Mar 14 00:15:59.955247 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 14 00:15:59.955264 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Mar 14 00:15:59.955279 kernel: RETBleed: Vulnerable
Mar 14 00:15:59.955293 kernel: Speculative Store Bypass: Vulnerable
Mar 14 00:15:59.955310 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 14 00:15:59.955324 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 14 00:15:59.955343 kernel: GDS: Unknown: Dependent on hypervisor status
Mar 14 00:15:59.955357 kernel: active return thunk: its_return_thunk
Mar 14 00:15:59.955372 kernel: ITS: Mitigation: Aligned branch/return thunks
Mar 14 00:15:59.955387 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 14 00:15:59.955402 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 14 00:15:59.955419 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 14 00:15:59.955435 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Mar 14 00:15:59.955450 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Mar 14 00:15:59.955464 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Mar 14 00:15:59.955479 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Mar 14 00:15:59.955495 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Mar 14 00:15:59.955513 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Mar 14 00:15:59.955527 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 14 00:15:59.955543 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Mar 14 00:15:59.955560 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Mar 14 00:15:59.955576 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Mar 14 00:15:59.955593 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Mar 14 00:15:59.955610 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Mar 14 00:15:59.955626 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Mar 14 00:15:59.955643 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Mar 14 00:15:59.955658 kernel: Freeing SMP alternatives memory: 32K
Mar 14 00:15:59.955672 kernel: pid_max: default: 32768 minimum: 301
Mar 14 00:15:59.955690 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 14 00:15:59.955704 kernel: landlock: Up and running.
Mar 14 00:15:59.955718 kernel: SELinux: Initializing.
Mar 14 00:15:59.955733 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 14 00:15:59.955747 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 14 00:15:59.955762 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Mar 14 00:15:59.955777 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:15:59.955793 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:15:59.955808 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:15:59.955824 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Mar 14 00:15:59.955842 kernel: signal: max sigframe size: 3632
Mar 14 00:15:59.955857 kernel: rcu: Hierarchical SRCU implementation.
Mar 14 00:15:59.955872 kernel: rcu: Max phase no-delay instances is 400.
Mar 14 00:15:59.955903 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 14 00:15:59.955916 kernel: smp: Bringing up secondary CPUs ...
Mar 14 00:15:59.955930 kernel: smpboot: x86: Booting SMP configuration:
Mar 14 00:15:59.955943 kernel: .... node #0, CPUs: #1
Mar 14 00:15:59.955958 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Mar 14 00:15:59.955973 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Mar 14 00:15:59.955991 kernel: smp: Brought up 1 node, 2 CPUs
Mar 14 00:15:59.956005 kernel: smpboot: Max logical packages: 1
Mar 14 00:15:59.956018 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Mar 14 00:15:59.956031 kernel: devtmpfs: initialized
Mar 14 00:15:59.956045 kernel: x86/mm: Memory block size: 128MB
Mar 14 00:15:59.956060 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Mar 14 00:15:59.956086 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 14 00:15:59.956099 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 14 00:15:59.956112 kernel: pinctrl core: initialized pinctrl subsystem
Mar 14 00:15:59.956130 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 14 00:15:59.956144 kernel: audit: initializing netlink subsys (disabled)
Mar 14 00:15:59.956160 kernel: audit: type=2000 audit(1773447359.906:1): state=initialized audit_enabled=0 res=1
Mar 14 00:15:59.956176 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 14 00:15:59.956190 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 14 00:15:59.956205 kernel: cpuidle: using governor menu
Mar 14 00:15:59.956221 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 14 00:15:59.956237 kernel: dca service started, version 1.12.1
Mar 14 00:15:59.956254 kernel: PCI: Using configuration type 1 for base access
Mar 14 00:15:59.956274 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 14 00:15:59.956291 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 14 00:15:59.956308 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 14 00:15:59.956324 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 14 00:15:59.956341 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 14 00:15:59.956358 kernel: ACPI: Added _OSI(Module Device)
Mar 14 00:15:59.956375 kernel: ACPI: Added _OSI(Processor Device)
Mar 14 00:15:59.956391 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 14 00:15:59.956408 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Mar 14 00:15:59.956427 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 14 00:15:59.956444 kernel: ACPI: Interpreter enabled
Mar 14 00:15:59.956461 kernel: ACPI: PM: (supports S0 S5)
Mar 14 00:15:59.956477 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 14 00:15:59.956494 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 14 00:15:59.956511 kernel: PCI: Using E820 reservations for host bridge windows
Mar 14 00:15:59.956528 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Mar 14 00:15:59.956545 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 14 00:15:59.956788 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Mar 14 00:15:59.958003 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Mar 14 00:15:59.958162 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Mar 14 00:15:59.958185 kernel: acpiphp: Slot [3] registered
Mar 14 00:15:59.958201 kernel: acpiphp: Slot [4] registered
Mar 14 00:15:59.958218 kernel: acpiphp: Slot [5] registered
Mar 14 00:15:59.958233 kernel: acpiphp: Slot [6] registered
Mar 14 00:15:59.958249 kernel: acpiphp: Slot [7] registered
Mar 14 00:15:59.958271 kernel: acpiphp: Slot [8] registered
Mar 14 00:15:59.958286 kernel: acpiphp: Slot [9] registered
Mar 14 00:15:59.958302 kernel: acpiphp: Slot [10] registered
Mar 14 00:15:59.958318 kernel: acpiphp: Slot [11] registered
Mar 14 00:15:59.958334 kernel: acpiphp: Slot [12] registered
Mar 14 00:15:59.958350 kernel: acpiphp: Slot [13] registered
Mar 14 00:15:59.958366 kernel: acpiphp: Slot [14] registered
Mar 14 00:15:59.958382 kernel: acpiphp: Slot [15] registered
Mar 14 00:15:59.958397 kernel: acpiphp: Slot [16] registered
Mar 14 00:15:59.958413 kernel: acpiphp: Slot [17] registered
Mar 14 00:15:59.958432 kernel: acpiphp: Slot [18] registered
Mar 14 00:15:59.958447 kernel: acpiphp: Slot [19] registered
Mar 14 00:15:59.958463 kernel: acpiphp: Slot [20] registered
Mar 14 00:15:59.958479 kernel: acpiphp: Slot [21] registered
Mar 14 00:15:59.958495 kernel: acpiphp: Slot [22] registered
Mar 14 00:15:59.958510 kernel: acpiphp: Slot [23] registered
Mar 14 00:15:59.958526 kernel: acpiphp: Slot [24] registered
Mar 14 00:15:59.958542 kernel: acpiphp: Slot [25] registered
Mar 14 00:15:59.958558 kernel: acpiphp: Slot [26] registered
Mar 14 00:15:59.958577 kernel: acpiphp: Slot [27] registered
Mar 14 00:15:59.958592 kernel: acpiphp: Slot [28] registered
Mar 14 00:15:59.958608 kernel: acpiphp: Slot [29] registered
Mar 14 00:15:59.958624 kernel: acpiphp: Slot [30] registered
Mar 14 00:15:59.958641 kernel: acpiphp: Slot [31] registered
Mar 14 00:15:59.958657 kernel: PCI host bridge to bus 0000:00
Mar 14 00:15:59.958823 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 14 00:15:59.958982 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 14 00:15:59.959116 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 14 00:15:59.959242 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Mar 14 00:15:59.959361 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Mar 14 00:15:59.959479 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 14 00:15:59.959631 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Mar 14 00:15:59.959773 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Mar 14 00:15:59.961075 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Mar 14 00:15:59.961228 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Mar 14 00:15:59.961361 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Mar 14 00:15:59.961492 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Mar 14 00:15:59.961623 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Mar 14 00:15:59.962821 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Mar 14 00:15:59.963033 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Mar 14 00:15:59.963205 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Mar 14 00:15:59.963353 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x180 took 11718 usecs
Mar 14 00:15:59.963511 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Mar 14 00:15:59.963657 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Mar 14 00:15:59.963797 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Mar 14 00:15:59.964643 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Mar 14 00:15:59.964799 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 14 00:15:59.965044 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Mar 14 00:15:59.965195 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Mar 14 00:15:59.965344 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Mar 14 00:15:59.965486 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Mar 14 00:15:59.965508 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 14 00:15:59.965526 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 14 00:15:59.965543 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 14 00:15:59.965560 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 14 00:15:59.965581 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar 14 00:15:59.965597 kernel: iommu: Default domain type: Translated
Mar 14 00:15:59.965614 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 14 00:15:59.965631 kernel: efivars: Registered efivars operations
Mar 14 00:15:59.965647 kernel: PCI: Using ACPI for IRQ routing
Mar 14 00:15:59.965664 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 14 00:15:59.965681 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Mar 14 00:15:59.965697 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Mar 14 00:15:59.965836 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Mar 14 00:15:59.966148 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Mar 14 00:15:59.966287 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 14 00:15:59.966307 kernel: vgaarb: loaded
Mar 14 00:15:59.966322 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Mar 14 00:15:59.966338 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Mar 14 00:15:59.966352 kernel: clocksource: Switched to clocksource kvm-clock
Mar 14 00:15:59.966368 kernel: VFS: Disk quotas dquot_6.6.0
Mar 14 00:15:59.966383 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 14 00:15:59.966403 kernel: pnp: PnP ACPI init
Mar 14 00:15:59.966418 kernel: pnp: PnP ACPI: found 5 devices
Mar 14 00:15:59.966433 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 14 00:15:59.966448 kernel: NET: Registered PF_INET protocol family
Mar 14 00:15:59.966463 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 14 00:15:59.966479 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 14 00:15:59.966495 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 14 00:15:59.966510 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 14 00:15:59.966526 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Mar 14 00:15:59.966544 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 14 00:15:59.966560 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 14 00:15:59.966575 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 14 00:15:59.966591 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 14 00:15:59.966606 kernel: NET: Registered PF_XDP protocol family
Mar 14 00:15:59.966737 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 14 00:15:59.966858 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 14 00:15:59.967051 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 14 00:15:59.967170 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Mar 14 00:15:59.967315 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Mar 14 00:15:59.967478 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 14 00:15:59.967498 kernel: PCI: CLS 0 bytes, default 64
Mar 14 00:15:59.967514 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 14 00:15:59.967530 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Mar 14 00:15:59.967546 kernel: clocksource: Switched to clocksource tsc
Mar 14 00:15:59.967561 kernel: Initialise system trusted keyrings
Mar 14 00:15:59.967577 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Mar 14 00:15:59.967596 kernel: Key type asymmetric registered
Mar 14 00:15:59.967611 kernel: Asymmetric key parser 'x509' registered
Mar 14 00:15:59.967625 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 14 00:15:59.967639 kernel: io scheduler mq-deadline registered
Mar 14 00:15:59.967655 kernel: io scheduler kyber registered
Mar 14 00:15:59.967670 kernel: io scheduler bfq registered
Mar 14 00:15:59.967685 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 14 00:15:59.967700 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 14 00:15:59.967716 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 14 00:15:59.967735 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 14 00:15:59.967750 kernel: i8042: Warning: Keylock active
Mar 14 00:15:59.967765 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 14 00:15:59.967781 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 14 00:15:59.967959 kernel: rtc_cmos 00:00: RTC can wake from S4
Mar 14 00:15:59.968099 kernel: rtc_cmos 00:00: registered as rtc0
Mar 14 00:15:59.968224 kernel: rtc_cmos 00:00: setting system clock to 2026-03-14T00:15:59 UTC (1773447359)
Mar 14 00:15:59.968351 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Mar 14 00:15:59.968375 kernel: intel_pstate: CPU model not supported
Mar 14 00:15:59.968391 kernel: efifb: probing for efifb
Mar 14 00:15:59.968409 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Mar 14 00:15:59.968426 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Mar 14 00:15:59.968443 kernel: efifb: scrolling: redraw
Mar 14 00:15:59.968460 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 14 00:15:59.968477 kernel: Console: switching to colour frame buffer device 100x37
Mar 14 00:15:59.968493 kernel: fb0: EFI VGA frame buffer device
Mar 14 00:15:59.968510 kernel: pstore: Using crash dump compression: deflate
Mar 14 00:15:59.968530 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 14 00:15:59.968546 kernel: NET: Registered PF_INET6 protocol family
Mar 14 00:15:59.968563 kernel: Segment Routing with IPv6
Mar 14 00:15:59.968579 kernel: In-situ OAM (IOAM) with IPv6
Mar 14 00:15:59.968597 kernel: NET: Registered PF_PACKET protocol family
Mar 14 00:15:59.968614 kernel: Key type dns_resolver registered
Mar 14 00:15:59.968655 kernel: IPI shorthand broadcast: enabled
Mar 14 00:15:59.968675 kernel: sched_clock: Marking stable (555001756, 132458519)->(823906714, -136446439)
Mar 14 00:15:59.968693 kernel: registered taskstats version 1
Mar 14 00:15:59.968713 kernel: Loading compiled-in X.509 certificates
Mar 14 00:15:59.968730 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: a10808ddb7a43f470807cfbbb5be2c08229c2dec'
Mar 14 00:15:59.968747 kernel: Key type .fscrypt registered
Mar 14 00:15:59.968764 kernel: Key type fscrypt-provisioning registered
Mar 14 00:15:59.968781 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 14 00:15:59.968797 kernel: ima: Allocated hash algorithm: sha1
Mar 14 00:15:59.968818 kernel: ima: No architecture policies found
Mar 14 00:15:59.968835 kernel: clk: Disabling unused clocks
Mar 14 00:15:59.968853 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 14 00:15:59.968873 kernel: Write protecting the kernel read-only data: 36864k
Mar 14 00:15:59.968955 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 14 00:15:59.968972 kernel: Run /init as init process
Mar 14 00:15:59.968987 kernel: with arguments:
Mar 14 00:15:59.969002 kernel: /init
Mar 14 00:15:59.969017 kernel: with environment:
Mar 14 00:15:59.969032 kernel: HOME=/
Mar 14 00:15:59.969047 kernel: TERM=linux
Mar 14 00:15:59.969066 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 14 00:15:59.969090 systemd[1]: Detected virtualization amazon.
Mar 14 00:15:59.969107 systemd[1]: Detected architecture x86-64.
Mar 14 00:15:59.969124 systemd[1]: Running in initrd.
Mar 14 00:15:59.969140 systemd[1]: No hostname configured, using default hostname.
Mar 14 00:15:59.969156 systemd[1]: Hostname set to .
Mar 14 00:15:59.969174 systemd[1]: Initializing machine ID from VM UUID.
Mar 14 00:15:59.969191 systemd[1]: Queued start job for default target initrd.target.
Mar 14 00:15:59.969211 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:15:59.969228 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:15:59.969246 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 14 00:15:59.969264 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:15:59.969284 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 14 00:15:59.969305 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 14 00:15:59.969323 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 14 00:15:59.969340 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 14 00:15:59.969358 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:15:59.969377 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:15:59.969394 systemd[1]: Reached target paths.target - Path Units.
Mar 14 00:15:59.969412 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 00:15:59.969433 systemd[1]: Reached target swap.target - Swaps.
Mar 14 00:15:59.969450 systemd[1]: Reached target timers.target - Timer Units.
Mar 14 00:15:59.969466 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:15:59.969484 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:15:59.969501 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 14 00:15:59.969517 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 14 00:15:59.969534 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:15:59.969551 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:15:59.969568 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:15:59.969590 systemd[1]: Reached target sockets.target - Socket Units.
Mar 14 00:15:59.969607 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 14 00:15:59.969623 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 00:15:59.969639 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 14 00:15:59.969657 systemd[1]: Starting systemd-fsck-usr.service...
Mar 14 00:15:59.969674 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 00:15:59.969690 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 14 00:15:59.969707 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:15:59.969729 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 14 00:15:59.969747 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:15:59.969796 systemd-journald[179]: Collecting audit messages is disabled.
Mar 14 00:15:59.969835 systemd[1]: Finished systemd-fsck-usr.service.
Mar 14 00:15:59.969858 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 14 00:15:59.969892 systemd-journald[179]: Journal started
Mar 14 00:15:59.969937 systemd-journald[179]: Runtime Journal (/run/log/journal/ec2db0c805e6a600923848cc62d5c2cb) is 4.7M, max 38.2M, 33.4M free.
Mar 14 00:15:59.956032 systemd-modules-load[180]: Inserted module 'overlay'
Mar 14 00:15:59.975533 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 14 00:15:59.979893 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:15:59.992700 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:15:59.997142 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 14 00:16:00.017633 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 14 00:16:00.017681 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 14 00:16:00.017704 kernel: Bridge firewalling registered
Mar 14 00:15:59.998105 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 14 00:16:00.010156 systemd-modules-load[180]: Inserted module 'br_netfilter'
Mar 14 00:16:00.025174 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:16:00.027140 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:16:00.035180 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 14 00:16:00.040114 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:16:00.047292 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 14 00:16:00.050714 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:16:00.065944 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:16:00.068863 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:16:00.073090 dracut-cmdline[205]: dracut-dracut-053
Mar 14 00:16:00.074990 dracut-cmdline[205]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:16:00.079504 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 14 00:16:00.130176 systemd-resolved[226]: Positive Trust Anchors:
Mar 14 00:16:00.130197 systemd-resolved[226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 14 00:16:00.130260 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 14 00:16:00.138606 systemd-resolved[226]: Defaulting to hostname 'linux'.
Mar 14 00:16:00.142241 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 14 00:16:00.142996 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:16:00.171923 kernel: SCSI subsystem initialized
Mar 14 00:16:00.203911 kernel: Loading iSCSI transport class v2.0-870.
Mar 14 00:16:00.215908 kernel: iscsi: registered transport (tcp)
Mar 14 00:16:00.240116 kernel: iscsi: registered transport (qla4xxx)
Mar 14 00:16:00.240297 kernel: QLogic iSCSI HBA Driver
Mar 14 00:16:00.292637 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:16:00.302146 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 14 00:16:00.361770 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 14 00:16:00.361852 kernel: device-mapper: uevent: version 1.0.3
Mar 14 00:16:00.372258 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 14 00:16:00.481793 kernel: raid6: avx512x4 gen() 1218 MB/s
Mar 14 00:16:00.499936 kernel: raid6: avx512x2 gen() 279 MB/s
Mar 14 00:16:00.517935 kernel: raid6: avx512x1 gen() 7848 MB/s
Mar 14 00:16:00.539925 kernel: raid6: avx2x4 gen() 1902 MB/s
Mar 14 00:16:00.559929 kernel: raid6: avx2x2 gen() 5059 MB/s
Mar 14 00:16:00.590072 kernel: raid6: avx2x1 gen() 1732 MB/s
Mar 14 00:16:00.590150 kernel: raid6: using algorithm avx512x1 gen() 7848 MB/s
Mar 14 00:16:00.614946 kernel: raid6: .... xor() 11400 MB/s, rmw enabled
Mar 14 00:16:00.615023 kernel: raid6: using avx512x2 recovery algorithm
Mar 14 00:16:00.701915 kernel: xor: automatically using best checksumming function avx
Mar 14 00:16:01.226946 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 14 00:16:01.311228 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:16:01.318186 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:16:01.334430 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Mar 14 00:16:01.339583 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:16:01.348120 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 14 00:16:01.377725 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Mar 14 00:16:01.414427 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:16:01.422136 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 00:16:01.477646 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:16:01.484190 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 14 00:16:01.516160 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:16:01.518640 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:16:01.520992 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:16:01.522315 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:16:01.529158 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 14 00:16:01.561541 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:16:01.585901 kernel: cryptd: max_cpu_qlen set to 1000
Mar 14 00:16:01.603474 kernel: ena 0000:00:05.0: ENA device version: 0.10
Mar 14 00:16:01.603806 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Mar 14 00:16:01.611868 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 14 00:16:01.611956 kernel: AES CTR mode by8 optimization enabled
Mar 14 00:16:01.613346 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:16:01.613531 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:16:01.626742 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Mar 14 00:16:01.627044 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:77:97:e4:55:51
Mar 14 00:16:01.617873 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:16:01.618512 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:16:01.618730 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:16:01.619399 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:16:01.634637 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:16:01.636219 (udev-worker)[459]: Network interface NamePolicy= disabled on kernel command line.
Mar 14 00:16:01.649030 kernel: nvme nvme0: pci function 0000:00:04.0
Mar 14 00:16:01.650029 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Mar 14 00:16:01.656362 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:16:01.657413 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:16:01.671621 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Mar 14 00:16:01.676180 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:16:01.685323 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 14 00:16:01.685391 kernel: GPT:9289727 != 33554431
Mar 14 00:16:01.685420 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 14 00:16:01.685440 kernel: GPT:9289727 != 33554431
Mar 14 00:16:01.685458 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 14 00:16:01.685477 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 14 00:16:01.700952 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:16:01.706239 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:16:01.737830 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:16:01.760288 kernel: BTRFS: device fsid cd4a88d6-c21b-44c8-aac6-68c13cee1def devid 1 transid 34 /dev/nvme0n1p3 scanned by (udev-worker) (460)
Mar 14 00:16:01.776918 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (458)
Mar 14 00:16:01.791547 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Mar 14 00:16:01.849257 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 14 00:16:01.860915 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Mar 14 00:16:01.867007 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Mar 14 00:16:01.867636 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Mar 14 00:16:01.884488 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 14 00:16:01.892500 disk-uuid[633]: Primary Header is updated.
Mar 14 00:16:01.892500 disk-uuid[633]: Secondary Entries is updated.
Mar 14 00:16:01.892500 disk-uuid[633]: Secondary Header is updated.
Mar 14 00:16:01.904919 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 14 00:16:01.911908 kernel: GPT:disk_guids don't match.
Mar 14 00:16:01.911988 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 14 00:16:01.912009 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 14 00:16:01.920948 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 14 00:16:02.923581 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 14 00:16:02.924477 disk-uuid[634]: The operation has completed successfully.
Mar 14 00:16:03.064490 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 14 00:16:03.064619 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 14 00:16:03.087146 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 14 00:16:03.092778 sh[977]: Success
Mar 14 00:16:03.116915 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Mar 14 00:16:03.225606 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 14 00:16:03.234013 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 14 00:16:03.236715 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 14 00:16:03.270904 kernel: BTRFS info (device dm-0): first mount of filesystem cd4a88d6-c21b-44c8-aac6-68c13cee1def
Mar 14 00:16:03.270980 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:16:03.271002 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 14 00:16:03.274357 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 14 00:16:03.274438 kernel: BTRFS info (device dm-0): using free space tree
Mar 14 00:16:03.298999 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 14 00:16:03.315803 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 14 00:16:03.317265 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 14 00:16:03.323086 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 14 00:16:03.326086 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 14 00:16:03.352912 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:16:03.352993 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:16:03.353026 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 14 00:16:03.368937 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 14 00:16:03.385084 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:16:03.384991 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 14 00:16:03.393681 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 14 00:16:03.404345 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 14 00:16:03.442097 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:16:03.447126 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 14 00:16:03.479821 systemd-networkd[1169]: lo: Link UP
Mar 14 00:16:03.479834 systemd-networkd[1169]: lo: Gained carrier
Mar 14 00:16:03.481768 systemd-networkd[1169]: Enumeration completed
Mar 14 00:16:03.482018 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 14 00:16:03.482806 systemd[1]: Reached target network.target - Network.
Mar 14 00:16:03.485142 systemd-networkd[1169]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:16:03.485147 systemd-networkd[1169]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:16:03.492203 systemd-networkd[1169]: eth0: Link UP
Mar 14 00:16:03.492652 systemd-networkd[1169]: eth0: Gained carrier
Mar 14 00:16:03.492670 systemd-networkd[1169]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:16:03.515005 systemd-networkd[1169]: eth0: DHCPv4 address 172.31.23.179/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 14 00:16:03.746583 ignition[1107]: Ignition 2.19.0
Mar 14 00:16:03.746598 ignition[1107]: Stage: fetch-offline
Mar 14 00:16:03.746849 ignition[1107]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:16:03.746864 ignition[1107]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:16:03.747332 ignition[1107]: Ignition finished successfully
Mar 14 00:16:03.749355 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:16:03.754112 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 14 00:16:03.771191 ignition[1178]: Ignition 2.19.0
Mar 14 00:16:03.771202 ignition[1178]: Stage: fetch
Mar 14 00:16:03.771534 ignition[1178]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:16:03.771548 ignition[1178]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:16:03.773069 ignition[1178]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:16:03.781696 ignition[1178]: PUT result: OK
Mar 14 00:16:03.783977 ignition[1178]: parsed url from cmdline: ""
Mar 14 00:16:03.783988 ignition[1178]: no config URL provided
Mar 14 00:16:03.783998 ignition[1178]: reading system config file "/usr/lib/ignition/user.ign"
Mar 14 00:16:03.784014 ignition[1178]: no config at "/usr/lib/ignition/user.ign"
Mar 14 00:16:03.784249 ignition[1178]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:16:03.785067 ignition[1178]: PUT result: OK
Mar 14 00:16:03.785118 ignition[1178]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Mar 14 00:16:03.786005 ignition[1178]: GET result: OK
Mar 14 00:16:03.786128 ignition[1178]: parsing config with SHA512: 40965482aec75c9c959e714330a3d76b87bd6031d4c068bd6f7976ffb3cd1573a4647454fbf505458acdeede9106c548e226aefda9b308c924b4e986be95bc54
Mar 14 00:16:03.791391 unknown[1178]: fetched base config from "system"
Mar 14 00:16:03.791920 unknown[1178]: fetched base config from "system"
Mar 14 00:16:03.791929 unknown[1178]: fetched user config from "aws"
Mar 14 00:16:03.792828 ignition[1178]: fetch: fetch complete
Mar 14 00:16:03.792836 ignition[1178]: fetch: fetch passed
Mar 14 00:16:03.792911 ignition[1178]: Ignition finished successfully
Mar 14 00:16:03.795500 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 14 00:16:03.800340 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 14 00:16:03.817460 ignition[1184]: Ignition 2.19.0
Mar 14 00:16:03.817475 ignition[1184]: Stage: kargs
Mar 14 00:16:03.817954 ignition[1184]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:16:03.817970 ignition[1184]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:16:03.818091 ignition[1184]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:16:03.819250 ignition[1184]: PUT result: OK
Mar 14 00:16:03.822920 ignition[1184]: kargs: kargs passed
Mar 14 00:16:03.823000 ignition[1184]: Ignition finished successfully
Mar 14 00:16:03.824548 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 14 00:16:03.833173 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 14 00:16:03.848438 ignition[1190]: Ignition 2.19.0
Mar 14 00:16:03.848451 ignition[1190]: Stage: disks
Mar 14 00:16:03.848946 ignition[1190]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:16:03.848960 ignition[1190]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:16:03.849084 ignition[1190]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:16:03.850306 ignition[1190]: PUT result: OK
Mar 14 00:16:03.858807 ignition[1190]: disks: disks passed
Mar 14 00:16:03.858970 ignition[1190]: Ignition finished successfully
Mar 14 00:16:03.860839 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 14 00:16:03.862179 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 14 00:16:03.862998 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 14 00:16:03.863375 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 00:16:03.864029 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 14 00:16:03.864900 systemd[1]: Reached target basic.target - Basic System.
Mar 14 00:16:03.870110 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 14 00:16:03.909985 systemd-fsck[1198]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 14 00:16:03.913619 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 14 00:16:03.919028 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 14 00:16:04.032905 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 08e1a4ba-bbe3-4d29-aaf8-5eb22e9a9bf3 r/w with ordered data mode. Quota mode: none.
Mar 14 00:16:04.033256 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 14 00:16:04.034455 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 14 00:16:04.047031 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:16:04.052024 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 14 00:16:04.053305 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 14 00:16:04.053377 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 14 00:16:04.053413 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:16:04.066554 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 14 00:16:04.070926 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1217)
Mar 14 00:16:04.075149 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 14 00:16:04.080226 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:16:04.080255 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:16:04.080267 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 14 00:16:04.091899 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 14 00:16:04.093865 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:16:04.433960 initrd-setup-root[1241]: cut: /sysroot/etc/passwd: No such file or directory
Mar 14 00:16:04.439803 initrd-setup-root[1248]: cut: /sysroot/etc/group: No such file or directory
Mar 14 00:16:04.445297 initrd-setup-root[1255]: cut: /sysroot/etc/shadow: No such file or directory
Mar 14 00:16:04.450643 initrd-setup-root[1262]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 14 00:16:04.617160 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 14 00:16:04.622012 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 14 00:16:04.625059 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 14 00:16:04.635235 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 14 00:16:04.637679 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:16:04.664722 ignition[1330]: INFO : Ignition 2.19.0
Mar 14 00:16:04.666488 ignition[1330]: INFO : Stage: mount
Mar 14 00:16:04.667902 ignition[1330]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:16:04.667902 ignition[1330]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:16:04.669624 ignition[1330]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:16:04.670794 ignition[1330]: INFO : PUT result: OK
Mar 14 00:16:04.674595 ignition[1330]: INFO : mount: mount passed
Mar 14 00:16:04.675673 ignition[1330]: INFO : Ignition finished successfully
Mar 14 00:16:04.679115 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 14 00:16:04.679814 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 14 00:16:04.686089 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 14 00:16:04.694818 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:16:04.719904 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1342)
Mar 14 00:16:04.724203 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:16:04.724283 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:16:04.724304 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 14 00:16:04.730913 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 14 00:16:04.733256 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:16:04.755190 ignition[1359]: INFO : Ignition 2.19.0
Mar 14 00:16:04.755924 ignition[1359]: INFO : Stage: files
Mar 14 00:16:04.756649 ignition[1359]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:16:04.756649 ignition[1359]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:16:04.756649 ignition[1359]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:16:04.758058 ignition[1359]: INFO : PUT result: OK
Mar 14 00:16:04.760536 ignition[1359]: DEBUG : files: compiled without relabeling support, skipping
Mar 14 00:16:04.761297 ignition[1359]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 14 00:16:04.761297 ignition[1359]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 14 00:16:04.784313 ignition[1359]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 14 00:16:04.785378 ignition[1359]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 14 00:16:04.785378 ignition[1359]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 14 00:16:04.785236 unknown[1359]: wrote ssh authorized keys file for user: core
Mar 14 00:16:04.787931 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 14 00:16:04.787931 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 14 00:16:04.902794 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 14 00:16:05.103044 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 14 00:16:05.103044 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 14 00:16:05.105725 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 14 00:16:05.105725 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:16:05.105725 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:16:05.105725 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:16:05.105725 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:16:05.105725 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:16:05.105725 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:16:05.105725 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:16:05.105725 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:16:05.105725 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 14 00:16:05.105725 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 14 00:16:05.105725 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 14 00:16:05.105725 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Mar 14 00:16:05.430076 systemd-networkd[1169]: eth0: Gained IPv6LL
Mar 14 00:16:05.635622 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 14 00:16:07.813085 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 14 00:16:07.813085 ignition[1359]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 14 00:16:07.815665 ignition[1359]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:16:07.815665 ignition[1359]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:16:07.815665 ignition[1359]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 14 00:16:07.815665 ignition[1359]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 14 00:16:07.815665 ignition[1359]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 14 00:16:07.815665 ignition[1359]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:16:07.815665 ignition[1359]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:16:07.815665 ignition[1359]: INFO : files: files passed
Mar 14 00:16:07.815665 ignition[1359]: INFO : Ignition finished successfully
Mar 14 00:16:07.817232 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 14 00:16:07.824138 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 14 00:16:07.829113 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 14 00:16:07.832149 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 14 00:16:07.832976 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 14 00:16:07.854102 initrd-setup-root-after-ignition[1387]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 14 00:16:07.854102 initrd-setup-root-after-ignition[1387]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 14 00:16:07.857623 initrd-setup-root-after-ignition[1391]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 14 00:16:07.859269 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 14 00:16:07.860606 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 14 00:16:07.872176 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 14 00:16:07.896912 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 14 00:16:07.897023 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 14 00:16:07.898171 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 14 00:16:07.898989 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 14 00:16:07.900291 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 14 00:16:07.905126 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 14 00:16:07.919672 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 14 00:16:07.926065 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
Mar 14 00:16:07.937663 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 14 00:16:07.938396 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 14 00:16:07.939370 systemd[1]: Stopped target timers.target - Timer Units. Mar 14 00:16:07.940340 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 14 00:16:07.940519 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 14 00:16:07.941697 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 14 00:16:07.942552 systemd[1]: Stopped target basic.target - Basic System. Mar 14 00:16:07.943347 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 14 00:16:07.944189 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 14 00:16:07.944999 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 14 00:16:07.945767 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 14 00:16:07.946543 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 14 00:16:07.947333 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 14 00:16:07.948571 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 14 00:16:07.949328 systemd[1]: Stopped target swap.target - Swaps. Mar 14 00:16:07.950047 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 14 00:16:07.950224 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 14 00:16:07.951310 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 14 00:16:07.952157 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 14 00:16:07.952864 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 14 00:16:07.953025 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Mar 14 00:16:07.953713 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 14 00:16:07.953946 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 14 00:16:07.955230 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 14 00:16:07.955407 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 14 00:16:07.956191 systemd[1]: ignition-files.service: Deactivated successfully. Mar 14 00:16:07.956339 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 14 00:16:07.969209 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 14 00:16:07.973225 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 14 00:16:07.973844 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 14 00:16:07.974119 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 14 00:16:07.977140 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 14 00:16:07.977370 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 14 00:16:07.991759 ignition[1412]: INFO : Ignition 2.19.0 Mar 14 00:16:07.994863 ignition[1412]: INFO : Stage: umount Mar 14 00:16:07.994863 ignition[1412]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 14 00:16:07.994863 ignition[1412]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 14 00:16:07.994863 ignition[1412]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 14 00:16:07.992680 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 14 00:16:07.998334 ignition[1412]: INFO : PUT result: OK Mar 14 00:16:07.992817 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Mar 14 00:16:08.002911 ignition[1412]: INFO : umount: umount passed Mar 14 00:16:08.002911 ignition[1412]: INFO : Ignition finished successfully Mar 14 00:16:08.002874 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 14 00:16:08.004833 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 14 00:16:08.005629 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 14 00:16:08.005715 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 14 00:16:08.007289 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 14 00:16:08.007352 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 14 00:16:08.008022 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 14 00:16:08.008233 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 14 00:16:08.010459 systemd[1]: Stopped target network.target - Network. Mar 14 00:16:08.010922 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 14 00:16:08.010984 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 14 00:16:08.011463 systemd[1]: Stopped target paths.target - Path Units. Mar 14 00:16:08.011953 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 14 00:16:08.014459 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 14 00:16:08.015041 systemd[1]: Stopped target slices.target - Slice Units. Mar 14 00:16:08.015935 systemd[1]: Stopped target sockets.target - Socket Units. Mar 14 00:16:08.016981 systemd[1]: iscsid.socket: Deactivated successfully. Mar 14 00:16:08.017035 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 14 00:16:08.018459 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 14 00:16:08.018509 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 14 00:16:08.018985 systemd[1]: ignition-setup.service: Deactivated successfully. 
Mar 14 00:16:08.019050 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 14 00:16:08.019681 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 14 00:16:08.019739 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 14 00:16:08.020609 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 14 00:16:08.021219 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 14 00:16:08.023789 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 14 00:16:08.026514 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 14 00:16:08.026638 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 14 00:16:08.026690 systemd-networkd[1169]: eth0: DHCPv6 lease lost Mar 14 00:16:08.028278 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 14 00:16:08.028364 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 14 00:16:08.030328 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 14 00:16:08.030444 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 14 00:16:08.031012 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 14 00:16:08.031142 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 14 00:16:08.035799 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 14 00:16:08.035895 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 14 00:16:08.040992 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 14 00:16:08.041508 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 14 00:16:08.041584 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 14 00:16:08.042151 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 14 00:16:08.042208 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Mar 14 00:16:08.045250 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 14 00:16:08.045311 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 14 00:16:08.045909 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 14 00:16:08.045967 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 14 00:16:08.046650 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 14 00:16:08.056604 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 14 00:16:08.056796 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 14 00:16:08.060164 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 14 00:16:08.060242 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 14 00:16:08.061070 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 14 00:16:08.061115 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 14 00:16:08.061741 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 14 00:16:08.061803 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 14 00:16:08.062868 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 14 00:16:08.062943 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 14 00:16:08.064153 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 14 00:16:08.064214 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 14 00:16:08.075183 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 14 00:16:08.075849 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 14 00:16:08.075954 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Mar 14 00:16:08.076797 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 14 00:16:08.076864 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 14 00:16:08.077582 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 14 00:16:08.077641 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 14 00:16:08.079977 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 14 00:16:08.080042 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:16:08.081656 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 14 00:16:08.081789 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 14 00:16:08.084825 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 14 00:16:08.085801 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 14 00:16:08.087532 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 14 00:16:08.092201 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 14 00:16:08.109849 systemd[1]: Switching root. Mar 14 00:16:08.144202 systemd-journald[179]: Journal stopped Mar 14 00:16:09.622861 systemd-journald[179]: Received SIGTERM from PID 1 (systemd). 
Mar 14 00:16:09.623022 kernel: SELinux: policy capability network_peer_controls=1 Mar 14 00:16:09.623085 kernel: SELinux: policy capability open_perms=1 Mar 14 00:16:09.623110 kernel: SELinux: policy capability extended_socket_class=1 Mar 14 00:16:09.623128 kernel: SELinux: policy capability always_check_network=0 Mar 14 00:16:09.623165 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 14 00:16:09.623186 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 14 00:16:09.623205 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 14 00:16:09.623225 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 14 00:16:09.623244 kernel: audit: type=1403 audit(1773447368.489:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 14 00:16:09.623271 systemd[1]: Successfully loaded SELinux policy in 43.073ms. Mar 14 00:16:09.623301 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.795ms. Mar 14 00:16:09.623324 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 14 00:16:09.623346 systemd[1]: Detected virtualization amazon. Mar 14 00:16:09.623367 systemd[1]: Detected architecture x86-64. Mar 14 00:16:09.623388 systemd[1]: Detected first boot. Mar 14 00:16:09.623410 systemd[1]: Initializing machine ID from VM UUID. Mar 14 00:16:09.623432 zram_generator::config[1456]: No configuration found. Mar 14 00:16:09.623455 systemd[1]: Populated /etc with preset unit settings. Mar 14 00:16:09.623486 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 14 00:16:09.623507 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Mar 14 00:16:09.623529 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 14 00:16:09.623553 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 14 00:16:09.623575 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 14 00:16:09.623596 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 14 00:16:09.623619 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 14 00:16:09.623642 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 14 00:16:09.623666 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 14 00:16:09.623687 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 14 00:16:09.623707 systemd[1]: Created slice user.slice - User and Session Slice. Mar 14 00:16:09.623727 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 14 00:16:09.623747 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 14 00:16:09.623766 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 14 00:16:09.623786 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 14 00:16:09.623807 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 14 00:16:09.623827 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 14 00:16:09.623851 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 14 00:16:09.623871 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 14 00:16:09.623906 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Mar 14 00:16:09.623927 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 14 00:16:09.623947 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 14 00:16:09.623967 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 14 00:16:09.623986 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 14 00:16:09.624006 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 14 00:16:09.624038 systemd[1]: Reached target slices.target - Slice Units. Mar 14 00:16:09.624057 systemd[1]: Reached target swap.target - Swaps. Mar 14 00:16:09.624076 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 14 00:16:09.624094 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 14 00:16:09.624115 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 14 00:16:09.624135 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 14 00:16:09.624154 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 14 00:16:09.624174 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 14 00:16:09.624194 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 14 00:16:09.624218 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 14 00:16:09.624237 systemd[1]: Mounting media.mount - External Media Directory... Mar 14 00:16:09.624258 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:16:09.624278 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 14 00:16:09.624300 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 14 00:16:09.624319 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Mar 14 00:16:09.624339 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 14 00:16:09.624359 systemd[1]: Reached target machines.target - Containers. Mar 14 00:16:09.624382 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 14 00:16:09.624402 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:16:09.624421 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 14 00:16:09.624446 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 14 00:16:09.624466 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 14 00:16:09.624485 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 14 00:16:09.624505 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 14 00:16:09.624524 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 14 00:16:09.624544 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 14 00:16:09.624567 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 14 00:16:09.624587 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 14 00:16:09.624608 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 14 00:16:09.624627 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 14 00:16:09.624647 systemd[1]: Stopped systemd-fsck-usr.service. Mar 14 00:16:09.624666 systemd[1]: Starting systemd-journald.service - Journal Service... 
Mar 14 00:16:09.624686 kernel: loop: module loaded Mar 14 00:16:09.624705 kernel: fuse: init (API version 7.39) Mar 14 00:16:09.624724 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 14 00:16:09.624747 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 14 00:16:09.624766 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 14 00:16:09.624785 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 14 00:16:09.624803 systemd[1]: verity-setup.service: Deactivated successfully. Mar 14 00:16:09.624820 systemd[1]: Stopped verity-setup.service. Mar 14 00:16:09.624839 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:16:09.624856 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 14 00:16:09.626917 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 14 00:16:09.626965 systemd[1]: Mounted media.mount - External Media Directory. Mar 14 00:16:09.627000 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 14 00:16:09.627021 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 14 00:16:09.627043 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 14 00:16:09.627066 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 14 00:16:09.627089 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 14 00:16:09.627151 systemd-journald[1539]: Collecting audit messages is disabled. Mar 14 00:16:09.627190 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 14 00:16:09.627211 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 14 00:16:09.627231 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Mar 14 00:16:09.627254 systemd-journald[1539]: Journal started Mar 14 00:16:09.627300 systemd-journald[1539]: Runtime Journal (/run/log/journal/ec2db0c805e6a600923848cc62d5c2cb) is 4.7M, max 38.2M, 33.4M free. Mar 14 00:16:09.284734 systemd[1]: Queued start job for default target multi-user.target. Mar 14 00:16:09.301911 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Mar 14 00:16:09.302329 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 14 00:16:09.636294 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 14 00:16:09.636359 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 14 00:16:09.636387 systemd[1]: Started systemd-journald.service - Journal Service. Mar 14 00:16:09.638610 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 14 00:16:09.638820 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 14 00:16:09.640909 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 14 00:16:09.641094 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 14 00:16:09.642308 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 14 00:16:09.647798 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 14 00:16:09.670012 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 14 00:16:09.672386 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 14 00:16:09.686473 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 14 00:16:09.699009 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 14 00:16:09.699703 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Mar 14 00:16:09.699751 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 14 00:16:09.703108 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 14 00:16:09.707021 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 14 00:16:09.710224 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 14 00:16:09.711301 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 14 00:16:09.715967 kernel: ACPI: bus type drm_connector registered Mar 14 00:16:09.718165 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 14 00:16:09.725096 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 14 00:16:09.725833 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 14 00:16:09.732285 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 14 00:16:09.733051 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 14 00:16:09.735096 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 14 00:16:09.741161 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 14 00:16:09.745080 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 14 00:16:09.749177 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 14 00:16:09.750232 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 14 00:16:09.750439 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Mar 14 00:16:09.751714 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 14 00:16:09.753473 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 14 00:16:09.758181 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 14 00:16:09.778027 systemd-journald[1539]: Time spent on flushing to /var/log/journal/ec2db0c805e6a600923848cc62d5c2cb is 148.336ms for 987 entries. Mar 14 00:16:09.778027 systemd-journald[1539]: System Journal (/var/log/journal/ec2db0c805e6a600923848cc62d5c2cb) is 8.0M, max 195.6M, 187.6M free. Mar 14 00:16:09.944283 systemd-journald[1539]: Received client request to flush runtime journal. Mar 14 00:16:09.944357 kernel: loop0: detected capacity change from 0 to 140768 Mar 14 00:16:09.812946 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 14 00:16:09.816234 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 14 00:16:09.828642 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 14 00:16:09.856813 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 14 00:16:09.905269 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 14 00:16:09.919239 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 14 00:16:09.941924 systemd-tmpfiles[1585]: ACLs are not supported, ignoring. Mar 14 00:16:09.941948 systemd-tmpfiles[1585]: ACLs are not supported, ignoring. Mar 14 00:16:09.951217 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 14 00:16:09.955699 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 14 00:16:09.957404 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Mar 14 00:16:09.959632 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 14 00:16:09.980230 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 14 00:16:09.981668 udevadm[1598]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 14 00:16:10.012853 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 14 00:16:10.042336 kernel: loop1: detected capacity change from 0 to 61336 Mar 14 00:16:10.059458 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 14 00:16:10.068219 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 14 00:16:10.102651 systemd-tmpfiles[1608]: ACLs are not supported, ignoring. Mar 14 00:16:10.103421 systemd-tmpfiles[1608]: ACLs are not supported, ignoring. Mar 14 00:16:10.126169 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 14 00:16:10.179920 kernel: loop2: detected capacity change from 0 to 142488 Mar 14 00:16:10.303267 kernel: loop3: detected capacity change from 0 to 217752 Mar 14 00:16:10.660947 kernel: loop4: detected capacity change from 0 to 140768 Mar 14 00:16:10.699915 kernel: loop5: detected capacity change from 0 to 61336 Mar 14 00:16:10.718009 kernel: loop6: detected capacity change from 0 to 142488 Mar 14 00:16:10.728242 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 14 00:16:10.735237 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 14 00:16:10.748909 kernel: loop7: detected capacity change from 0 to 217752 Mar 14 00:16:10.766807 systemd-udevd[1617]: Using default interface naming scheme 'v255'. Mar 14 00:16:10.771711 (sd-merge)[1615]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. 
Mar 14 00:16:10.772573 (sd-merge)[1615]: Merged extensions into '/usr'. Mar 14 00:16:10.781182 systemd[1]: Reloading requested from client PID 1584 ('systemd-sysext') (unit systemd-sysext.service)... Mar 14 00:16:10.781204 systemd[1]: Reloading... Mar 14 00:16:10.948914 zram_generator::config[1658]: No configuration found. Mar 14 00:16:10.998357 (udev-worker)[1619]: Network interface NamePolicy= disabled on kernel command line. Mar 14 00:16:11.033827 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Mar 14 00:16:11.051924 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Mar 14 00:16:11.059290 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Mar 14 00:16:11.070039 kernel: ACPI: button: Power Button [PWRF] Mar 14 00:16:11.070116 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Mar 14 00:16:11.083913 kernel: ACPI: button: Sleep Button [SLPF] Mar 14 00:16:11.219933 kernel: mousedev: PS/2 mouse device common for all mice Mar 14 00:16:11.226124 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1627) Mar 14 00:16:11.273079 ldconfig[1579]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 14 00:16:11.286073 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:16:11.416294 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 14 00:16:11.416531 systemd[1]: Reloading finished in 634 ms. Mar 14 00:16:11.449720 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 14 00:16:11.450534 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Mar 14 00:16:11.451307 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 14 00:16:11.466045 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 14 00:16:11.484971 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Mar 14 00:16:11.490134 systemd[1]: Starting ensure-sysext.service... Mar 14 00:16:11.493109 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 14 00:16:11.501075 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 14 00:16:11.508454 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 14 00:16:11.513518 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 14 00:16:11.517537 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 00:16:11.537230 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 14 00:16:11.538081 systemd[1]: Reloading requested from client PID 1804 ('systemctl') (unit ensure-sysext.service)... Mar 14 00:16:11.538093 systemd[1]: Reloading... Mar 14 00:16:11.586470 lvm[1805]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 14 00:16:11.680049 systemd-tmpfiles[1808]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 14 00:16:11.680579 systemd-tmpfiles[1808]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 14 00:16:11.684406 systemd-tmpfiles[1808]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 14 00:16:11.684853 systemd-tmpfiles[1808]: ACLs are not supported, ignoring. Mar 14 00:16:11.685641 systemd-tmpfiles[1808]: ACLs are not supported, ignoring. 
Mar 14 00:16:11.693726 zram_generator::config[1844]: No configuration found. Mar 14 00:16:11.702974 systemd-tmpfiles[1808]: Detected autofs mount point /boot during canonicalization of boot. Mar 14 00:16:11.703153 systemd-tmpfiles[1808]: Skipping /boot Mar 14 00:16:11.732754 systemd-tmpfiles[1808]: Detected autofs mount point /boot during canonicalization of boot. Mar 14 00:16:11.733458 systemd-tmpfiles[1808]: Skipping /boot Mar 14 00:16:11.801246 systemd-networkd[1807]: lo: Link UP Mar 14 00:16:11.801260 systemd-networkd[1807]: lo: Gained carrier Mar 14 00:16:11.804754 systemd-networkd[1807]: Enumeration completed Mar 14 00:16:11.806467 systemd-networkd[1807]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:16:11.806477 systemd-networkd[1807]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 14 00:16:11.809012 systemd-networkd[1807]: eth0: Link UP Mar 14 00:16:11.809277 systemd-networkd[1807]: eth0: Gained carrier Mar 14 00:16:11.809301 systemd-networkd[1807]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:16:11.818951 systemd-networkd[1807]: eth0: DHCPv4 address 172.31.23.179/20, gateway 172.31.16.1 acquired from 172.31.16.1 Mar 14 00:16:11.871973 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:16:11.949187 systemd[1]: Reloading finished in 410 ms. Mar 14 00:16:11.968244 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 14 00:16:11.969085 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 14 00:16:11.980694 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
Mar 14 00:16:11.981716 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 14 00:16:11.982876 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 14 00:16:11.984179 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:16:11.993791 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 14 00:16:11.999258 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 14 00:16:12.013024 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 14 00:16:12.017056 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 14 00:16:12.029330 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 14 00:16:12.036265 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 14 00:16:12.039629 lvm[1906]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 14 00:16:12.048932 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 14 00:16:12.058255 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 14 00:16:12.065171 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:16:12.065481 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:16:12.077655 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 14 00:16:12.087332 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 14 00:16:12.091215 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Mar 14 00:16:12.092761 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 14 00:16:12.092997 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:16:12.095508 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 14 00:16:12.108739 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:16:12.111122 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:16:12.111402 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 14 00:16:12.111540 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:16:12.121543 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:16:12.123974 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:16:12.125793 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 14 00:16:12.128175 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 14 00:16:12.128455 systemd[1]: Reached target time-set.target - System Time Set. Mar 14 00:16:12.129483 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:16:12.130694 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Mar 14 00:16:12.132704 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 14 00:16:12.140360 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 14 00:16:12.140955 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 14 00:16:12.142762 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 14 00:16:12.144843 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 14 00:16:12.146275 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 14 00:16:12.146481 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 14 00:16:12.155130 systemd[1]: Finished ensure-sysext.service. Mar 14 00:16:12.174622 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 14 00:16:12.175322 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 14 00:16:12.177203 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 14 00:16:12.189478 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 14 00:16:12.192130 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 14 00:16:12.206079 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 14 00:16:12.208304 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 14 00:16:12.215748 augenrules[1940]: No rules Mar 14 00:16:12.218677 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 14 00:16:12.223281 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Mar 14 00:16:12.233507 systemd-resolved[1911]: Positive Trust Anchors: Mar 14 00:16:12.233528 systemd-resolved[1911]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 14 00:16:12.233576 systemd-resolved[1911]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 14 00:16:12.238671 systemd-resolved[1911]: Defaulting to hostname 'linux'. Mar 14 00:16:12.240670 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 14 00:16:12.241242 systemd[1]: Reached target network.target - Network. Mar 14 00:16:12.241678 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 14 00:16:12.242093 systemd[1]: Reached target sysinit.target - System Initialization. Mar 14 00:16:12.242575 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 14 00:16:12.243034 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 14 00:16:12.243574 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 14 00:16:12.244146 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 14 00:16:12.244507 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 14 00:16:12.244849 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Mar 14 00:16:12.244906 systemd[1]: Reached target paths.target - Path Units. Mar 14 00:16:12.245267 systemd[1]: Reached target timers.target - Timer Units. Mar 14 00:16:12.246567 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 14 00:16:12.248514 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 14 00:16:12.258266 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 14 00:16:12.259409 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 14 00:16:12.259979 systemd[1]: Reached target sockets.target - Socket Units. Mar 14 00:16:12.260511 systemd[1]: Reached target basic.target - Basic System. Mar 14 00:16:12.260959 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 14 00:16:12.261001 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 14 00:16:12.262108 systemd[1]: Starting containerd.service - containerd container runtime... Mar 14 00:16:12.267077 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 14 00:16:12.271445 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 14 00:16:12.273325 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 14 00:16:12.277074 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 14 00:16:12.277983 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 14 00:16:12.289096 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 14 00:16:12.292835 systemd[1]: Started ntpd.service - Network Time Service. Mar 14 00:16:12.300071 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 14 00:16:12.309630 systemd[1]: Starting setup-oem.service - Setup OEM... 
Mar 14 00:16:12.324106 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 14 00:16:12.335093 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 14 00:16:12.339521 jq[1952]: false Mar 14 00:16:12.342947 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 14 00:16:12.343772 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 14 00:16:12.344514 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 14 00:16:12.351805 systemd[1]: Starting update-engine.service - Update Engine... Mar 14 00:16:12.385816 extend-filesystems[1953]: Found loop4 Mar 14 00:16:12.385816 extend-filesystems[1953]: Found loop5 Mar 14 00:16:12.385816 extend-filesystems[1953]: Found loop6 Mar 14 00:16:12.385816 extend-filesystems[1953]: Found loop7 Mar 14 00:16:12.385816 extend-filesystems[1953]: Found nvme0n1 Mar 14 00:16:12.385816 extend-filesystems[1953]: Found nvme0n1p1 Mar 14 00:16:12.385816 extend-filesystems[1953]: Found nvme0n1p2 Mar 14 00:16:12.385816 extend-filesystems[1953]: Found nvme0n1p3 Mar 14 00:16:12.385816 extend-filesystems[1953]: Found usr Mar 14 00:16:12.385816 extend-filesystems[1953]: Found nvme0n1p4 Mar 14 00:16:12.385816 extend-filesystems[1953]: Found nvme0n1p6 Mar 14 00:16:12.385816 extend-filesystems[1953]: Found nvme0n1p7 Mar 14 00:16:12.385816 extend-filesystems[1953]: Found nvme0n1p9 Mar 14 00:16:12.385816 extend-filesystems[1953]: Checking size of /dev/nvme0n1p9 Mar 14 00:16:12.373353 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Mar 14 00:16:12.440294 dbus-daemon[1951]: [system] SELinux support is enabled Mar 14 00:16:12.462357 ntpd[1955]: 14 Mar 00:16:12 ntpd[1955]: ntpd 4.2.8p17@1.4004-o Fri Mar 13 21:53:10 UTC 2026 (1): Starting Mar 14 00:16:12.462357 ntpd[1955]: 14 Mar 00:16:12 ntpd[1955]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 14 00:16:12.462357 ntpd[1955]: 14 Mar 00:16:12 ntpd[1955]: ---------------------------------------------------- Mar 14 00:16:12.462357 ntpd[1955]: 14 Mar 00:16:12 ntpd[1955]: ntp-4 is maintained by Network Time Foundation, Mar 14 00:16:12.462357 ntpd[1955]: 14 Mar 00:16:12 ntpd[1955]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 14 00:16:12.462357 ntpd[1955]: 14 Mar 00:16:12 ntpd[1955]: corporation. Support and training for ntp-4 are Mar 14 00:16:12.462357 ntpd[1955]: 14 Mar 00:16:12 ntpd[1955]: available at https://www.nwtime.org/support Mar 14 00:16:12.462357 ntpd[1955]: 14 Mar 00:16:12 ntpd[1955]: ---------------------------------------------------- Mar 14 00:16:12.379344 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 14 00:16:12.459631 ntpd[1955]: ntpd 4.2.8p17@1.4004-o Fri Mar 13 21:53:10 UTC 2026 (1): Starting Mar 14 00:16:12.379572 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 14 00:16:12.459657 ntpd[1955]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 14 00:16:12.473146 ntpd[1955]: 14 Mar 00:16:12 ntpd[1955]: proto: precision = 0.066 usec (-24) Mar 14 00:16:12.473146 ntpd[1955]: 14 Mar 00:16:12 ntpd[1955]: basedate set to 2026-03-01 Mar 14 00:16:12.473146 ntpd[1955]: 14 Mar 00:16:12 ntpd[1955]: gps base set to 2026-03-01 (week 2408) Mar 14 00:16:12.473265 jq[1964]: true Mar 14 00:16:12.404445 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Mar 14 00:16:12.459669 ntpd[1955]: ---------------------------------------------------- Mar 14 00:16:12.405077 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 14 00:16:12.459679 ntpd[1955]: ntp-4 is maintained by Network Time Foundation, Mar 14 00:16:12.477080 jq[1982]: true Mar 14 00:16:12.448325 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 14 00:16:12.459689 ntpd[1955]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 14 00:16:12.461511 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 14 00:16:12.459700 ntpd[1955]: corporation. Support and training for ntp-4 are Mar 14 00:16:12.461553 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 14 00:16:12.459711 ntpd[1955]: available at https://www.nwtime.org/support Mar 14 00:16:12.462057 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 14 00:16:12.459722 ntpd[1955]: ---------------------------------------------------- Mar 14 00:16:12.462084 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Mar 14 00:16:12.489205 ntpd[1955]: 14 Mar 00:16:12 ntpd[1955]: Listen and drop on 0 v6wildcard [::]:123 Mar 14 00:16:12.489205 ntpd[1955]: 14 Mar 00:16:12 ntpd[1955]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 14 00:16:12.489205 ntpd[1955]: 14 Mar 00:16:12 ntpd[1955]: Listen normally on 2 lo 127.0.0.1:123 Mar 14 00:16:12.489205 ntpd[1955]: 14 Mar 00:16:12 ntpd[1955]: Listen normally on 3 eth0 172.31.23.179:123 Mar 14 00:16:12.489205 ntpd[1955]: 14 Mar 00:16:12 ntpd[1955]: Listen normally on 4 lo [::1]:123 Mar 14 00:16:12.489205 ntpd[1955]: 14 Mar 00:16:12 ntpd[1955]: bind(21) AF_INET6 fe80::477:97ff:fee4:5551%2#123 flags 0x11 failed: Cannot assign requested address Mar 14 00:16:12.489205 ntpd[1955]: 14 Mar 00:16:12 ntpd[1955]: unable to create socket on eth0 (5) for fe80::477:97ff:fee4:5551%2#123 Mar 14 00:16:12.489205 ntpd[1955]: 14 Mar 00:16:12 ntpd[1955]: failed to init interface for address fe80::477:97ff:fee4:5551%2 Mar 14 00:16:12.489205 ntpd[1955]: 14 Mar 00:16:12 ntpd[1955]: Listening on routing socket on fd #21 for interface updates Mar 14 00:16:12.465585 ntpd[1955]: proto: precision = 0.066 usec (-24) Mar 14 00:16:12.467346 ntpd[1955]: basedate set to 2026-03-01 Mar 14 00:16:12.467366 ntpd[1955]: gps base set to 2026-03-01 (week 2408) Mar 14 00:16:12.471830 dbus-daemon[1951]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1807 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 14 00:16:12.481698 dbus-daemon[1951]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 14 00:16:12.483337 ntpd[1955]: Listen and drop on 0 v6wildcard [::]:123 Mar 14 00:16:12.483400 ntpd[1955]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 14 00:16:12.486084 ntpd[1955]: Listen normally on 2 lo 127.0.0.1:123 Mar 14 00:16:12.486129 ntpd[1955]: Listen normally on 3 eth0 172.31.23.179:123 Mar 14 00:16:12.486170 
ntpd[1955]: Listen normally on 4 lo [::1]:123 Mar 14 00:16:12.486221 ntpd[1955]: bind(21) AF_INET6 fe80::477:97ff:fee4:5551%2#123 flags 0x11 failed: Cannot assign requested address Mar 14 00:16:12.486243 ntpd[1955]: unable to create socket on eth0 (5) for fe80::477:97ff:fee4:5551%2#123 Mar 14 00:16:12.486257 ntpd[1955]: failed to init interface for address fe80::477:97ff:fee4:5551%2 Mar 14 00:16:12.486302 ntpd[1955]: Listening on routing socket on fd #21 for interface updates Mar 14 00:16:12.494175 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Mar 14 00:16:12.502275 ntpd[1955]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 14 00:16:12.504026 ntpd[1955]: 14 Mar 00:16:12 ntpd[1955]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 14 00:16:12.504026 ntpd[1955]: 14 Mar 00:16:12 ntpd[1955]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 14 00:16:12.502313 ntpd[1955]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 14 00:16:12.507903 extend-filesystems[1953]: Resized partition /dev/nvme0n1p9 Mar 14 00:16:12.518056 tar[1969]: linux-amd64/LICENSE Mar 14 00:16:12.518056 tar[1969]: linux-amd64/helm Mar 14 00:16:12.518445 extend-filesystems[2000]: resize2fs 1.47.1 (20-May-2024) Mar 14 00:16:12.530812 systemd[1]: motdgen.service: Deactivated successfully. Mar 14 00:16:12.532118 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 14 00:16:12.551189 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Mar 14 00:16:12.572890 (ntainerd)[1999]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 14 00:16:12.584959 systemd[1]: Finished setup-oem.service - Setup OEM. 
Mar 14 00:16:12.596906 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1650) Mar 14 00:16:12.596983 update_engine[1963]: I20260314 00:16:12.595104 1963 main.cc:92] Flatcar Update Engine starting Mar 14 00:16:12.602932 systemd[1]: Started update-engine.service - Update Engine. Mar 14 00:16:12.614630 update_engine[1963]: I20260314 00:16:12.607314 1963 update_check_scheduler.cc:74] Next update check in 3m57s Mar 14 00:16:12.613617 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 14 00:16:12.720908 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Mar 14 00:16:12.733605 coreos-metadata[1950]: Mar 14 00:16:12.733 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 14 00:16:12.740699 coreos-metadata[1950]: Mar 14 00:16:12.734 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Mar 14 00:16:12.740699 coreos-metadata[1950]: Mar 14 00:16:12.734 INFO Fetch successful Mar 14 00:16:12.740699 coreos-metadata[1950]: Mar 14 00:16:12.734 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Mar 14 00:16:12.740699 coreos-metadata[1950]: Mar 14 00:16:12.735 INFO Fetch successful Mar 14 00:16:12.740699 coreos-metadata[1950]: Mar 14 00:16:12.735 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Mar 14 00:16:12.740699 coreos-metadata[1950]: Mar 14 00:16:12.738 INFO Fetch successful Mar 14 00:16:12.740699 coreos-metadata[1950]: Mar 14 00:16:12.738 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Mar 14 00:16:12.740699 coreos-metadata[1950]: Mar 14 00:16:12.739 INFO Fetch successful Mar 14 00:16:12.740699 coreos-metadata[1950]: Mar 14 00:16:12.739 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Mar 14 00:16:12.740699 coreos-metadata[1950]: Mar 14 00:16:12.740 INFO Fetch failed with 404: resource not found Mar 14 00:16:12.740699 
coreos-metadata[1950]: Mar 14 00:16:12.740 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Mar 14 00:16:12.741219 coreos-metadata[1950]: Mar 14 00:16:12.740 INFO Fetch successful Mar 14 00:16:12.741219 coreos-metadata[1950]: Mar 14 00:16:12.740 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Mar 14 00:16:12.741314 coreos-metadata[1950]: Mar 14 00:16:12.741 INFO Fetch successful Mar 14 00:16:12.741314 coreos-metadata[1950]: Mar 14 00:16:12.741 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Mar 14 00:16:12.742971 coreos-metadata[1950]: Mar 14 00:16:12.741 INFO Fetch successful Mar 14 00:16:12.742971 coreos-metadata[1950]: Mar 14 00:16:12.741 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Mar 14 00:16:12.743289 coreos-metadata[1950]: Mar 14 00:16:12.743 INFO Fetch successful Mar 14 00:16:12.743289 coreos-metadata[1950]: Mar 14 00:16:12.743 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Mar 14 00:16:12.749765 coreos-metadata[1950]: Mar 14 00:16:12.744 INFO Fetch successful Mar 14 00:16:12.747321 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 14 00:16:12.749933 extend-filesystems[2000]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Mar 14 00:16:12.749933 extend-filesystems[2000]: old_desc_blocks = 1, new_desc_blocks = 2 Mar 14 00:16:12.749933 extend-filesystems[2000]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Mar 14 00:16:12.747564 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Mar 14 00:16:12.775853 extend-filesystems[1953]: Resized filesystem in /dev/nvme0n1p9 Mar 14 00:16:12.806107 bash[2034]: Updated "/home/core/.ssh/authorized_keys" Mar 14 00:16:12.764958 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 14 00:16:12.785300 systemd[1]: Starting sshkeys.service... Mar 14 00:16:12.867621 systemd-logind[1962]: Watching system buttons on /dev/input/event2 (Power Button) Mar 14 00:16:12.867657 systemd-logind[1962]: Watching system buttons on /dev/input/event3 (Sleep Button) Mar 14 00:16:12.867682 systemd-logind[1962]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 14 00:16:12.871344 systemd-logind[1962]: New seat seat0. Mar 14 00:16:12.877604 systemd[1]: Started systemd-logind.service - User Login Management. Mar 14 00:16:12.890991 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 14 00:16:12.893343 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 14 00:16:12.925116 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 14 00:16:12.936333 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 14 00:16:13.131359 dbus-daemon[1951]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 14 00:16:13.132583 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Mar 14 00:16:13.143814 dbus-daemon[1951]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1998 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 14 00:16:13.158335 systemd[1]: Starting polkit.service - Authorization Manager... 
Mar 14 00:16:13.159370 locksmithd[2019]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 14 00:16:13.220296 polkitd[2138]: Started polkitd version 121 Mar 14 00:16:13.236516 polkitd[2138]: Loading rules from directory /etc/polkit-1/rules.d Mar 14 00:16:13.236620 polkitd[2138]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 14 00:16:13.238600 polkitd[2138]: Finished loading, compiling and executing 2 rules Mar 14 00:16:13.242256 dbus-daemon[1951]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 14 00:16:13.242625 systemd[1]: Started polkit.service - Authorization Manager. Mar 14 00:16:13.243388 polkitd[2138]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 14 00:16:13.245901 coreos-metadata[2090]: Mar 14 00:16:13.245 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 14 00:16:13.249570 coreos-metadata[2090]: Mar 14 00:16:13.249 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Mar 14 00:16:13.250370 coreos-metadata[2090]: Mar 14 00:16:13.250 INFO Fetch successful Mar 14 00:16:13.250370 coreos-metadata[2090]: Mar 14 00:16:13.250 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 14 00:16:13.252571 coreos-metadata[2090]: Mar 14 00:16:13.250 INFO Fetch successful Mar 14 00:16:13.254345 unknown[2090]: wrote ssh authorized keys file for user: core Mar 14 00:16:13.286138 systemd-hostnamed[1998]: Hostname set to (transient) Mar 14 00:16:13.286259 systemd-resolved[1911]: System hostname changed to 'ip-172-31-23-179'. Mar 14 00:16:13.305508 update-ssh-keys[2148]: Updated "/home/core/.ssh/authorized_keys" Mar 14 00:16:13.310680 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 14 00:16:13.320481 systemd[1]: Finished sshkeys.service. 
Mar 14 00:16:13.387457 containerd[1999]: time="2026-03-14T00:16:13.387293281Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 14 00:16:13.460137 ntpd[1955]: bind(24) AF_INET6 fe80::477:97ff:fee4:5551%2#123 flags 0x11 failed: Cannot assign requested address Mar 14 00:16:13.463255 ntpd[1955]: 14 Mar 00:16:13 ntpd[1955]: bind(24) AF_INET6 fe80::477:97ff:fee4:5551%2#123 flags 0x11 failed: Cannot assign requested address Mar 14 00:16:13.463255 ntpd[1955]: 14 Mar 00:16:13 ntpd[1955]: unable to create socket on eth0 (6) for fe80::477:97ff:fee4:5551%2#123 Mar 14 00:16:13.463255 ntpd[1955]: 14 Mar 00:16:13 ntpd[1955]: failed to init interface for address fe80::477:97ff:fee4:5551%2 Mar 14 00:16:13.460185 ntpd[1955]: unable to create socket on eth0 (6) for fe80::477:97ff:fee4:5551%2#123 Mar 14 00:16:13.460201 ntpd[1955]: failed to init interface for address fe80::477:97ff:fee4:5551%2 Mar 14 00:16:13.500867 containerd[1999]: time="2026-03-14T00:16:13.500804687Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:16:13.507848 containerd[1999]: time="2026-03-14T00:16:13.507788269Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:16:13.507848 containerd[1999]: time="2026-03-14T00:16:13.507843526Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 14 00:16:13.508030 containerd[1999]: time="2026-03-14T00:16:13.507865365Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Mar 14 00:16:13.508231 containerd[1999]: time="2026-03-14T00:16:13.508204754Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 14 00:16:13.508302 containerd[1999]: time="2026-03-14T00:16:13.508238114Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 14 00:16:13.508360 containerd[1999]: time="2026-03-14T00:16:13.508326109Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:16:13.508360 containerd[1999]: time="2026-03-14T00:16:13.508344733Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:16:13.508618 containerd[1999]: time="2026-03-14T00:16:13.508592596Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:16:13.508674 containerd[1999]: time="2026-03-14T00:16:13.508619145Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 14 00:16:13.508674 containerd[1999]: time="2026-03-14T00:16:13.508637949Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:16:13.508674 containerd[1999]: time="2026-03-14T00:16:13.508653023Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 14 00:16:13.508799 containerd[1999]: time="2026-03-14T00:16:13.508757005Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Mar 14 00:16:13.511134 containerd[1999]: time="2026-03-14T00:16:13.511102783Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:16:13.511328 containerd[1999]: time="2026-03-14T00:16:13.511301538Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:16:13.511500 containerd[1999]: time="2026-03-14T00:16:13.511330307Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 14 00:16:13.511500 containerd[1999]: time="2026-03-14T00:16:13.511436575Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 14 00:16:13.511781 containerd[1999]: time="2026-03-14T00:16:13.511497078Z" level=info msg="metadata content store policy set" policy=shared Mar 14 00:16:13.518429 containerd[1999]: time="2026-03-14T00:16:13.518254240Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 14 00:16:13.518429 containerd[1999]: time="2026-03-14T00:16:13.518328904Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 14 00:16:13.518429 containerd[1999]: time="2026-03-14T00:16:13.518352488Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 14 00:16:13.518429 containerd[1999]: time="2026-03-14T00:16:13.518380730Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 14 00:16:13.518429 containerd[1999]: time="2026-03-14T00:16:13.518406701Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Mar 14 00:16:13.518682 containerd[1999]: time="2026-03-14T00:16:13.518589745Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 14 00:16:13.521897 containerd[1999]: time="2026-03-14T00:16:13.519956314Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 14 00:16:13.521897 containerd[1999]: time="2026-03-14T00:16:13.520137668Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 14 00:16:13.521897 containerd[1999]: time="2026-03-14T00:16:13.520161911Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 14 00:16:13.521897 containerd[1999]: time="2026-03-14T00:16:13.520180988Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 14 00:16:13.521897 containerd[1999]: time="2026-03-14T00:16:13.520202455Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 14 00:16:13.521897 containerd[1999]: time="2026-03-14T00:16:13.520222288Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 14 00:16:13.521897 containerd[1999]: time="2026-03-14T00:16:13.520239488Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 14 00:16:13.521897 containerd[1999]: time="2026-03-14T00:16:13.520258789Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 14 00:16:13.521897 containerd[1999]: time="2026-03-14T00:16:13.520278139Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Mar 14 00:16:13.521897 containerd[1999]: time="2026-03-14T00:16:13.520296085Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 14 00:16:13.521897 containerd[1999]: time="2026-03-14T00:16:13.520313520Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 14 00:16:13.521897 containerd[1999]: time="2026-03-14T00:16:13.520331384Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 14 00:16:13.521897 containerd[1999]: time="2026-03-14T00:16:13.520357993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 14 00:16:13.521897 containerd[1999]: time="2026-03-14T00:16:13.520377278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 14 00:16:13.522472 containerd[1999]: time="2026-03-14T00:16:13.520396625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 14 00:16:13.522472 containerd[1999]: time="2026-03-14T00:16:13.520424065Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 14 00:16:13.522472 containerd[1999]: time="2026-03-14T00:16:13.520444465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 14 00:16:13.522472 containerd[1999]: time="2026-03-14T00:16:13.520463424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 14 00:16:13.522472 containerd[1999]: time="2026-03-14T00:16:13.520482477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 14 00:16:13.522472 containerd[1999]: time="2026-03-14T00:16:13.520500942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Mar 14 00:16:13.522472 containerd[1999]: time="2026-03-14T00:16:13.520519446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 14 00:16:13.522472 containerd[1999]: time="2026-03-14T00:16:13.520540475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 14 00:16:13.522472 containerd[1999]: time="2026-03-14T00:16:13.520557988Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 14 00:16:13.522472 containerd[1999]: time="2026-03-14T00:16:13.520576776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 14 00:16:13.522472 containerd[1999]: time="2026-03-14T00:16:13.520596478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 14 00:16:13.522472 containerd[1999]: time="2026-03-14T00:16:13.520620360Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 14 00:16:13.522472 containerd[1999]: time="2026-03-14T00:16:13.520650309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 14 00:16:13.522472 containerd[1999]: time="2026-03-14T00:16:13.520667611Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 14 00:16:13.522472 containerd[1999]: time="2026-03-14T00:16:13.520683662Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 14 00:16:13.523026 containerd[1999]: time="2026-03-14T00:16:13.520756353Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 14 00:16:13.523026 containerd[1999]: time="2026-03-14T00:16:13.520781833Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 14 00:16:13.523026 containerd[1999]: time="2026-03-14T00:16:13.520800243Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 14 00:16:13.523026 containerd[1999]: time="2026-03-14T00:16:13.520818505Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 14 00:16:13.523026 containerd[1999]: time="2026-03-14T00:16:13.520833603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 14 00:16:13.523026 containerd[1999]: time="2026-03-14T00:16:13.520851500Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 14 00:16:13.523026 containerd[1999]: time="2026-03-14T00:16:13.520867180Z" level=info msg="NRI interface is disabled by configuration." Mar 14 00:16:13.523026 containerd[1999]: time="2026-03-14T00:16:13.522801450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 14 00:16:13.523792 containerd[1999]: time="2026-03-14T00:16:13.523706054Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 14 00:16:13.524034 containerd[1999]: time="2026-03-14T00:16:13.523804219Z" level=info msg="Connect containerd service" Mar 14 00:16:13.524034 containerd[1999]: time="2026-03-14T00:16:13.523858938Z" level=info msg="using legacy CRI server" Mar 14 00:16:13.524034 containerd[1999]: time="2026-03-14T00:16:13.523873416Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 14 00:16:13.524139 containerd[1999]: time="2026-03-14T00:16:13.524056185Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 14 00:16:13.530917 containerd[1999]: time="2026-03-14T00:16:13.527210742Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 14 00:16:13.530917 containerd[1999]: time="2026-03-14T00:16:13.527377862Z" level=info msg="Start subscribing containerd event" Mar 14 00:16:13.530917 containerd[1999]: time="2026-03-14T00:16:13.527445199Z" level=info msg="Start recovering state" Mar 14 00:16:13.530917 containerd[1999]: time="2026-03-14T00:16:13.527531704Z" level=info msg="Start event monitor" Mar 14 00:16:13.530917 containerd[1999]: time="2026-03-14T00:16:13.527560838Z" level=info msg="Start 
snapshots syncer" Mar 14 00:16:13.530917 containerd[1999]: time="2026-03-14T00:16:13.527573779Z" level=info msg="Start cni network conf syncer for default" Mar 14 00:16:13.530917 containerd[1999]: time="2026-03-14T00:16:13.527584757Z" level=info msg="Start streaming server" Mar 14 00:16:13.530917 containerd[1999]: time="2026-03-14T00:16:13.527668426Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 14 00:16:13.530917 containerd[1999]: time="2026-03-14T00:16:13.527722250Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 14 00:16:13.527927 systemd[1]: Started containerd.service - containerd container runtime. Mar 14 00:16:13.531582 containerd[1999]: time="2026-03-14T00:16:13.531548800Z" level=info msg="containerd successfully booted in 0.147130s" Mar 14 00:16:13.558110 systemd-networkd[1807]: eth0: Gained IPv6LL Mar 14 00:16:13.564730 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 14 00:16:13.567903 systemd[1]: Reached target network-online.target - Network is Online. Mar 14 00:16:13.577399 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Mar 14 00:16:13.588252 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:16:13.594327 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 14 00:16:13.677472 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 14 00:16:13.708854 amazon-ssm-agent[2156]: Initializing new seelog logger Mar 14 00:16:13.710290 amazon-ssm-agent[2156]: New Seelog Logger Creation Complete Mar 14 00:16:13.710290 amazon-ssm-agent[2156]: 2026/03/14 00:16:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:16:13.710290 amazon-ssm-agent[2156]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Mar 14 00:16:13.710290 amazon-ssm-agent[2156]: 2026/03/14 00:16:13 processing appconfig overrides Mar 14 00:16:13.713932 amazon-ssm-agent[2156]: 2026/03/14 00:16:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:16:13.713932 amazon-ssm-agent[2156]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:16:13.713932 amazon-ssm-agent[2156]: 2026/03/14 00:16:13 processing appconfig overrides Mar 14 00:16:13.713932 amazon-ssm-agent[2156]: 2026/03/14 00:16:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:16:13.713932 amazon-ssm-agent[2156]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:16:13.713932 amazon-ssm-agent[2156]: 2026/03/14 00:16:13 processing appconfig overrides Mar 14 00:16:13.714921 amazon-ssm-agent[2156]: 2026-03-14 00:16:13 INFO Proxy environment variables: Mar 14 00:16:13.719686 amazon-ssm-agent[2156]: 2026/03/14 00:16:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:16:13.719820 amazon-ssm-agent[2156]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:16:13.720093 amazon-ssm-agent[2156]: 2026/03/14 00:16:13 processing appconfig overrides Mar 14 00:16:13.818900 amazon-ssm-agent[2156]: 2026-03-14 00:16:13 INFO https_proxy: Mar 14 00:16:13.917944 amazon-ssm-agent[2156]: 2026-03-14 00:16:13 INFO http_proxy: Mar 14 00:16:13.971136 tar[1969]: linux-amd64/README.md Mar 14 00:16:14.004909 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 14 00:16:14.016321 amazon-ssm-agent[2156]: 2026-03-14 00:16:13 INFO no_proxy: Mar 14 00:16:14.114956 amazon-ssm-agent[2156]: 2026-03-14 00:16:13 INFO Checking if agent identity type OnPrem can be assumed Mar 14 00:16:14.177567 sshd_keygen[1989]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 14 00:16:14.209823 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Mar 14 00:16:14.213487 amazon-ssm-agent[2156]: 2026-03-14 00:16:13 INFO Checking if agent identity type EC2 can be assumed Mar 14 00:16:14.217275 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 14 00:16:14.224970 systemd[1]: issuegen.service: Deactivated successfully. Mar 14 00:16:14.225242 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 14 00:16:14.232234 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 14 00:16:14.245380 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 14 00:16:14.249365 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 14 00:16:14.258079 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 14 00:16:14.258858 systemd[1]: Reached target getty.target - Login Prompts. Mar 14 00:16:14.273853 amazon-ssm-agent[2156]: 2026-03-14 00:16:13 INFO Agent will take identity from EC2 Mar 14 00:16:14.273853 amazon-ssm-agent[2156]: 2026-03-14 00:16:13 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 14 00:16:14.273853 amazon-ssm-agent[2156]: 2026-03-14 00:16:13 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 14 00:16:14.273853 amazon-ssm-agent[2156]: 2026-03-14 00:16:13 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 14 00:16:14.273853 amazon-ssm-agent[2156]: 2026-03-14 00:16:13 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Mar 14 00:16:14.274183 amazon-ssm-agent[2156]: 2026-03-14 00:16:13 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Mar 14 00:16:14.274183 amazon-ssm-agent[2156]: 2026-03-14 00:16:13 INFO [amazon-ssm-agent] Starting Core Agent Mar 14 00:16:14.274183 amazon-ssm-agent[2156]: 2026-03-14 00:16:13 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Mar 14 00:16:14.274183 amazon-ssm-agent[2156]: 2026-03-14 00:16:13 INFO [Registrar] Starting registrar module Mar 14 00:16:14.274183 amazon-ssm-agent[2156]: 2026-03-14 00:16:13 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Mar 14 00:16:14.274183 amazon-ssm-agent[2156]: 2026-03-14 00:16:14 INFO [EC2Identity] EC2 registration was successful. Mar 14 00:16:14.274183 amazon-ssm-agent[2156]: 2026-03-14 00:16:14 INFO [CredentialRefresher] credentialRefresher has started Mar 14 00:16:14.274183 amazon-ssm-agent[2156]: 2026-03-14 00:16:14 INFO [CredentialRefresher] Starting credentials refresher loop Mar 14 00:16:14.274183 amazon-ssm-agent[2156]: 2026-03-14 00:16:14 INFO EC2RoleProvider Successfully connected with instance profile role credentials Mar 14 00:16:14.312574 amazon-ssm-agent[2156]: 2026-03-14 00:16:14 INFO [CredentialRefresher] Next credential rotation will be in 32.333326126766664 minutes Mar 14 00:16:14.561955 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 14 00:16:14.567237 systemd[1]: Started sshd@0-172.31.23.179:22-68.220.241.50:46642.service - OpenSSH per-connection server daemon (68.220.241.50:46642). Mar 14 00:16:15.068603 sshd[2194]: Accepted publickey for core from 68.220.241.50 port 46642 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:16:15.071231 sshd[2194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:16:15.082666 systemd-logind[1962]: New session 1 of user core. Mar 14 00:16:15.083512 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 14 00:16:15.089280 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 14 00:16:15.108524 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 14 00:16:15.119278 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Mar 14 00:16:15.124709 (systemd)[2198]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 14 00:16:15.240420 systemd[2198]: Queued start job for default target default.target. Mar 14 00:16:15.246148 systemd[2198]: Created slice app.slice - User Application Slice. Mar 14 00:16:15.246191 systemd[2198]: Reached target paths.target - Paths. Mar 14 00:16:15.246213 systemd[2198]: Reached target timers.target - Timers. Mar 14 00:16:15.247733 systemd[2198]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 14 00:16:15.261229 systemd[2198]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 14 00:16:15.261387 systemd[2198]: Reached target sockets.target - Sockets. Mar 14 00:16:15.261410 systemd[2198]: Reached target basic.target - Basic System. Mar 14 00:16:15.261466 systemd[2198]: Reached target default.target - Main User Target. Mar 14 00:16:15.261507 systemd[2198]: Startup finished in 129ms. Mar 14 00:16:15.261634 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 14 00:16:15.269161 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 14 00:16:15.300635 amazon-ssm-agent[2156]: 2026-03-14 00:16:15 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Mar 14 00:16:15.402766 amazon-ssm-agent[2156]: 2026-03-14 00:16:15 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2208) started Mar 14 00:16:15.503087 amazon-ssm-agent[2156]: 2026-03-14 00:16:15 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Mar 14 00:16:15.633292 systemd[1]: Started sshd@1-172.31.23.179:22-68.220.241.50:46654.service - OpenSSH per-connection server daemon (68.220.241.50:46654). 
Mar 14 00:16:16.112201 sshd[2220]: Accepted publickey for core from 68.220.241.50 port 46654 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:16:16.113687 sshd[2220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:16:16.121760 systemd-logind[1962]: New session 2 of user core. Mar 14 00:16:16.124146 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 14 00:16:16.460246 ntpd[1955]: Listen normally on 7 eth0 [fe80::477:97ff:fee4:5551%2]:123 Mar 14 00:16:16.460621 ntpd[1955]: 14 Mar 00:16:16 ntpd[1955]: Listen normally on 7 eth0 [fe80::477:97ff:fee4:5551%2]:123 Mar 14 00:16:16.461482 sshd[2220]: pam_unix(sshd:session): session closed for user core Mar 14 00:16:16.465965 systemd[1]: sshd@1-172.31.23.179:22-68.220.241.50:46654.service: Deactivated successfully. Mar 14 00:16:16.467928 systemd[1]: session-2.scope: Deactivated successfully. Mar 14 00:16:16.468654 systemd-logind[1962]: Session 2 logged out. Waiting for processes to exit. Mar 14 00:16:16.469918 systemd-logind[1962]: Removed session 2. Mar 14 00:16:16.551276 systemd[1]: Started sshd@2-172.31.23.179:22-68.220.241.50:46662.service - OpenSSH per-connection server daemon (68.220.241.50:46662). Mar 14 00:16:17.029758 sshd[2227]: Accepted publickey for core from 68.220.241.50 port 46662 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:16:17.031318 sshd[2227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:16:17.035832 systemd-logind[1962]: New session 3 of user core. Mar 14 00:16:17.042141 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 14 00:16:17.378306 sshd[2227]: pam_unix(sshd:session): session closed for user core Mar 14 00:16:17.382322 systemd[1]: sshd@2-172.31.23.179:22-68.220.241.50:46662.service: Deactivated successfully. Mar 14 00:16:17.384506 systemd[1]: session-3.scope: Deactivated successfully. 
Mar 14 00:16:17.385979 systemd-logind[1962]: Session 3 logged out. Waiting for processes to exit. Mar 14 00:16:17.387413 systemd-logind[1962]: Removed session 3. Mar 14 00:16:18.790529 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:16:18.792733 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 14 00:16:18.793928 systemd[1]: Startup finished in 683ms (kernel) + 8.793s (initrd) + 10.344s (userspace) = 19.821s. Mar 14 00:16:18.804965 (kubelet)[2238]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:16:20.605243 systemd-resolved[1911]: Clock change detected. Flushing caches. Mar 14 00:16:21.785610 kubelet[2238]: E0314 00:16:21.785515 2238 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:16:21.788248 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:16:21.788614 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:16:28.621854 systemd[1]: Started sshd@3-172.31.23.179:22-68.220.241.50:42038.service - OpenSSH per-connection server daemon (68.220.241.50:42038). Mar 14 00:16:29.151365 sshd[2250]: Accepted publickey for core from 68.220.241.50 port 42038 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:16:29.152559 sshd[2250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:16:29.157769 systemd-logind[1962]: New session 4 of user core. Mar 14 00:16:29.167562 systemd[1]: Started session-4.scope - Session 4 of User core. 
Mar 14 00:16:29.531129 sshd[2250]: pam_unix(sshd:session): session closed for user core Mar 14 00:16:29.535539 systemd[1]: sshd@3-172.31.23.179:22-68.220.241.50:42038.service: Deactivated successfully. Mar 14 00:16:29.537636 systemd[1]: session-4.scope: Deactivated successfully. Mar 14 00:16:29.538369 systemd-logind[1962]: Session 4 logged out. Waiting for processes to exit. Mar 14 00:16:29.539417 systemd-logind[1962]: Removed session 4. Mar 14 00:16:29.620880 systemd[1]: Started sshd@4-172.31.23.179:22-68.220.241.50:42048.service - OpenSSH per-connection server daemon (68.220.241.50:42048). Mar 14 00:16:30.099367 sshd[2257]: Accepted publickey for core from 68.220.241.50 port 42048 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:16:30.100250 sshd[2257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:16:30.105665 systemd-logind[1962]: New session 5 of user core. Mar 14 00:16:30.112697 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 14 00:16:30.443598 sshd[2257]: pam_unix(sshd:session): session closed for user core Mar 14 00:16:30.448312 systemd[1]: sshd@4-172.31.23.179:22-68.220.241.50:42048.service: Deactivated successfully. Mar 14 00:16:30.450637 systemd[1]: session-5.scope: Deactivated successfully. Mar 14 00:16:30.451578 systemd-logind[1962]: Session 5 logged out. Waiting for processes to exit. Mar 14 00:16:30.452843 systemd-logind[1962]: Removed session 5. Mar 14 00:16:30.533705 systemd[1]: Started sshd@5-172.31.23.179:22-68.220.241.50:42052.service - OpenSSH per-connection server daemon (68.220.241.50:42052). Mar 14 00:16:31.012294 sshd[2264]: Accepted publickey for core from 68.220.241.50 port 42052 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:16:31.013916 sshd[2264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:16:31.019073 systemd-logind[1962]: New session 6 of user core. 
Mar 14 00:16:31.028666 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 14 00:16:31.361255 sshd[2264]: pam_unix(sshd:session): session closed for user core Mar 14 00:16:31.364739 systemd[1]: sshd@5-172.31.23.179:22-68.220.241.50:42052.service: Deactivated successfully. Mar 14 00:16:31.367037 systemd[1]: session-6.scope: Deactivated successfully. Mar 14 00:16:31.368443 systemd-logind[1962]: Session 6 logged out. Waiting for processes to exit. Mar 14 00:16:31.369898 systemd-logind[1962]: Removed session 6. Mar 14 00:16:31.467710 systemd[1]: Started sshd@6-172.31.23.179:22-68.220.241.50:42060.service - OpenSSH per-connection server daemon (68.220.241.50:42060). Mar 14 00:16:31.831078 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 14 00:16:31.839602 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:16:31.948543 sshd[2271]: Accepted publickey for core from 68.220.241.50 port 42060 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:16:31.950142 sshd[2271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:16:31.956789 systemd-logind[1962]: New session 7 of user core. Mar 14 00:16:31.963535 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 14 00:16:32.048292 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 14 00:16:32.059854 (kubelet)[2282]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:16:32.110184 kubelet[2282]: E0314 00:16:32.110053 2282 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:16:32.114611 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:16:32.114819 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:16:32.233573 sudo[2289]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 14 00:16:32.233982 sudo[2289]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:16:32.246148 sudo[2289]: pam_unix(sudo:session): session closed for user root Mar 14 00:16:32.324790 sshd[2271]: pam_unix(sshd:session): session closed for user core Mar 14 00:16:32.328108 systemd[1]: sshd@6-172.31.23.179:22-68.220.241.50:42060.service: Deactivated successfully. Mar 14 00:16:32.330253 systemd[1]: session-7.scope: Deactivated successfully. Mar 14 00:16:32.332177 systemd-logind[1962]: Session 7 logged out. Waiting for processes to exit. Mar 14 00:16:32.333745 systemd-logind[1962]: Removed session 7. Mar 14 00:16:32.414713 systemd[1]: Started sshd@7-172.31.23.179:22-68.220.241.50:57898.service - OpenSSH per-connection server daemon (68.220.241.50:57898). Mar 14 00:16:32.904896 sshd[2294]: Accepted publickey for core from 68.220.241.50 port 57898 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:16:32.906471 sshd[2294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:16:32.911671 systemd-logind[1962]: New session 8 of user core. 
Mar 14 00:16:32.918557 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 14 00:16:33.182298 sudo[2298]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 14 00:16:33.182729 sudo[2298]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:16:33.186931 sudo[2298]: pam_unix(sudo:session): session closed for user root Mar 14 00:16:33.192669 sudo[2297]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 14 00:16:33.193053 sudo[2297]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:16:33.213776 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 14 00:16:33.216014 auditctl[2301]: No rules Mar 14 00:16:33.216637 systemd[1]: audit-rules.service: Deactivated successfully. Mar 14 00:16:33.216862 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 14 00:16:33.219667 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 14 00:16:33.263546 augenrules[2319]: No rules Mar 14 00:16:33.265173 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 14 00:16:33.266393 sudo[2297]: pam_unix(sudo:session): session closed for user root Mar 14 00:16:33.345008 sshd[2294]: pam_unix(sshd:session): session closed for user core Mar 14 00:16:33.349535 systemd[1]: sshd@7-172.31.23.179:22-68.220.241.50:57898.service: Deactivated successfully. Mar 14 00:16:33.351702 systemd[1]: session-8.scope: Deactivated successfully. Mar 14 00:16:33.352547 systemd-logind[1962]: Session 8 logged out. Waiting for processes to exit. Mar 14 00:16:33.353696 systemd-logind[1962]: Removed session 8. Mar 14 00:16:33.435666 systemd[1]: Started sshd@8-172.31.23.179:22-68.220.241.50:57914.service - OpenSSH per-connection server daemon (68.220.241.50:57914). 
Mar 14 00:16:33.922357 sshd[2327]: Accepted publickey for core from 68.220.241.50 port 57914 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:16:33.923311 sshd[2327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:16:33.928657 systemd-logind[1962]: New session 9 of user core. Mar 14 00:16:33.934527 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 14 00:16:34.198367 sudo[2330]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 14 00:16:34.198769 sudo[2330]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:16:34.999696 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 14 00:16:35.001632 (dockerd)[2345]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 14 00:16:35.598881 dockerd[2345]: time="2026-03-14T00:16:35.598818135Z" level=info msg="Starting up" Mar 14 00:16:35.721691 dockerd[2345]: time="2026-03-14T00:16:35.721639045Z" level=info msg="Loading containers: start." Mar 14 00:16:35.845354 kernel: Initializing XFRM netlink socket Mar 14 00:16:35.873484 (udev-worker)[2372]: Network interface NamePolicy= disabled on kernel command line. Mar 14 00:16:35.928715 systemd-networkd[1807]: docker0: Link UP Mar 14 00:16:35.949105 dockerd[2345]: time="2026-03-14T00:16:35.949058413Z" level=info msg="Loading containers: done." 
Mar 14 00:16:35.981455 dockerd[2345]: time="2026-03-14T00:16:35.981400687Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 14 00:16:35.981643 dockerd[2345]: time="2026-03-14T00:16:35.981537678Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 14 00:16:35.981718 dockerd[2345]: time="2026-03-14T00:16:35.981695756Z" level=info msg="Daemon has completed initialization" Mar 14 00:16:36.014420 dockerd[2345]: time="2026-03-14T00:16:36.014361636Z" level=info msg="API listen on /run/docker.sock" Mar 14 00:16:36.014887 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 14 00:16:37.829117 containerd[1999]: time="2026-03-14T00:16:37.829079075Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\"" Mar 14 00:16:38.406864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1519954004.mount: Deactivated successfully. 
Mar 14 00:16:40.018085 containerd[1999]: time="2026-03-14T00:16:40.018027539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:40.019457 containerd[1999]: time="2026-03-14T00:16:40.019407510Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=27696467" Mar 14 00:16:40.020828 containerd[1999]: time="2026-03-14T00:16:40.020262587Z" level=info msg="ImageCreate event name:\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:40.023275 containerd[1999]: time="2026-03-14T00:16:40.023236923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:40.024772 containerd[1999]: time="2026-03-14T00:16:40.024728181Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"27693066\" in 2.195606506s" Mar 14 00:16:40.024869 containerd[1999]: time="2026-03-14T00:16:40.024780397Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\"" Mar 14 00:16:40.025410 containerd[1999]: time="2026-03-14T00:16:40.025379359Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\"" Mar 14 00:16:41.931027 containerd[1999]: time="2026-03-14T00:16:41.930970796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:41.932628 containerd[1999]: time="2026-03-14T00:16:41.932352900Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=21450700" Mar 14 00:16:41.934459 containerd[1999]: time="2026-03-14T00:16:41.934248971Z" level=info msg="ImageCreate event name:\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:41.937802 containerd[1999]: time="2026-03-14T00:16:41.937743143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:41.939524 containerd[1999]: time="2026-03-14T00:16:41.938889187Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"23142311\" in 1.913472823s" Mar 14 00:16:41.939524 containerd[1999]: time="2026-03-14T00:16:41.938933926Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\"" Mar 14 00:16:41.939692 containerd[1999]: time="2026-03-14T00:16:41.939651757Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\"" Mar 14 00:16:42.331038 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 14 00:16:42.337121 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:16:42.546015 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 14 00:16:42.558852 (kubelet)[2556]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:16:42.609062 kubelet[2556]: E0314 00:16:42.608899 2556 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:16:42.612494 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:16:42.612705 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:16:43.255763 containerd[1999]: time="2026-03-14T00:16:43.255715119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:43.257192 containerd[1999]: time="2026-03-14T00:16:43.257142827Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=15548429" Mar 14 00:16:43.257911 containerd[1999]: time="2026-03-14T00:16:43.257858845Z" level=info msg="ImageCreate event name:\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:43.260796 containerd[1999]: time="2026-03-14T00:16:43.260745214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:43.262179 containerd[1999]: time="2026-03-14T00:16:43.262016710Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo 
digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"17240058\" in 1.322291336s" Mar 14 00:16:43.262179 containerd[1999]: time="2026-03-14T00:16:43.262080782Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\"" Mar 14 00:16:43.262821 containerd[1999]: time="2026-03-14T00:16:43.262701946Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\"" Mar 14 00:16:44.467399 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Mar 14 00:16:44.570270 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1065761143.mount: Deactivated successfully. Mar 14 00:16:44.979418 containerd[1999]: time="2026-03-14T00:16:44.979356810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:44.981356 containerd[1999]: time="2026-03-14T00:16:44.981285286Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685312" Mar 14 00:16:44.983837 containerd[1999]: time="2026-03-14T00:16:44.983749275Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:44.987264 containerd[1999]: time="2026-03-14T00:16:44.987221181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:44.988551 containerd[1999]: time="2026-03-14T00:16:44.987924424Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 1.724997606s" Mar 14 00:16:44.988551 containerd[1999]: time="2026-03-14T00:16:44.987970061Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\"" Mar 14 00:16:44.988862 containerd[1999]: time="2026-03-14T00:16:44.988836422Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Mar 14 00:16:45.578265 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3111390116.mount: Deactivated successfully. Mar 14 00:16:47.251048 containerd[1999]: time="2026-03-14T00:16:47.250990429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:47.252782 containerd[1999]: time="2026-03-14T00:16:47.252561178Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556542" Mar 14 00:16:47.254917 containerd[1999]: time="2026-03-14T00:16:47.254559067Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:47.258898 containerd[1999]: time="2026-03-14T00:16:47.258705286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:47.260110 containerd[1999]: time="2026-03-14T00:16:47.259954039Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 2.271083454s" Mar 14 00:16:47.260110 containerd[1999]: time="2026-03-14T00:16:47.259995232Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Mar 14 00:16:47.261171 containerd[1999]: time="2026-03-14T00:16:47.261138556Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 14 00:16:47.734077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1945980677.mount: Deactivated successfully. Mar 14 00:16:47.745110 containerd[1999]: time="2026-03-14T00:16:47.745055636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:47.747058 containerd[1999]: time="2026-03-14T00:16:47.746901576Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 14 00:16:47.749226 containerd[1999]: time="2026-03-14T00:16:47.749150096Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:47.752902 containerd[1999]: time="2026-03-14T00:16:47.752837704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:47.754017 containerd[1999]: time="2026-03-14T00:16:47.753643215Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 
492.464089ms" Mar 14 00:16:47.754017 containerd[1999]: time="2026-03-14T00:16:47.753685534Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 14 00:16:47.754501 containerd[1999]: time="2026-03-14T00:16:47.754473703Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Mar 14 00:16:48.288524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1752891070.mount: Deactivated successfully. Mar 14 00:16:49.432822 containerd[1999]: time="2026-03-14T00:16:49.432758617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:49.434845 containerd[1999]: time="2026-03-14T00:16:49.434617990Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23630322" Mar 14 00:16:49.437185 containerd[1999]: time="2026-03-14T00:16:49.436848632Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:49.440827 containerd[1999]: time="2026-03-14T00:16:49.440769018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:49.441914 containerd[1999]: time="2026-03-14T00:16:49.441876532Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 1.687369386s" Mar 14 00:16:49.442005 containerd[1999]: time="2026-03-14T00:16:49.441921312Z" level=info msg="PullImage 
\"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Mar 14 00:16:50.711515 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:16:50.719719 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:16:50.757550 systemd[1]: Reloading requested from client PID 2725 ('systemctl') (unit session-9.scope)... Mar 14 00:16:50.757571 systemd[1]: Reloading... Mar 14 00:16:50.874380 zram_generator::config[2768]: No configuration found. Mar 14 00:16:51.013193 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:16:51.112517 systemd[1]: Reloading finished in 354 ms. Mar 14 00:16:51.167037 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 14 00:16:51.167158 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 14 00:16:51.167465 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:16:51.171821 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:16:51.670264 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:16:51.681828 (kubelet)[2825]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 14 00:16:51.728422 kubelet[2825]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 14 00:16:52.157809 kubelet[2825]: I0314 00:16:52.156576 2825 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 14 00:16:52.157809 kubelet[2825]: I0314 00:16:52.156640 2825 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 14 00:16:52.157809 kubelet[2825]: I0314 00:16:52.156664 2825 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 14 00:16:52.157809 kubelet[2825]: I0314 00:16:52.156672 2825 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 14 00:16:52.158195 kubelet[2825]: I0314 00:16:52.157990 2825 server.go:951] "Client rotation is on, will bootstrap in background" Mar 14 00:16:52.173824 kubelet[2825]: I0314 00:16:52.172803 2825 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 14 00:16:52.176794 kubelet[2825]: E0314 00:16:52.176729 2825 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.23.179:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.23.179:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 14 00:16:52.183018 kubelet[2825]: E0314 00:16:52.182968 2825 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 14 00:16:52.183159 kubelet[2825]: I0314 00:16:52.183053 2825 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 14 00:16:52.185758 kubelet[2825]: I0314 00:16:52.185705 2825 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 14 00:16:52.198857 kubelet[2825]: I0314 00:16:52.197809 2825 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 14 00:16:52.198857 kubelet[2825]: I0314 00:16:52.197906 2825 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-179","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 14 00:16:52.198857 kubelet[2825]: I0314 00:16:52.198188 2825 topology_manager.go:143] "Creating topology manager with none policy" Mar 14 
00:16:52.198857 kubelet[2825]: I0314 00:16:52.198203 2825 container_manager_linux.go:308] "Creating device plugin manager" Mar 14 00:16:52.199132 kubelet[2825]: I0314 00:16:52.198343 2825 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 14 00:16:52.203287 kubelet[2825]: I0314 00:16:52.203252 2825 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 14 00:16:52.203540 kubelet[2825]: I0314 00:16:52.203513 2825 kubelet.go:482] "Attempting to sync node with API server" Mar 14 00:16:52.203540 kubelet[2825]: I0314 00:16:52.203541 2825 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 14 00:16:52.203685 kubelet[2825]: I0314 00:16:52.203595 2825 kubelet.go:394] "Adding apiserver pod source" Mar 14 00:16:52.203685 kubelet[2825]: I0314 00:16:52.203611 2825 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 14 00:16:52.207886 kubelet[2825]: I0314 00:16:52.207859 2825 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 14 00:16:52.211147 kubelet[2825]: I0314 00:16:52.210967 2825 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 14 00:16:52.211147 kubelet[2825]: I0314 00:16:52.211018 2825 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 14 00:16:52.213838 kubelet[2825]: W0314 00:16:52.212863 2825 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 14 00:16:52.217345 kubelet[2825]: I0314 00:16:52.215963 2825 server.go:1257] "Started kubelet" Mar 14 00:16:52.218880 kubelet[2825]: I0314 00:16:52.218827 2825 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 14 00:16:52.220435 kubelet[2825]: I0314 00:16:52.220267 2825 server.go:317] "Adding debug handlers to kubelet server" Mar 14 00:16:52.237291 kubelet[2825]: I0314 00:16:52.236470 2825 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 14 00:16:52.237291 kubelet[2825]: I0314 00:16:52.236896 2825 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 14 00:16:52.237291 kubelet[2825]: I0314 00:16:52.237186 2825 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 14 00:16:52.238820 kubelet[2825]: I0314 00:16:52.238486 2825 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 14 00:16:52.243211 kubelet[2825]: E0314 00:16:52.241130 2825 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.179:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.179:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-179.189c8d0678f75f84 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-179,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-179,},FirstTimestamp:2026-03-14 00:16:52.215930756 +0000 UTC m=+0.529706414,LastTimestamp:2026-03-14 00:16:52.215930756 +0000 UTC m=+0.529706414,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-179,}" Mar 14 00:16:52.244549 kubelet[2825]: I0314 00:16:52.243768 2825 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 14 00:16:52.247358 kubelet[2825]: E0314 00:16:52.246537 2825 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-23-179\" not found" Mar 14 00:16:52.247358 kubelet[2825]: I0314 00:16:52.246575 2825 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 14 00:16:52.247358 kubelet[2825]: I0314 00:16:52.246837 2825 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 14 00:16:52.247358 kubelet[2825]: I0314 00:16:52.246893 2825 reconciler.go:29] "Reconciler: start to sync state" Mar 14 00:16:52.247948 kubelet[2825]: E0314 00:16:52.247903 2825 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.179:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-179?timeout=10s\": dial tcp 172.31.23.179:6443: connect: connection refused" interval="200ms" Mar 14 00:16:52.249371 kubelet[2825]: I0314 00:16:52.249043 2825 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 14 00:16:52.250580 kubelet[2825]: I0314 00:16:52.250561 2825 factory.go:223] Registration of the containerd container factory successfully Mar 14 00:16:52.251148 kubelet[2825]: I0314 00:16:52.250679 2825 factory.go:223] Registration of the systemd container factory successfully Mar 14 00:16:52.252896 kubelet[2825]: E0314 00:16:52.251100 2825 kubelet.go:1656] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 14 00:16:52.270355 kubelet[2825]: I0314 00:16:52.270250 2825 cpu_manager.go:225] "Starting" policy="none" Mar 14 00:16:52.270355 kubelet[2825]: I0314 00:16:52.270268 2825 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 14 00:16:52.271051 kubelet[2825]: I0314 00:16:52.271028 2825 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 14 00:16:52.274450 kubelet[2825]: I0314 00:16:52.274412 2825 policy_none.go:50] "Start" Mar 14 00:16:52.274807 kubelet[2825]: I0314 00:16:52.274788 2825 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 14 00:16:52.274933 kubelet[2825]: I0314 00:16:52.274919 2825 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 14 00:16:52.276416 kubelet[2825]: I0314 00:16:52.276385 2825 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 14 00:16:52.279602 kubelet[2825]: I0314 00:16:52.279579 2825 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 14 00:16:52.279602 kubelet[2825]: I0314 00:16:52.279604 2825 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 14 00:16:52.279722 kubelet[2825]: I0314 00:16:52.279631 2825 kubelet.go:2501] "Starting kubelet main sync loop" Mar 14 00:16:52.279898 kubelet[2825]: E0314 00:16:52.279873 2825 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 14 00:16:52.279898 kubelet[2825]: I0314 00:16:52.279701 2825 policy_none.go:44] "Start" Mar 14 00:16:52.289346 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 14 00:16:52.303591 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Mar 14 00:16:52.307424 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 14 00:16:52.315371 kubelet[2825]: E0314 00:16:52.315342 2825 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 14 00:16:52.315872 kubelet[2825]: I0314 00:16:52.315821 2825 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 14 00:16:52.316069 kubelet[2825]: I0314 00:16:52.315837 2825 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 14 00:16:52.318216 kubelet[2825]: E0314 00:16:52.318141 2825 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 14 00:16:52.318216 kubelet[2825]: E0314 00:16:52.318206 2825 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-23-179\" not found" Mar 14 00:16:52.318892 kubelet[2825]: I0314 00:16:52.318805 2825 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 14 00:16:52.392174 systemd[1]: Created slice kubepods-burstable-pod0b0ba5285573e3eb16c1e7c2aef1c88a.slice - libcontainer container kubepods-burstable-pod0b0ba5285573e3eb16c1e7c2aef1c88a.slice. Mar 14 00:16:52.411409 kubelet[2825]: E0314 00:16:52.411088 2825 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-179\" not found" node="ip-172-31-23-179" Mar 14 00:16:52.416692 systemd[1]: Created slice kubepods-burstable-podbabc6e578363e9fbccf20793bf283114.slice - libcontainer container kubepods-burstable-podbabc6e578363e9fbccf20793bf283114.slice. 
Mar 14 00:16:52.418472 kubelet[2825]: I0314 00:16:52.418431 2825 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-23-179"
Mar 14 00:16:52.418808 kubelet[2825]: E0314 00:16:52.418780 2825 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.23.179:6443/api/v1/nodes\": dial tcp 172.31.23.179:6443: connect: connection refused" node="ip-172-31-23-179"
Mar 14 00:16:52.425895 kubelet[2825]: E0314 00:16:52.425826 2825 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-179\" not found" node="ip-172-31-23-179"
Mar 14 00:16:52.428678 systemd[1]: Created slice kubepods-burstable-pod1d4221712bc09a2585f9f17eecd62776.slice - libcontainer container kubepods-burstable-pod1d4221712bc09a2585f9f17eecd62776.slice.
Mar 14 00:16:52.431004 kubelet[2825]: E0314 00:16:52.430979 2825 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-179\" not found" node="ip-172-31-23-179"
Mar 14 00:16:52.448729 kubelet[2825]: I0314 00:16:52.448232 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0b0ba5285573e3eb16c1e7c2aef1c88a-ca-certs\") pod \"kube-apiserver-ip-172-31-23-179\" (UID: \"0b0ba5285573e3eb16c1e7c2aef1c88a\") " pod="kube-system/kube-apiserver-ip-172-31-23-179"
Mar 14 00:16:52.448729 kubelet[2825]: I0314 00:16:52.448346 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0b0ba5285573e3eb16c1e7c2aef1c88a-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-179\" (UID: \"0b0ba5285573e3eb16c1e7c2aef1c88a\") " pod="kube-system/kube-apiserver-ip-172-31-23-179"
Mar 14 00:16:52.448729 kubelet[2825]: I0314 00:16:52.448379 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/babc6e578363e9fbccf20793bf283114-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-179\" (UID: \"babc6e578363e9fbccf20793bf283114\") " pod="kube-system/kube-controller-manager-ip-172-31-23-179"
Mar 14 00:16:52.448729 kubelet[2825]: I0314 00:16:52.448401 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/babc6e578363e9fbccf20793bf283114-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-179\" (UID: \"babc6e578363e9fbccf20793bf283114\") " pod="kube-system/kube-controller-manager-ip-172-31-23-179"
Mar 14 00:16:52.448729 kubelet[2825]: I0314 00:16:52.448440 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b0ba5285573e3eb16c1e7c2aef1c88a-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-179\" (UID: \"0b0ba5285573e3eb16c1e7c2aef1c88a\") " pod="kube-system/kube-apiserver-ip-172-31-23-179"
Mar 14 00:16:52.448968 kubelet[2825]: I0314 00:16:52.448463 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/babc6e578363e9fbccf20793bf283114-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-179\" (UID: \"babc6e578363e9fbccf20793bf283114\") " pod="kube-system/kube-controller-manager-ip-172-31-23-179"
Mar 14 00:16:52.448968 kubelet[2825]: I0314 00:16:52.448500 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/babc6e578363e9fbccf20793bf283114-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-179\" (UID: \"babc6e578363e9fbccf20793bf283114\") " pod="kube-system/kube-controller-manager-ip-172-31-23-179"
Mar 14 00:16:52.448968 kubelet[2825]: I0314 00:16:52.448528 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/babc6e578363e9fbccf20793bf283114-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-179\" (UID: \"babc6e578363e9fbccf20793bf283114\") " pod="kube-system/kube-controller-manager-ip-172-31-23-179"
Mar 14 00:16:52.448968 kubelet[2825]: I0314 00:16:52.448550 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1d4221712bc09a2585f9f17eecd62776-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-179\" (UID: \"1d4221712bc09a2585f9f17eecd62776\") " pod="kube-system/kube-scheduler-ip-172-31-23-179"
Mar 14 00:16:52.448968 kubelet[2825]: E0314 00:16:52.448627 2825 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.179:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-179?timeout=10s\": dial tcp 172.31.23.179:6443: connect: connection refused" interval="400ms"
Mar 14 00:16:52.621443 kubelet[2825]: I0314 00:16:52.621410 2825 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-23-179"
Mar 14 00:16:52.621870 kubelet[2825]: E0314 00:16:52.621818 2825 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.23.179:6443/api/v1/nodes\": dial tcp 172.31.23.179:6443: connect: connection refused" node="ip-172-31-23-179"
Mar 14 00:16:52.714884 containerd[1999]: time="2026-03-14T00:16:52.714537422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-179,Uid:0b0ba5285573e3eb16c1e7c2aef1c88a,Namespace:kube-system,Attempt:0,}"
Mar 14 00:16:52.734156 containerd[1999]: time="2026-03-14T00:16:52.734101306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-179,Uid:babc6e578363e9fbccf20793bf283114,Namespace:kube-system,Attempt:0,}"
Mar 14 00:16:52.736041 containerd[1999]: time="2026-03-14T00:16:52.735993305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-179,Uid:1d4221712bc09a2585f9f17eecd62776,Namespace:kube-system,Attempt:0,}"
Mar 14 00:16:52.849431 kubelet[2825]: E0314 00:16:52.849375 2825 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.179:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-179?timeout=10s\": dial tcp 172.31.23.179:6443: connect: connection refused" interval="800ms"
Mar 14 00:16:53.023564 kubelet[2825]: I0314 00:16:53.023443 2825 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-23-179"
Mar 14 00:16:53.023875 kubelet[2825]: E0314 00:16:53.023832 2825 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.23.179:6443/api/v1/nodes\": dial tcp 172.31.23.179:6443: connect: connection refused" node="ip-172-31-23-179"
Mar 14 00:16:53.196720 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1807825131.mount: Deactivated successfully.
Mar 14 00:16:53.204032 containerd[1999]: time="2026-03-14T00:16:53.203769982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:16:53.205564 containerd[1999]: time="2026-03-14T00:16:53.205511404Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Mar 14 00:16:53.206308 containerd[1999]: time="2026-03-14T00:16:53.206271414Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:16:53.207356 containerd[1999]: time="2026-03-14T00:16:53.207159145Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:16:53.208044 containerd[1999]: time="2026-03-14T00:16:53.207993040Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 14 00:16:53.209265 containerd[1999]: time="2026-03-14T00:16:53.209231286Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:16:53.210043 containerd[1999]: time="2026-03-14T00:16:53.209964158Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 14 00:16:53.213345 containerd[1999]: time="2026-03-14T00:16:53.212764489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:16:53.214316 containerd[1999]: time="2026-03-14T00:16:53.214281178Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 499.652843ms"
Mar 14 00:16:53.215394 containerd[1999]: time="2026-03-14T00:16:53.215362615Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 481.176398ms"
Mar 14 00:16:53.217916 containerd[1999]: time="2026-03-14T00:16:53.217380363Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 481.300404ms"
Mar 14 00:16:53.389495 containerd[1999]: time="2026-03-14T00:16:53.388817687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:16:53.389495 containerd[1999]: time="2026-03-14T00:16:53.388912425Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:16:53.389495 containerd[1999]: time="2026-03-14T00:16:53.388936221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:16:53.389495 containerd[1999]: time="2026-03-14T00:16:53.389051565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:16:53.393529 containerd[1999]: time="2026-03-14T00:16:53.393211433Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:16:53.393529 containerd[1999]: time="2026-03-14T00:16:53.393309157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:16:53.393529 containerd[1999]: time="2026-03-14T00:16:53.393345098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:16:53.393529 containerd[1999]: time="2026-03-14T00:16:53.393461566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:16:53.396212 containerd[1999]: time="2026-03-14T00:16:53.395617903Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:16:53.396403 containerd[1999]: time="2026-03-14T00:16:53.395927139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:16:53.396403 containerd[1999]: time="2026-03-14T00:16:53.395967092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:16:53.396403 containerd[1999]: time="2026-03-14T00:16:53.396113096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:16:53.430609 systemd[1]: Started cri-containerd-76daf3a6b6f3ece3cce01351643d4f9cca6d241836127dfaf216f449bcc9a149.scope - libcontainer container 76daf3a6b6f3ece3cce01351643d4f9cca6d241836127dfaf216f449bcc9a149.
Mar 14 00:16:53.443723 systemd[1]: Started cri-containerd-2ff7b2ace3a66d09668710ba6ec5722cd2e23c4897560ea37c03636fa91d212e.scope - libcontainer container 2ff7b2ace3a66d09668710ba6ec5722cd2e23c4897560ea37c03636fa91d212e.
Mar 14 00:16:53.451597 systemd[1]: Started cri-containerd-590d29b6048b1ec1f8648d5bdbbbb6263c67aa3adf761e676ea7ae5a729eaca5.scope - libcontainer container 590d29b6048b1ec1f8648d5bdbbbb6263c67aa3adf761e676ea7ae5a729eaca5.
Mar 14 00:16:53.531409 containerd[1999]: time="2026-03-14T00:16:53.531363665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-179,Uid:1d4221712bc09a2585f9f17eecd62776,Namespace:kube-system,Attempt:0,} returns sandbox id \"76daf3a6b6f3ece3cce01351643d4f9cca6d241836127dfaf216f449bcc9a149\""
Mar 14 00:16:53.556753 containerd[1999]: time="2026-03-14T00:16:53.556585147Z" level=info msg="CreateContainer within sandbox \"76daf3a6b6f3ece3cce01351643d4f9cca6d241836127dfaf216f449bcc9a149\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 14 00:16:53.561702 containerd[1999]: time="2026-03-14T00:16:53.561629665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-179,Uid:0b0ba5285573e3eb16c1e7c2aef1c88a,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ff7b2ace3a66d09668710ba6ec5722cd2e23c4897560ea37c03636fa91d212e\""
Mar 14 00:16:53.567742 containerd[1999]: time="2026-03-14T00:16:53.567632028Z" level=info msg="CreateContainer within sandbox \"2ff7b2ace3a66d09668710ba6ec5722cd2e23c4897560ea37c03636fa91d212e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 14 00:16:53.568721 containerd[1999]: time="2026-03-14T00:16:53.568618229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-179,Uid:babc6e578363e9fbccf20793bf283114,Namespace:kube-system,Attempt:0,} returns sandbox id \"590d29b6048b1ec1f8648d5bdbbbb6263c67aa3adf761e676ea7ae5a729eaca5\""
Mar 14 00:16:53.575604 containerd[1999]: time="2026-03-14T00:16:53.575560900Z" level=info msg="CreateContainer within sandbox \"590d29b6048b1ec1f8648d5bdbbbb6263c67aa3adf761e676ea7ae5a729eaca5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 14 00:16:53.650090 kubelet[2825]: E0314 00:16:53.649945 2825 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.179:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-179?timeout=10s\": dial tcp 172.31.23.179:6443: connect: connection refused" interval="1.6s"
Mar 14 00:16:53.668352 containerd[1999]: time="2026-03-14T00:16:53.668223845Z" level=info msg="CreateContainer within sandbox \"76daf3a6b6f3ece3cce01351643d4f9cca6d241836127dfaf216f449bcc9a149\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1db2410a2fb3e53745a11a0aa05598b5506389ed512d2e733d4e15bfcdfb7d16\""
Mar 14 00:16:53.669587 containerd[1999]: time="2026-03-14T00:16:53.669533219Z" level=info msg="CreateContainer within sandbox \"2ff7b2ace3a66d09668710ba6ec5722cd2e23c4897560ea37c03636fa91d212e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1baa645ed778ea66987c29249bf85fa17cfe2cd2b58d0237e5e1aff1269b658f\""
Mar 14 00:16:53.669850 containerd[1999]: time="2026-03-14T00:16:53.669817182Z" level=info msg="StartContainer for \"1db2410a2fb3e53745a11a0aa05598b5506389ed512d2e733d4e15bfcdfb7d16\""
Mar 14 00:16:53.670849 containerd[1999]: time="2026-03-14T00:16:53.670717733Z" level=info msg="CreateContainer within sandbox \"590d29b6048b1ec1f8648d5bdbbbb6263c67aa3adf761e676ea7ae5a729eaca5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"57f4cfac775e4ec40dc7a531242be5cdfe2d2480b0bb9bca857a4ea9d685e459\""
Mar 14 00:16:53.671556 containerd[1999]: time="2026-03-14T00:16:53.671525851Z" level=info msg="StartContainer for \"57f4cfac775e4ec40dc7a531242be5cdfe2d2480b0bb9bca857a4ea9d685e459\""
Mar 14 00:16:53.673087 containerd[1999]: time="2026-03-14T00:16:53.673044833Z" level=info msg="StartContainer for \"1baa645ed778ea66987c29249bf85fa17cfe2cd2b58d0237e5e1aff1269b658f\""
Mar 14 00:16:53.714611 systemd[1]: Started cri-containerd-1baa645ed778ea66987c29249bf85fa17cfe2cd2b58d0237e5e1aff1269b658f.scope - libcontainer container 1baa645ed778ea66987c29249bf85fa17cfe2cd2b58d0237e5e1aff1269b658f.
Mar 14 00:16:53.727786 systemd[1]: Started cri-containerd-1db2410a2fb3e53745a11a0aa05598b5506389ed512d2e733d4e15bfcdfb7d16.scope - libcontainer container 1db2410a2fb3e53745a11a0aa05598b5506389ed512d2e733d4e15bfcdfb7d16.
Mar 14 00:16:53.755562 systemd[1]: Started cri-containerd-57f4cfac775e4ec40dc7a531242be5cdfe2d2480b0bb9bca857a4ea9d685e459.scope - libcontainer container 57f4cfac775e4ec40dc7a531242be5cdfe2d2480b0bb9bca857a4ea9d685e459.
Mar 14 00:16:53.826988 kubelet[2825]: I0314 00:16:53.826695 2825 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-23-179"
Mar 14 00:16:53.828487 kubelet[2825]: E0314 00:16:53.827094 2825 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.23.179:6443/api/v1/nodes\": dial tcp 172.31.23.179:6443: connect: connection refused" node="ip-172-31-23-179"
Mar 14 00:16:53.828604 containerd[1999]: time="2026-03-14T00:16:53.828284952Z" level=info msg="StartContainer for \"57f4cfac775e4ec40dc7a531242be5cdfe2d2480b0bb9bca857a4ea9d685e459\" returns successfully"
Mar 14 00:16:53.841665 containerd[1999]: time="2026-03-14T00:16:53.840921079Z" level=info msg="StartContainer for \"1baa645ed778ea66987c29249bf85fa17cfe2cd2b58d0237e5e1aff1269b658f\" returns successfully"
Mar 14 00:16:53.867338 containerd[1999]: time="2026-03-14T00:16:53.867264624Z" level=info msg="StartContainer for \"1db2410a2fb3e53745a11a0aa05598b5506389ed512d2e733d4e15bfcdfb7d16\" returns successfully"
Mar 14 00:16:54.301774 kubelet[2825]: E0314 00:16:54.301745 2825 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-179\" not found" node="ip-172-31-23-179"
Mar 14 00:16:54.306138 kubelet[2825]: E0314 00:16:54.306113 2825 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-179\" not found" node="ip-172-31-23-179"
Mar 14 00:16:54.311528 kubelet[2825]: E0314 00:16:54.311480 2825 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-179\" not found" node="ip-172-31-23-179"
Mar 14 00:16:55.314586 kubelet[2825]: E0314 00:16:55.314556 2825 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-179\" not found" node="ip-172-31-23-179"
Mar 14 00:16:55.315372 kubelet[2825]: E0314 00:16:55.315351 2825 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-179\" not found" node="ip-172-31-23-179"
Mar 14 00:16:55.429544 kubelet[2825]: I0314 00:16:55.429504 2825 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-23-179"
Mar 14 00:16:55.955532 kubelet[2825]: E0314 00:16:55.955475 2825 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-23-179\" not found" node="ip-172-31-23-179"
Mar 14 00:16:56.050583 kubelet[2825]: E0314 00:16:56.050481 2825 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-23-179.189c8d0678f75f84 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-179,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-179,},FirstTimestamp:2026-03-14 00:16:52.215930756 +0000 UTC m=+0.529706414,LastTimestamp:2026-03-14 00:16:52.215930756 +0000 UTC m=+0.529706414,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-179,}"
Mar 14 00:16:56.100734 kubelet[2825]: I0314 00:16:56.100656 2825 kubelet_node_status.go:77] "Successfully registered node" node="ip-172-31-23-179"
Mar 14 00:16:56.124042 kubelet[2825]: E0314 00:16:56.122940 2825 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-23-179.189c8d067b0f64dc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-179,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-23-179,},FirstTimestamp:2026-03-14 00:16:52.25105942 +0000 UTC m=+0.564835077,LastTimestamp:2026-03-14 00:16:52.25105942 +0000 UTC m=+0.564835077,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-179,}"
Mar 14 00:16:56.147997 kubelet[2825]: I0314 00:16:56.147957 2825 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-23-179"
Mar 14 00:16:56.162065 kubelet[2825]: E0314 00:16:56.162027 2825 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-23-179\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-23-179"
Mar 14 00:16:56.162065 kubelet[2825]: I0314 00:16:56.162065 2825 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-23-179"
Mar 14 00:16:56.166757 kubelet[2825]: E0314 00:16:56.166711 2825 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-23-179\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-23-179"
Mar 14 00:16:56.166757 kubelet[2825]: I0314 00:16:56.166747 2825 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-23-179"
Mar 14 00:16:56.169161 kubelet[2825]: E0314 00:16:56.169126 2825 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-23-179\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-23-179"
Mar 14 00:16:56.209489 kubelet[2825]: I0314 00:16:56.208529 2825 apiserver.go:52] "Watching apiserver"
Mar 14 00:16:56.248435 kubelet[2825]: I0314 00:16:56.248396 2825 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 14 00:16:56.314617 kubelet[2825]: I0314 00:16:56.314586 2825 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-23-179"
Mar 14 00:16:56.315105 kubelet[2825]: I0314 00:16:56.315001 2825 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-23-179"
Mar 14 00:16:56.318494 kubelet[2825]: E0314 00:16:56.318459 2825 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-23-179\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-23-179"
Mar 14 00:16:56.319426 kubelet[2825]: E0314 00:16:56.319383 2825 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-23-179\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-23-179"
Mar 14 00:16:58.159655 systemd[1]: Reloading requested from client PID 3106 ('systemctl') (unit session-9.scope)...
Mar 14 00:16:58.159675 systemd[1]: Reloading...
Mar 14 00:16:58.284432 zram_generator::config[3147]: No configuration found.
Mar 14 00:16:58.416009 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:16:58.519253 systemd[1]: Reloading finished in 358 ms.
Mar 14 00:16:58.565622 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:16:58.589025 systemd[1]: kubelet.service: Deactivated successfully.
Mar 14 00:16:58.589373 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:16:58.593781 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:16:58.915255 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:16:58.926847 (kubelet)[3206]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 14 00:16:58.984455 kubelet[3206]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 00:16:58.994705 kubelet[3206]: I0314 00:16:58.994650 3206 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Mar 14 00:16:58.994705 kubelet[3206]: I0314 00:16:58.994696 3206 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 14 00:16:58.994705 kubelet[3206]: I0314 00:16:58.994717 3206 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 14 00:16:58.994926 kubelet[3206]: I0314 00:16:58.994724 3206 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 14 00:16:58.995105 kubelet[3206]: I0314 00:16:58.995083 3206 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 14 00:16:58.996469 kubelet[3206]: I0314 00:16:58.996433 3206 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 14 00:16:59.005900 kubelet[3206]: I0314 00:16:59.005545 3206 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 14 00:16:59.008852 kubelet[3206]: E0314 00:16:59.008815 3206 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 14 00:16:59.009053 kubelet[3206]: I0314 00:16:59.008882 3206 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 14 00:16:59.012902 kubelet[3206]: I0314 00:16:59.012871 3206 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 14 00:16:59.016346 kubelet[3206]: I0314 00:16:59.014832 3206 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 14 00:16:59.016346 kubelet[3206]: I0314 00:16:59.014882 3206 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-179","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 14 00:16:59.016346 kubelet[3206]: I0314 00:16:59.015218 3206 topology_manager.go:143] "Creating topology manager with none policy"
Mar 14 00:16:59.016346 kubelet[3206]: I0314 00:16:59.015230 3206 container_manager_linux.go:308] "Creating device plugin manager"
Mar 14 00:16:59.016665 kubelet[3206]: I0314 00:16:59.015269 3206 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 14 00:16:59.017614 kubelet[3206]: I0314 00:16:59.017594 3206 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 14 00:16:59.018015 kubelet[3206]: I0314 00:16:59.017990 3206 kubelet.go:482] "Attempting to sync node with API server"
Mar 14 00:16:59.018015 kubelet[3206]: I0314 00:16:59.018017 3206 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 14 00:16:59.018132 kubelet[3206]: I0314 00:16:59.018039 3206 kubelet.go:394] "Adding apiserver pod source"
Mar 14 00:16:59.018132 kubelet[3206]: I0314 00:16:59.018053 3206 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 14 00:16:59.031985 kubelet[3206]: I0314 00:16:59.027879 3206 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 14 00:16:59.031985 kubelet[3206]: I0314 00:16:59.029342 3206 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 14 00:16:59.031985 kubelet[3206]: I0314 00:16:59.029386 3206 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 14 00:16:59.046770 kubelet[3206]: I0314 00:16:59.046741 3206 server.go:1257] "Started kubelet"
Mar 14 00:16:59.050353 kubelet[3206]: I0314 00:16:59.049441 3206 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 14 00:16:59.051770 kubelet[3206]: I0314 00:16:59.051184 3206 server.go:317] "Adding debug handlers to kubelet server"
Mar 14 00:16:59.055540 kubelet[3206]: I0314 00:16:59.053600 3206 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 14 00:16:59.055540 kubelet[3206]: I0314 00:16:59.055149 3206 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 14 00:16:59.059398 kubelet[3206]: I0314 00:16:59.058917 3206 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 14 00:16:59.059895 kubelet[3206]: I0314 00:16:59.059829 3206 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 14 00:16:59.060462 kubelet[3206]: I0314 00:16:59.060427 3206 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 14 00:16:59.064262 kubelet[3206]: I0314 00:16:59.064218 3206 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 14 00:16:59.065351 kubelet[3206]: I0314 00:16:59.065104 3206 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 14 00:16:59.065351 kubelet[3206]: I0314 00:16:59.065286 3206 reconciler.go:29] "Reconciler: start to sync state"
Mar 14 00:16:59.069845 kubelet[3206]: I0314 00:16:59.066920 3206 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 14 00:16:59.073352 kubelet[3206]: I0314 00:16:59.072632 3206 factory.go:223] Registration of the containerd container factory successfully
Mar 14 00:16:59.073352 kubelet[3206]: I0314 00:16:59.072675 3206 factory.go:223] Registration of the systemd container factory successfully
Mar 14 00:16:59.101812 kubelet[3206]: I0314 00:16:59.101768 3206 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 14 00:16:59.117542 update_engine[1963]: I20260314 00:16:59.117486 1963 update_attempter.cc:509] Updating boot flags...
Mar 14 00:16:59.132544 kubelet[3206]: I0314 00:16:59.131468 3206 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 14 00:16:59.132733 kubelet[3206]: I0314 00:16:59.132584 3206 status_manager.go:249] "Starting to sync pod status with apiserver"
Mar 14 00:16:59.132733 kubelet[3206]: I0314 00:16:59.132618 3206 kubelet.go:2501] "Starting kubelet main sync loop"
Mar 14 00:16:59.133052 kubelet[3206]: E0314 00:16:59.133000 3206 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 14 00:16:59.193451 kubelet[3206]: I0314 00:16:59.193280 3206 cpu_manager.go:225] "Starting" policy="none"
Mar 14 00:16:59.195457 kubelet[3206]: I0314 00:16:59.195194 3206 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 14 00:16:59.195457 kubelet[3206]: I0314 00:16:59.195238 3206 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Mar 14 00:16:59.195676 kubelet[3206]: I0314 00:16:59.195481 3206 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
Mar 14 00:16:59.200154 kubelet[3206]: I0314 00:16:59.195517 3206 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
Mar 14 00:16:59.200154 kubelet[3206]: I0314 00:16:59.198118 3206 policy_none.go:50] "Start"
Mar 14 00:16:59.200154 kubelet[3206]: I0314 00:16:59.198134 3206 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 14 00:16:59.200154 kubelet[3206]: I0314 00:16:59.198151 3206 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 14 00:16:59.204925 kubelet[3206]: I0314 00:16:59.204162 3206 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Mar 14 00:16:59.204925 kubelet[3206]: I0314 00:16:59.204200 3206 policy_none.go:44] "Start"
Mar 14 00:16:59.224098 kubelet[3206]: E0314 00:16:59.223687 3206 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 14 00:16:59.229482 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3258)
Mar 14 00:16:59.229591 kubelet[3206]: I0314 00:16:59.228668 3206 eviction_manager.go:194] "Eviction manager: starting control loop"
Mar 14 00:16:59.229591 kubelet[3206]: I0314 00:16:59.228687 3206 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 14 00:16:59.229591 kubelet[3206]: I0314 00:16:59.229294 3206 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Mar 14 00:16:59.233542 kubelet[3206]: E0314 00:16:59.233294 3206 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 14 00:16:59.258913 kubelet[3206]: I0314 00:16:59.258055 3206 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-23-179"
Mar 14 00:16:59.258913 kubelet[3206]: I0314 00:16:59.258804 3206 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-23-179"
Mar 14 00:16:59.264644 kubelet[3206]: I0314 00:16:59.264621 3206 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-23-179"
Mar 14 00:16:59.370499 kubelet[3206]: I0314 00:16:59.370467 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0b0ba5285573e3eb16c1e7c2aef1c88a-ca-certs\") pod \"kube-apiserver-ip-172-31-23-179\" (UID: \"0b0ba5285573e3eb16c1e7c2aef1c88a\") " pod="kube-system/kube-apiserver-ip-172-31-23-179"
Mar 14 00:16:59.372545 kubelet[3206]: I0314 00:16:59.372150 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b0ba5285573e3eb16c1e7c2aef1c88a-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-179\" (UID: \"0b0ba5285573e3eb16c1e7c2aef1c88a\") " pod="kube-system/kube-apiserver-ip-172-31-23-179"
Mar 14 00:16:59.372545 kubelet[3206]: I0314 00:16:59.372205 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/babc6e578363e9fbccf20793bf283114-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-179\" (UID: \"babc6e578363e9fbccf20793bf283114\") " pod="kube-system/kube-controller-manager-ip-172-31-23-179"
Mar 14 00:16:59.372545 kubelet[3206]: I0314 00:16:59.372234 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/babc6e578363e9fbccf20793bf283114-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-179\" (UID: \"babc6e578363e9fbccf20793bf283114\") " pod="kube-system/kube-controller-manager-ip-172-31-23-179"
Mar 14 00:16:59.372545 kubelet[3206]: I0314 00:16:59.372272 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/babc6e578363e9fbccf20793bf283114-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-179\" (UID: \"babc6e578363e9fbccf20793bf283114\") " pod="kube-system/kube-controller-manager-ip-172-31-23-179"
Mar 14 00:16:59.372545 kubelet[3206]: I0314 00:16:59.372302 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1d4221712bc09a2585f9f17eecd62776-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-179\" (UID: \"1d4221712bc09a2585f9f17eecd62776\") "
pod="kube-system/kube-scheduler-ip-172-31-23-179" Mar 14 00:16:59.372885 kubelet[3206]: I0314 00:16:59.372350 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0b0ba5285573e3eb16c1e7c2aef1c88a-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-179\" (UID: \"0b0ba5285573e3eb16c1e7c2aef1c88a\") " pod="kube-system/kube-apiserver-ip-172-31-23-179" Mar 14 00:16:59.372885 kubelet[3206]: I0314 00:16:59.372374 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/babc6e578363e9fbccf20793bf283114-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-179\" (UID: \"babc6e578363e9fbccf20793bf283114\") " pod="kube-system/kube-controller-manager-ip-172-31-23-179" Mar 14 00:16:59.372885 kubelet[3206]: I0314 00:16:59.372404 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/babc6e578363e9fbccf20793bf283114-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-179\" (UID: \"babc6e578363e9fbccf20793bf283114\") " pod="kube-system/kube-controller-manager-ip-172-31-23-179" Mar 14 00:16:59.390470 kubelet[3206]: I0314 00:16:59.389075 3206 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-23-179" Mar 14 00:16:59.425500 kubelet[3206]: I0314 00:16:59.425460 3206 kubelet_node_status.go:123] "Node was previously registered" node="ip-172-31-23-179" Mar 14 00:16:59.426412 kubelet[3206]: I0314 00:16:59.426389 3206 kubelet_node_status.go:77] "Successfully registered node" node="ip-172-31-23-179" Mar 14 00:16:59.544465 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3261) Mar 14 00:16:59.768609 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3261) Mar 14 
00:17:00.025775 kubelet[3206]: I0314 00:17:00.025648 3206 apiserver.go:52] "Watching apiserver" Mar 14 00:17:00.066002 kubelet[3206]: I0314 00:17:00.065960 3206 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 14 00:17:00.186940 kubelet[3206]: I0314 00:17:00.186892 3206 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-23-179" Mar 14 00:17:00.195222 kubelet[3206]: E0314 00:17:00.195180 3206 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-23-179\" already exists" pod="kube-system/kube-apiserver-ip-172-31-23-179" Mar 14 00:17:00.227945 kubelet[3206]: I0314 00:17:00.227860 3206 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-23-179" podStartSLOduration=1.224620313 podStartE2EDuration="1.224620313s" podCreationTimestamp="2026-03-14 00:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:17:00.206492777 +0000 UTC m=+1.273916016" watchObservedRunningTime="2026-03-14 00:17:00.224620313 +0000 UTC m=+1.292043533" Mar 14 00:17:00.269381 kubelet[3206]: I0314 00:17:00.268418 3206 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-23-179" podStartSLOduration=1.2684010350000001 podStartE2EDuration="1.268401035s" podCreationTimestamp="2026-03-14 00:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:17:00.228066913 +0000 UTC m=+1.295490126" watchObservedRunningTime="2026-03-14 00:17:00.268401035 +0000 UTC m=+1.335824251" Mar 14 00:17:00.348460 kubelet[3206]: I0314 00:17:00.348070 3206 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-23-179" 
podStartSLOduration=1.348048962 podStartE2EDuration="1.348048962s" podCreationTimestamp="2026-03-14 00:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:17:00.26886717 +0000 UTC m=+1.336290395" watchObservedRunningTime="2026-03-14 00:17:00.348048962 +0000 UTC m=+1.415472186" Mar 14 00:17:03.767543 kubelet[3206]: I0314 00:17:03.767427 3206 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 14 00:17:03.770065 containerd[1999]: time="2026-03-14T00:17:03.769220389Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 14 00:17:03.770644 kubelet[3206]: I0314 00:17:03.769723 3206 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 14 00:17:04.924571 systemd[1]: Created slice kubepods-besteffort-pod9d7418ba_3d63_44af_b3f0_4f6e2f3c9ee8.slice - libcontainer container kubepods-besteffort-pod9d7418ba_3d63_44af_b3f0_4f6e2f3c9ee8.slice. Mar 14 00:17:04.958918 systemd[1]: Created slice kubepods-besteffort-pod27977bf9_2de0_45bb_a582_e37a3728480b.slice - libcontainer container kubepods-besteffort-pod27977bf9_2de0_45bb_a582_e37a3728480b.slice. 
Mar 14 00:17:05.020411 kubelet[3206]: I0314 00:17:05.019229 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d7418ba-3d63-44af-b3f0-4f6e2f3c9ee8-xtables-lock\") pod \"kube-proxy-2v6ml\" (UID: \"9d7418ba-3d63-44af-b3f0-4f6e2f3c9ee8\") " pod="kube-system/kube-proxy-2v6ml" Mar 14 00:17:05.021142 kubelet[3206]: I0314 00:17:05.020984 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d7418ba-3d63-44af-b3f0-4f6e2f3c9ee8-lib-modules\") pod \"kube-proxy-2v6ml\" (UID: \"9d7418ba-3d63-44af-b3f0-4f6e2f3c9ee8\") " pod="kube-system/kube-proxy-2v6ml" Mar 14 00:17:05.021142 kubelet[3206]: I0314 00:17:05.021055 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9d7418ba-3d63-44af-b3f0-4f6e2f3c9ee8-kube-proxy\") pod \"kube-proxy-2v6ml\" (UID: \"9d7418ba-3d63-44af-b3f0-4f6e2f3c9ee8\") " pod="kube-system/kube-proxy-2v6ml" Mar 14 00:17:05.021142 kubelet[3206]: I0314 00:17:05.021094 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2xjs\" (UniqueName: \"kubernetes.io/projected/9d7418ba-3d63-44af-b3f0-4f6e2f3c9ee8-kube-api-access-p2xjs\") pod \"kube-proxy-2v6ml\" (UID: \"9d7418ba-3d63-44af-b3f0-4f6e2f3c9ee8\") " pod="kube-system/kube-proxy-2v6ml" Mar 14 00:17:05.121504 kubelet[3206]: I0314 00:17:05.121465 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/27977bf9-2de0-45bb-a582-e37a3728480b-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-kd78k\" (UID: \"27977bf9-2de0-45bb-a582-e37a3728480b\") " pod="tigera-operator/tigera-operator-6cf4cccc57-kd78k" Mar 14 00:17:05.121504 kubelet[3206]: I0314 
00:17:05.121511 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgxvj\" (UniqueName: \"kubernetes.io/projected/27977bf9-2de0-45bb-a582-e37a3728480b-kube-api-access-zgxvj\") pod \"tigera-operator-6cf4cccc57-kd78k\" (UID: \"27977bf9-2de0-45bb-a582-e37a3728480b\") " pod="tigera-operator/tigera-operator-6cf4cccc57-kd78k" Mar 14 00:17:05.245622 containerd[1999]: time="2026-03-14T00:17:05.245497721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2v6ml,Uid:9d7418ba-3d63-44af-b3f0-4f6e2f3c9ee8,Namespace:kube-system,Attempt:0,}" Mar 14 00:17:05.271953 containerd[1999]: time="2026-03-14T00:17:05.271515494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-kd78k,Uid:27977bf9-2de0-45bb-a582-e37a3728480b,Namespace:tigera-operator,Attempt:0,}" Mar 14 00:17:05.289392 containerd[1999]: time="2026-03-14T00:17:05.289246578Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:17:05.289392 containerd[1999]: time="2026-03-14T00:17:05.289353348Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:17:05.289392 containerd[1999]: time="2026-03-14T00:17:05.289370909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:17:05.289981 containerd[1999]: time="2026-03-14T00:17:05.289922450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:17:05.325577 systemd[1]: Started cri-containerd-5396754d082a164cca37279344b60f2cc440113b8ae6435fe4717a2f27568651.scope - libcontainer container 5396754d082a164cca37279344b60f2cc440113b8ae6435fe4717a2f27568651. 
Mar 14 00:17:05.337636 containerd[1999]: time="2026-03-14T00:17:05.337456827Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:17:05.337636 containerd[1999]: time="2026-03-14T00:17:05.337592829Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:17:05.337965 containerd[1999]: time="2026-03-14T00:17:05.337616774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:17:05.339739 containerd[1999]: time="2026-03-14T00:17:05.339484107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:17:05.373556 systemd[1]: Started cri-containerd-d642913e94cbd57bafb6e58c05439c8a821654426709bccd69f3ca437aee1bd7.scope - libcontainer container d642913e94cbd57bafb6e58c05439c8a821654426709bccd69f3ca437aee1bd7. 
Mar 14 00:17:05.378047 containerd[1999]: time="2026-03-14T00:17:05.377992448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2v6ml,Uid:9d7418ba-3d63-44af-b3f0-4f6e2f3c9ee8,Namespace:kube-system,Attempt:0,} returns sandbox id \"5396754d082a164cca37279344b60f2cc440113b8ae6435fe4717a2f27568651\"" Mar 14 00:17:05.404635 containerd[1999]: time="2026-03-14T00:17:05.404580414Z" level=info msg="CreateContainer within sandbox \"5396754d082a164cca37279344b60f2cc440113b8ae6435fe4717a2f27568651\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 14 00:17:05.437402 containerd[1999]: time="2026-03-14T00:17:05.437048495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-kd78k,Uid:27977bf9-2de0-45bb-a582-e37a3728480b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d642913e94cbd57bafb6e58c05439c8a821654426709bccd69f3ca437aee1bd7\"" Mar 14 00:17:05.441864 containerd[1999]: time="2026-03-14T00:17:05.441818357Z" level=info msg="CreateContainer within sandbox \"5396754d082a164cca37279344b60f2cc440113b8ae6435fe4717a2f27568651\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c3969c31249f189bdeff9fbf38de865c0d84648af08f11c02da4111cd125a421\"" Mar 14 00:17:05.442662 containerd[1999]: time="2026-03-14T00:17:05.442625951Z" level=info msg="StartContainer for \"c3969c31249f189bdeff9fbf38de865c0d84648af08f11c02da4111cd125a421\"" Mar 14 00:17:05.450587 containerd[1999]: time="2026-03-14T00:17:05.450536580Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 14 00:17:05.484659 systemd[1]: Started cri-containerd-c3969c31249f189bdeff9fbf38de865c0d84648af08f11c02da4111cd125a421.scope - libcontainer container c3969c31249f189bdeff9fbf38de865c0d84648af08f11c02da4111cd125a421. 
Mar 14 00:17:05.523853 containerd[1999]: time="2026-03-14T00:17:05.523546406Z" level=info msg="StartContainer for \"c3969c31249f189bdeff9fbf38de865c0d84648af08f11c02da4111cd125a421\" returns successfully" Mar 14 00:17:06.809409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4001364536.mount: Deactivated successfully. Mar 14 00:17:08.102184 kubelet[3206]: I0314 00:17:08.102107 3206 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-2v6ml" podStartSLOduration=4.101977301 podStartE2EDuration="4.101977301s" podCreationTimestamp="2026-03-14 00:17:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:17:06.213133373 +0000 UTC m=+7.280556595" watchObservedRunningTime="2026-03-14 00:17:08.101977301 +0000 UTC m=+9.169400523" Mar 14 00:17:09.196140 containerd[1999]: time="2026-03-14T00:17:09.196088952Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:17:09.198137 containerd[1999]: time="2026-03-14T00:17:09.197959417Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 14 00:17:09.201231 containerd[1999]: time="2026-03-14T00:17:09.200537243Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:17:09.205923 containerd[1999]: time="2026-03-14T00:17:09.204870370Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:17:09.205923 containerd[1999]: time="2026-03-14T00:17:09.205781457Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id 
\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 3.75508673s" Mar 14 00:17:09.205923 containerd[1999]: time="2026-03-14T00:17:09.205818324Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 14 00:17:09.213623 containerd[1999]: time="2026-03-14T00:17:09.213569670Z" level=info msg="CreateContainer within sandbox \"d642913e94cbd57bafb6e58c05439c8a821654426709bccd69f3ca437aee1bd7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 14 00:17:09.236948 containerd[1999]: time="2026-03-14T00:17:09.236901498Z" level=info msg="CreateContainer within sandbox \"d642913e94cbd57bafb6e58c05439c8a821654426709bccd69f3ca437aee1bd7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b9a020455b29e4c7ab8d4af4306bb31bc47279207d524e8c3d21a2b8ef6f38ad\"" Mar 14 00:17:09.237552 containerd[1999]: time="2026-03-14T00:17:09.237473694Z" level=info msg="StartContainer for \"b9a020455b29e4c7ab8d4af4306bb31bc47279207d524e8c3d21a2b8ef6f38ad\"" Mar 14 00:17:09.275057 systemd[1]: run-containerd-runc-k8s.io-b9a020455b29e4c7ab8d4af4306bb31bc47279207d524e8c3d21a2b8ef6f38ad-runc.f0n51I.mount: Deactivated successfully. Mar 14 00:17:09.286591 systemd[1]: Started cri-containerd-b9a020455b29e4c7ab8d4af4306bb31bc47279207d524e8c3d21a2b8ef6f38ad.scope - libcontainer container b9a020455b29e4c7ab8d4af4306bb31bc47279207d524e8c3d21a2b8ef6f38ad. 
Mar 14 00:17:09.316000 containerd[1999]: time="2026-03-14T00:17:09.315951336Z" level=info msg="StartContainer for \"b9a020455b29e4c7ab8d4af4306bb31bc47279207d524e8c3d21a2b8ef6f38ad\" returns successfully" Mar 14 00:17:10.220150 kubelet[3206]: I0314 00:17:10.219808 3206 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-kd78k" podStartSLOduration=2.460945619 podStartE2EDuration="6.219790325s" podCreationTimestamp="2026-03-14 00:17:04 +0000 UTC" firstStartedPulling="2026-03-14 00:17:05.448099367 +0000 UTC m=+6.515522566" lastFinishedPulling="2026-03-14 00:17:09.20694405 +0000 UTC m=+10.274367272" observedRunningTime="2026-03-14 00:17:10.219169767 +0000 UTC m=+11.286592990" watchObservedRunningTime="2026-03-14 00:17:10.219790325 +0000 UTC m=+11.287213546" Mar 14 00:17:16.410635 sudo[2330]: pam_unix(sudo:session): session closed for user root Mar 14 00:17:16.493963 sshd[2327]: pam_unix(sshd:session): session closed for user core Mar 14 00:17:16.500107 systemd[1]: sshd@8-172.31.23.179:22-68.220.241.50:57914.service: Deactivated successfully. Mar 14 00:17:16.503332 systemd[1]: session-9.scope: Deactivated successfully. Mar 14 00:17:16.504695 systemd[1]: session-9.scope: Consumed 3.944s CPU time, 150.4M memory peak, 0B memory swap peak. Mar 14 00:17:16.507536 systemd-logind[1962]: Session 9 logged out. Waiting for processes to exit. Mar 14 00:17:16.509476 systemd-logind[1962]: Removed session 9. Mar 14 00:17:20.281292 systemd[1]: Created slice kubepods-besteffort-pod1f76c497_7b65_47c2_b5bf_2f6033e5de2e.slice - libcontainer container kubepods-besteffort-pod1f76c497_7b65_47c2_b5bf_2f6033e5de2e.slice. 
Mar 14 00:17:20.328574 kubelet[3206]: I0314 00:17:20.328409 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1f76c497-7b65-47c2-b5bf-2f6033e5de2e-typha-certs\") pod \"calico-typha-669c66c8d7-nvgl5\" (UID: \"1f76c497-7b65-47c2-b5bf-2f6033e5de2e\") " pod="calico-system/calico-typha-669c66c8d7-nvgl5" Mar 14 00:17:20.328574 kubelet[3206]: I0314 00:17:20.328468 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7tbx\" (UniqueName: \"kubernetes.io/projected/1f76c497-7b65-47c2-b5bf-2f6033e5de2e-kube-api-access-k7tbx\") pod \"calico-typha-669c66c8d7-nvgl5\" (UID: \"1f76c497-7b65-47c2-b5bf-2f6033e5de2e\") " pod="calico-system/calico-typha-669c66c8d7-nvgl5" Mar 14 00:17:20.328574 kubelet[3206]: I0314 00:17:20.328503 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f76c497-7b65-47c2-b5bf-2f6033e5de2e-tigera-ca-bundle\") pod \"calico-typha-669c66c8d7-nvgl5\" (UID: \"1f76c497-7b65-47c2-b5bf-2f6033e5de2e\") " pod="calico-system/calico-typha-669c66c8d7-nvgl5" Mar 14 00:17:20.398902 systemd[1]: Created slice kubepods-besteffort-pod04123e15_656e_49c0_a364_59bdb2358313.slice - libcontainer container kubepods-besteffort-pod04123e15_656e_49c0_a364_59bdb2358313.slice. 
Mar 14 00:17:20.430379 kubelet[3206]: I0314 00:17:20.429571 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/04123e15-656e-49c0-a364-59bdb2358313-var-lib-calico\") pod \"calico-node-8rb5w\" (UID: \"04123e15-656e-49c0-a364-59bdb2358313\") " pod="calico-system/calico-node-8rb5w" Mar 14 00:17:20.430379 kubelet[3206]: I0314 00:17:20.429621 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/04123e15-656e-49c0-a364-59bdb2358313-lib-modules\") pod \"calico-node-8rb5w\" (UID: \"04123e15-656e-49c0-a364-59bdb2358313\") " pod="calico-system/calico-node-8rb5w" Mar 14 00:17:20.430379 kubelet[3206]: I0314 00:17:20.429637 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/04123e15-656e-49c0-a364-59bdb2358313-nodeproc\") pod \"calico-node-8rb5w\" (UID: \"04123e15-656e-49c0-a364-59bdb2358313\") " pod="calico-system/calico-node-8rb5w" Mar 14 00:17:20.430379 kubelet[3206]: I0314 00:17:20.429656 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/04123e15-656e-49c0-a364-59bdb2358313-sys-fs\") pod \"calico-node-8rb5w\" (UID: \"04123e15-656e-49c0-a364-59bdb2358313\") " pod="calico-system/calico-node-8rb5w" Mar 14 00:17:20.430379 kubelet[3206]: I0314 00:17:20.429675 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/04123e15-656e-49c0-a364-59bdb2358313-xtables-lock\") pod \"calico-node-8rb5w\" (UID: \"04123e15-656e-49c0-a364-59bdb2358313\") " pod="calico-system/calico-node-8rb5w" Mar 14 00:17:20.430639 kubelet[3206]: I0314 00:17:20.429712 3206 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/04123e15-656e-49c0-a364-59bdb2358313-cni-net-dir\") pod \"calico-node-8rb5w\" (UID: \"04123e15-656e-49c0-a364-59bdb2358313\") " pod="calico-system/calico-node-8rb5w" Mar 14 00:17:20.430639 kubelet[3206]: I0314 00:17:20.429732 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/04123e15-656e-49c0-a364-59bdb2358313-policysync\") pod \"calico-node-8rb5w\" (UID: \"04123e15-656e-49c0-a364-59bdb2358313\") " pod="calico-system/calico-node-8rb5w" Mar 14 00:17:20.430639 kubelet[3206]: I0314 00:17:20.429753 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/04123e15-656e-49c0-a364-59bdb2358313-var-run-calico\") pod \"calico-node-8rb5w\" (UID: \"04123e15-656e-49c0-a364-59bdb2358313\") " pod="calico-system/calico-node-8rb5w" Mar 14 00:17:20.430639 kubelet[3206]: I0314 00:17:20.429769 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkqhp\" (UniqueName: \"kubernetes.io/projected/04123e15-656e-49c0-a364-59bdb2358313-kube-api-access-tkqhp\") pod \"calico-node-8rb5w\" (UID: \"04123e15-656e-49c0-a364-59bdb2358313\") " pod="calico-system/calico-node-8rb5w" Mar 14 00:17:20.430639 kubelet[3206]: I0314 00:17:20.429787 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/04123e15-656e-49c0-a364-59bdb2358313-cni-bin-dir\") pod \"calico-node-8rb5w\" (UID: \"04123e15-656e-49c0-a364-59bdb2358313\") " pod="calico-system/calico-node-8rb5w" Mar 14 00:17:20.430800 kubelet[3206]: I0314 00:17:20.429803 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bpffs\" (UniqueName: \"kubernetes.io/host-path/04123e15-656e-49c0-a364-59bdb2358313-bpffs\") pod \"calico-node-8rb5w\" (UID: \"04123e15-656e-49c0-a364-59bdb2358313\") " pod="calico-system/calico-node-8rb5w" Mar 14 00:17:20.430800 kubelet[3206]: I0314 00:17:20.429823 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/04123e15-656e-49c0-a364-59bdb2358313-cni-log-dir\") pod \"calico-node-8rb5w\" (UID: \"04123e15-656e-49c0-a364-59bdb2358313\") " pod="calico-system/calico-node-8rb5w" Mar 14 00:17:20.430800 kubelet[3206]: I0314 00:17:20.429844 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/04123e15-656e-49c0-a364-59bdb2358313-flexvol-driver-host\") pod \"calico-node-8rb5w\" (UID: \"04123e15-656e-49c0-a364-59bdb2358313\") " pod="calico-system/calico-node-8rb5w" Mar 14 00:17:20.430800 kubelet[3206]: I0314 00:17:20.429862 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/04123e15-656e-49c0-a364-59bdb2358313-node-certs\") pod \"calico-node-8rb5w\" (UID: \"04123e15-656e-49c0-a364-59bdb2358313\") " pod="calico-system/calico-node-8rb5w" Mar 14 00:17:20.430800 kubelet[3206]: I0314 00:17:20.429876 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04123e15-656e-49c0-a364-59bdb2358313-tigera-ca-bundle\") pod \"calico-node-8rb5w\" (UID: \"04123e15-656e-49c0-a364-59bdb2358313\") " pod="calico-system/calico-node-8rb5w" Mar 14 00:17:20.528346 kubelet[3206]: E0314 00:17:20.526909 3206 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwsn2" podUID="69ad5735-6879-4268-98e4-4112168b29a6" Mar 14 00:17:20.532576 kubelet[3206]: E0314 00:17:20.532545 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.534418 kubelet[3206]: W0314 00:17:20.532731 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.534418 kubelet[3206]: E0314 00:17:20.532762 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.536114 kubelet[3206]: E0314 00:17:20.534825 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.536114 kubelet[3206]: W0314 00:17:20.534846 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.536114 kubelet[3206]: E0314 00:17:20.534869 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.536813 kubelet[3206]: E0314 00:17:20.536586 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.536813 kubelet[3206]: W0314 00:17:20.536617 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.536813 kubelet[3206]: E0314 00:17:20.536640 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.537400 kubelet[3206]: E0314 00:17:20.537209 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.537400 kubelet[3206]: W0314 00:17:20.537224 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.537400 kubelet[3206]: E0314 00:17:20.537259 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.538123 kubelet[3206]: E0314 00:17:20.537777 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.538123 kubelet[3206]: W0314 00:17:20.537818 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.538123 kubelet[3206]: E0314 00:17:20.537834 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.543074 kubelet[3206]: E0314 00:17:20.542904 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.543074 kubelet[3206]: W0314 00:17:20.542925 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.543074 kubelet[3206]: E0314 00:17:20.542947 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.543661 kubelet[3206]: E0314 00:17:20.543559 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.543661 kubelet[3206]: W0314 00:17:20.543589 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.543661 kubelet[3206]: E0314 00:17:20.543607 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.545697 kubelet[3206]: E0314 00:17:20.545544 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.545697 kubelet[3206]: W0314 00:17:20.545563 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.545697 kubelet[3206]: E0314 00:17:20.545583 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.551745 kubelet[3206]: E0314 00:17:20.551645 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.551745 kubelet[3206]: W0314 00:17:20.551684 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.551745 kubelet[3206]: E0314 00:17:20.551710 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.552432 kubelet[3206]: E0314 00:17:20.552409 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.552432 kubelet[3206]: W0314 00:17:20.552431 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.552652 kubelet[3206]: E0314 00:17:20.552451 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.553423 kubelet[3206]: E0314 00:17:20.553404 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.553423 kubelet[3206]: W0314 00:17:20.553421 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.553633 kubelet[3206]: E0314 00:17:20.553438 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.553718 kubelet[3206]: E0314 00:17:20.553684 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.553718 kubelet[3206]: W0314 00:17:20.553694 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.553718 kubelet[3206]: E0314 00:17:20.553708 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.553965 kubelet[3206]: E0314 00:17:20.553943 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.553965 kubelet[3206]: W0314 00:17:20.553952 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.554123 kubelet[3206]: E0314 00:17:20.553979 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.555051 kubelet[3206]: E0314 00:17:20.555033 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.555051 kubelet[3206]: W0314 00:17:20.555051 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.555294 kubelet[3206]: E0314 00:17:20.555066 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.555424 kubelet[3206]: E0314 00:17:20.555295 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.555424 kubelet[3206]: W0314 00:17:20.555304 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.555424 kubelet[3206]: E0314 00:17:20.555318 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.555584 kubelet[3206]: E0314 00:17:20.555553 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.555584 kubelet[3206]: W0314 00:17:20.555563 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.555584 kubelet[3206]: E0314 00:17:20.555575 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.556065 kubelet[3206]: E0314 00:17:20.555934 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.556065 kubelet[3206]: W0314 00:17:20.555946 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.556065 kubelet[3206]: E0314 00:17:20.555962 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.557050 kubelet[3206]: E0314 00:17:20.557033 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.557176 kubelet[3206]: W0314 00:17:20.557050 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.557176 kubelet[3206]: E0314 00:17:20.557082 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.557355 kubelet[3206]: E0314 00:17:20.557310 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.557355 kubelet[3206]: W0314 00:17:20.557332 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.557355 kubelet[3206]: E0314 00:17:20.557344 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.557574 kubelet[3206]: E0314 00:17:20.557553 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.557574 kubelet[3206]: W0314 00:17:20.557567 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.557747 kubelet[3206]: E0314 00:17:20.557579 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.558463 kubelet[3206]: E0314 00:17:20.558447 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.558463 kubelet[3206]: W0314 00:17:20.558462 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.558636 kubelet[3206]: E0314 00:17:20.558477 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.558778 kubelet[3206]: E0314 00:17:20.558694 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.558778 kubelet[3206]: W0314 00:17:20.558714 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.558778 kubelet[3206]: E0314 00:17:20.558726 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.561821 kubelet[3206]: E0314 00:17:20.560317 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.561821 kubelet[3206]: W0314 00:17:20.560348 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.561821 kubelet[3206]: E0314 00:17:20.560366 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.561821 kubelet[3206]: E0314 00:17:20.560633 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.561821 kubelet[3206]: W0314 00:17:20.560644 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.561821 kubelet[3206]: E0314 00:17:20.560657 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.561821 kubelet[3206]: E0314 00:17:20.560892 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.561821 kubelet[3206]: W0314 00:17:20.560902 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.561821 kubelet[3206]: E0314 00:17:20.560913 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.561821 kubelet[3206]: E0314 00:17:20.561124 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.562523 kubelet[3206]: W0314 00:17:20.561132 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.562523 kubelet[3206]: E0314 00:17:20.561144 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.575753 kubelet[3206]: E0314 00:17:20.575719 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.575753 kubelet[3206]: W0314 00:17:20.575750 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.575915 kubelet[3206]: E0314 00:17:20.575779 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.598109 containerd[1999]: time="2026-03-14T00:17:20.597976565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-669c66c8d7-nvgl5,Uid:1f76c497-7b65-47c2-b5bf-2f6033e5de2e,Namespace:calico-system,Attempt:0,}" Mar 14 00:17:20.619185 kubelet[3206]: E0314 00:17:20.619001 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.619185 kubelet[3206]: W0314 00:17:20.619022 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.619185 kubelet[3206]: E0314 00:17:20.619045 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.623349 kubelet[3206]: E0314 00:17:20.620535 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.623349 kubelet[3206]: W0314 00:17:20.620557 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.623349 kubelet[3206]: E0314 00:17:20.620577 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.623349 kubelet[3206]: E0314 00:17:20.622557 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.623349 kubelet[3206]: W0314 00:17:20.622571 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.623349 kubelet[3206]: E0314 00:17:20.622588 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.623699 kubelet[3206]: E0314 00:17:20.623386 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.623699 kubelet[3206]: W0314 00:17:20.623397 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.623699 kubelet[3206]: E0314 00:17:20.623412 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.626377 kubelet[3206]: E0314 00:17:20.624877 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.626377 kubelet[3206]: W0314 00:17:20.624893 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.626377 kubelet[3206]: E0314 00:17:20.624908 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.626377 kubelet[3206]: E0314 00:17:20.625483 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.626377 kubelet[3206]: W0314 00:17:20.625497 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.626377 kubelet[3206]: E0314 00:17:20.625510 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.627038 kubelet[3206]: E0314 00:17:20.627014 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.627128 kubelet[3206]: W0314 00:17:20.627034 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.627128 kubelet[3206]: E0314 00:17:20.627058 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.627314 kubelet[3206]: E0314 00:17:20.627292 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.627314 kubelet[3206]: W0314 00:17:20.627307 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.627468 kubelet[3206]: E0314 00:17:20.627330 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.627609 kubelet[3206]: E0314 00:17:20.627593 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.627667 kubelet[3206]: W0314 00:17:20.627609 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.627667 kubelet[3206]: E0314 00:17:20.627622 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.628802 kubelet[3206]: E0314 00:17:20.627848 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.628802 kubelet[3206]: W0314 00:17:20.627861 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.628802 kubelet[3206]: E0314 00:17:20.627873 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.628802 kubelet[3206]: E0314 00:17:20.628289 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.628802 kubelet[3206]: W0314 00:17:20.628303 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.628802 kubelet[3206]: E0314 00:17:20.628316 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.630740 kubelet[3206]: E0314 00:17:20.628926 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.630740 kubelet[3206]: W0314 00:17:20.628937 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.630740 kubelet[3206]: E0314 00:17:20.628951 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.630740 kubelet[3206]: E0314 00:17:20.629531 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.630740 kubelet[3206]: W0314 00:17:20.629635 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.630740 kubelet[3206]: E0314 00:17:20.629650 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.630740 kubelet[3206]: E0314 00:17:20.630125 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.630740 kubelet[3206]: W0314 00:17:20.630136 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.630740 kubelet[3206]: E0314 00:17:20.630150 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.631103 kubelet[3206]: E0314 00:17:20.630789 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.631103 kubelet[3206]: W0314 00:17:20.630800 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.631103 kubelet[3206]: E0314 00:17:20.630815 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.631777 kubelet[3206]: E0314 00:17:20.631421 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.631777 kubelet[3206]: W0314 00:17:20.631439 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.631777 kubelet[3206]: E0314 00:17:20.631452 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.635347 kubelet[3206]: E0314 00:17:20.632125 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.635347 kubelet[3206]: W0314 00:17:20.632152 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.635347 kubelet[3206]: E0314 00:17:20.632166 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.635347 kubelet[3206]: E0314 00:17:20.632813 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.635347 kubelet[3206]: W0314 00:17:20.632824 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.635347 kubelet[3206]: E0314 00:17:20.632837 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.635347 kubelet[3206]: E0314 00:17:20.633444 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.635347 kubelet[3206]: W0314 00:17:20.633455 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.635347 kubelet[3206]: E0314 00:17:20.633469 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.635347 kubelet[3206]: E0314 00:17:20.634116 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.635837 kubelet[3206]: W0314 00:17:20.634127 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.635837 kubelet[3206]: E0314 00:17:20.634140 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.635927 kubelet[3206]: E0314 00:17:20.635908 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.635927 kubelet[3206]: W0314 00:17:20.635920 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.636003 kubelet[3206]: E0314 00:17:20.635935 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.636003 kubelet[3206]: I0314 00:17:20.635966 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69ad5735-6879-4268-98e4-4112168b29a6-kubelet-dir\") pod \"csi-node-driver-rwsn2\" (UID: \"69ad5735-6879-4268-98e4-4112168b29a6\") " pod="calico-system/csi-node-driver-rwsn2" Mar 14 00:17:20.638421 kubelet[3206]: E0314 00:17:20.638358 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.638421 kubelet[3206]: W0314 00:17:20.638381 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.638421 kubelet[3206]: E0314 00:17:20.638400 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.638622 kubelet[3206]: I0314 00:17:20.638434 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/69ad5735-6879-4268-98e4-4112168b29a6-varrun\") pod \"csi-node-driver-rwsn2\" (UID: \"69ad5735-6879-4268-98e4-4112168b29a6\") " pod="calico-system/csi-node-driver-rwsn2" Mar 14 00:17:20.640362 kubelet[3206]: E0314 00:17:20.639774 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.640362 kubelet[3206]: W0314 00:17:20.639793 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.640362 kubelet[3206]: E0314 00:17:20.639810 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.640362 kubelet[3206]: I0314 00:17:20.639914 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/69ad5735-6879-4268-98e4-4112168b29a6-registration-dir\") pod \"csi-node-driver-rwsn2\" (UID: \"69ad5735-6879-4268-98e4-4112168b29a6\") " pod="calico-system/csi-node-driver-rwsn2" Mar 14 00:17:20.640607 kubelet[3206]: E0314 00:17:20.640583 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.642475 kubelet[3206]: W0314 00:17:20.641392 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.642475 kubelet[3206]: E0314 00:17:20.641421 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.642475 kubelet[3206]: E0314 00:17:20.641756 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.642475 kubelet[3206]: W0314 00:17:20.641766 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.642475 kubelet[3206]: E0314 00:17:20.641779 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.642764 kubelet[3206]: E0314 00:17:20.642674 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.642764 kubelet[3206]: W0314 00:17:20.642688 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.642764 kubelet[3206]: E0314 00:17:20.642703 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.643242 kubelet[3206]: I0314 00:17:20.643190 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/69ad5735-6879-4268-98e4-4112168b29a6-socket-dir\") pod \"csi-node-driver-rwsn2\" (UID: \"69ad5735-6879-4268-98e4-4112168b29a6\") " pod="calico-system/csi-node-driver-rwsn2" Mar 14 00:17:20.644248 kubelet[3206]: E0314 00:17:20.643778 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.644248 kubelet[3206]: W0314 00:17:20.643793 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.644248 kubelet[3206]: E0314 00:17:20.643809 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.645926 kubelet[3206]: E0314 00:17:20.644710 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.645926 kubelet[3206]: W0314 00:17:20.644724 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.645926 kubelet[3206]: E0314 00:17:20.644738 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.645926 kubelet[3206]: E0314 00:17:20.645863 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.645926 kubelet[3206]: W0314 00:17:20.645876 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.645926 kubelet[3206]: E0314 00:17:20.645890 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.646208 kubelet[3206]: I0314 00:17:20.646109 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f2mk\" (UniqueName: \"kubernetes.io/projected/69ad5735-6879-4268-98e4-4112168b29a6-kube-api-access-7f2mk\") pod \"csi-node-driver-rwsn2\" (UID: \"69ad5735-6879-4268-98e4-4112168b29a6\") " pod="calico-system/csi-node-driver-rwsn2" Mar 14 00:17:20.647675 kubelet[3206]: E0314 00:17:20.647443 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.647675 kubelet[3206]: W0314 00:17:20.647459 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.647675 kubelet[3206]: E0314 00:17:20.647475 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.647846 kubelet[3206]: E0314 00:17:20.647756 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.647846 kubelet[3206]: W0314 00:17:20.647767 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.647846 kubelet[3206]: E0314 00:17:20.647780 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.648888 kubelet[3206]: E0314 00:17:20.648617 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.648888 kubelet[3206]: W0314 00:17:20.648631 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.648888 kubelet[3206]: E0314 00:17:20.648645 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.650485 kubelet[3206]: E0314 00:17:20.650467 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.650485 kubelet[3206]: W0314 00:17:20.650484 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.650596 kubelet[3206]: E0314 00:17:20.650500 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.653269 kubelet[3206]: E0314 00:17:20.652391 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.653269 kubelet[3206]: W0314 00:17:20.652410 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.653269 kubelet[3206]: E0314 00:17:20.652425 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.653269 kubelet[3206]: E0314 00:17:20.652715 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.653269 kubelet[3206]: W0314 00:17:20.652725 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.653269 kubelet[3206]: E0314 00:17:20.652738 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.688846 containerd[1999]: time="2026-03-14T00:17:20.688602195Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:17:20.688846 containerd[1999]: time="2026-03-14T00:17:20.688669299Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:17:20.688846 containerd[1999]: time="2026-03-14T00:17:20.688684595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:17:20.689105 containerd[1999]: time="2026-03-14T00:17:20.688948782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:17:20.714610 systemd[1]: Started cri-containerd-4b4d2371bb78b32bc6b6d481584baf82b39c3e1bd1d3a77690b0e1e766f59866.scope - libcontainer container 4b4d2371bb78b32bc6b6d481584baf82b39c3e1bd1d3a77690b0e1e766f59866. Mar 14 00:17:20.718776 containerd[1999]: time="2026-03-14T00:17:20.718730622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8rb5w,Uid:04123e15-656e-49c0-a364-59bdb2358313,Namespace:calico-system,Attempt:0,}" Mar 14 00:17:20.750469 kubelet[3206]: E0314 00:17:20.750441 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.751107 kubelet[3206]: W0314 00:17:20.750695 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.751107 kubelet[3206]: E0314 00:17:20.750721 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.751926 kubelet[3206]: E0314 00:17:20.751722 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.751926 kubelet[3206]: W0314 00:17:20.751755 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.751926 kubelet[3206]: E0314 00:17:20.751776 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.752923 kubelet[3206]: E0314 00:17:20.752829 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.752923 kubelet[3206]: W0314 00:17:20.752846 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.752923 kubelet[3206]: E0314 00:17:20.752866 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.753796 kubelet[3206]: E0314 00:17:20.753461 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.753796 kubelet[3206]: W0314 00:17:20.753477 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.753796 kubelet[3206]: E0314 00:17:20.753494 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.754895 kubelet[3206]: E0314 00:17:20.754522 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.754895 kubelet[3206]: W0314 00:17:20.754674 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.754895 kubelet[3206]: E0314 00:17:20.754692 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.758914 kubelet[3206]: E0314 00:17:20.758580 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.758914 kubelet[3206]: W0314 00:17:20.758605 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.758914 kubelet[3206]: E0314 00:17:20.758632 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.759560 kubelet[3206]: E0314 00:17:20.758946 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.759560 kubelet[3206]: W0314 00:17:20.758956 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.759560 kubelet[3206]: E0314 00:17:20.758969 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.759560 kubelet[3206]: E0314 00:17:20.759199 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.759560 kubelet[3206]: W0314 00:17:20.759210 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.759560 kubelet[3206]: E0314 00:17:20.759223 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.761500 kubelet[3206]: E0314 00:17:20.759582 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.761500 kubelet[3206]: W0314 00:17:20.759594 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.761500 kubelet[3206]: E0314 00:17:20.759607 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.761500 kubelet[3206]: E0314 00:17:20.759912 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.761500 kubelet[3206]: W0314 00:17:20.759941 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.761500 kubelet[3206]: E0314 00:17:20.759959 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.761500 kubelet[3206]: E0314 00:17:20.760419 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.761500 kubelet[3206]: W0314 00:17:20.760431 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.761500 kubelet[3206]: E0314 00:17:20.760443 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.763173 kubelet[3206]: E0314 00:17:20.761816 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.763173 kubelet[3206]: W0314 00:17:20.761827 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.763173 kubelet[3206]: E0314 00:17:20.761840 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.763173 kubelet[3206]: E0314 00:17:20.763052 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.763173 kubelet[3206]: W0314 00:17:20.763062 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.763173 kubelet[3206]: E0314 00:17:20.763072 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.769613 kubelet[3206]: E0314 00:17:20.768970 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.769613 kubelet[3206]: W0314 00:17:20.768995 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.769613 kubelet[3206]: E0314 00:17:20.769025 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.769613 kubelet[3206]: E0314 00:17:20.769434 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.769613 kubelet[3206]: W0314 00:17:20.769447 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.769613 kubelet[3206]: E0314 00:17:20.769466 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.770769 kubelet[3206]: E0314 00:17:20.770634 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.770769 kubelet[3206]: W0314 00:17:20.770647 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.770769 kubelet[3206]: E0314 00:17:20.770660 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.771477 kubelet[3206]: E0314 00:17:20.771455 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.771709 kubelet[3206]: W0314 00:17:20.771531 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.771709 kubelet[3206]: E0314 00:17:20.771549 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.772484 kubelet[3206]: E0314 00:17:20.772452 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.772709 kubelet[3206]: W0314 00:17:20.772614 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.772709 kubelet[3206]: E0314 00:17:20.772635 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.773169 kubelet[3206]: E0314 00:17:20.773158 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.773273 kubelet[3206]: W0314 00:17:20.773261 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.773387 kubelet[3206]: E0314 00:17:20.773373 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.773722 kubelet[3206]: E0314 00:17:20.773712 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.773866 kubelet[3206]: W0314 00:17:20.773843 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.773976 kubelet[3206]: E0314 00:17:20.773958 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.775278 kubelet[3206]: E0314 00:17:20.775264 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.775398 kubelet[3206]: W0314 00:17:20.775384 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.775488 kubelet[3206]: E0314 00:17:20.775475 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.775973 kubelet[3206]: E0314 00:17:20.775907 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.776560 kubelet[3206]: W0314 00:17:20.776543 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.776657 kubelet[3206]: E0314 00:17:20.776645 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.777212 kubelet[3206]: E0314 00:17:20.777125 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.777461 kubelet[3206]: W0314 00:17:20.777435 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.778495 kubelet[3206]: E0314 00:17:20.778479 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.779214 kubelet[3206]: E0314 00:17:20.779068 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.779214 kubelet[3206]: W0314 00:17:20.779082 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.779214 kubelet[3206]: E0314 00:17:20.779098 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.783488 kubelet[3206]: E0314 00:17:20.783426 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.783488 kubelet[3206]: W0314 00:17:20.783442 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.783488 kubelet[3206]: E0314 00:17:20.783456 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:17:20.790365 kubelet[3206]: E0314 00:17:20.789216 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:20.790516 kubelet[3206]: W0314 00:17:20.790493 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:20.790635 kubelet[3206]: E0314 00:17:20.790581 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:17:20.811015 containerd[1999]: time="2026-03-14T00:17:20.810083169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:17:20.811015 containerd[1999]: time="2026-03-14T00:17:20.810169941Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:17:20.811015 containerd[1999]: time="2026-03-14T00:17:20.810190941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:17:20.814432 containerd[1999]: time="2026-03-14T00:17:20.814319106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-669c66c8d7-nvgl5,Uid:1f76c497-7b65-47c2-b5bf-2f6033e5de2e,Namespace:calico-system,Attempt:0,} returns sandbox id \"4b4d2371bb78b32bc6b6d481584baf82b39c3e1bd1d3a77690b0e1e766f59866\"" Mar 14 00:17:20.815056 containerd[1999]: time="2026-03-14T00:17:20.814972021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:17:20.831863 containerd[1999]: time="2026-03-14T00:17:20.831662259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 14 00:17:20.849621 systemd[1]: Started cri-containerd-e2e29d0f6fe5a9b84d139e5de317979e59d864a5053fac9919274b0b830c2d40.scope - libcontainer container e2e29d0f6fe5a9b84d139e5de317979e59d864a5053fac9919274b0b830c2d40. Mar 14 00:17:20.896411 containerd[1999]: time="2026-03-14T00:17:20.896366806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8rb5w,Uid:04123e15-656e-49c0-a364-59bdb2358313,Namespace:calico-system,Attempt:0,} returns sandbox id \"e2e29d0f6fe5a9b84d139e5de317979e59d864a5053fac9919274b0b830c2d40\"" Mar 14 00:17:22.133647 kubelet[3206]: E0314 00:17:22.133596 3206 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwsn2" podUID="69ad5735-6879-4268-98e4-4112168b29a6" Mar 14 00:17:22.250335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1325991840.mount: Deactivated successfully. 
Mar 14 00:17:24.133609 kubelet[3206]: E0314 00:17:24.133564 3206 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwsn2" podUID="69ad5735-6879-4268-98e4-4112168b29a6" Mar 14 00:17:24.554540 containerd[1999]: time="2026-03-14T00:17:24.554185276Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:17:24.555899 containerd[1999]: time="2026-03-14T00:17:24.555740426Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Mar 14 00:17:24.556973 containerd[1999]: time="2026-03-14T00:17:24.556910421Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:17:24.559483 containerd[1999]: time="2026-03-14T00:17:24.559419623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:17:24.560403 containerd[1999]: time="2026-03-14T00:17:24.560355557Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 3.728048259s" Mar 14 00:17:24.560600 containerd[1999]: time="2026-03-14T00:17:24.560404305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 14 00:17:24.571902 containerd[1999]: time="2026-03-14T00:17:24.570833721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 14 00:17:24.589207 containerd[1999]: time="2026-03-14T00:17:24.589164253Z" level=info msg="CreateContainer within sandbox \"4b4d2371bb78b32bc6b6d481584baf82b39c3e1bd1d3a77690b0e1e766f59866\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 14 00:17:24.633026 containerd[1999]: time="2026-03-14T00:17:24.632978774Z" level=info msg="CreateContainer within sandbox \"4b4d2371bb78b32bc6b6d481584baf82b39c3e1bd1d3a77690b0e1e766f59866\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4830b19a8aea20939af7e1ed251180f6655761fa481c9452284d5d9f78a9cf52\"" Mar 14 00:17:24.635377 containerd[1999]: time="2026-03-14T00:17:24.633969712Z" level=info msg="StartContainer for \"4830b19a8aea20939af7e1ed251180f6655761fa481c9452284d5d9f78a9cf52\"" Mar 14 00:17:24.711595 systemd[1]: Started cri-containerd-4830b19a8aea20939af7e1ed251180f6655761fa481c9452284d5d9f78a9cf52.scope - libcontainer container 4830b19a8aea20939af7e1ed251180f6655761fa481c9452284d5d9f78a9cf52. 
Mar 14 00:17:24.761980 containerd[1999]: time="2026-03-14T00:17:24.761932248Z" level=info msg="StartContainer for \"4830b19a8aea20939af7e1ed251180f6655761fa481c9452284d5d9f78a9cf52\" returns successfully" Mar 14 00:17:25.285171 kubelet[3206]: I0314 00:17:25.285075 3206 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-669c66c8d7-nvgl5" podStartSLOduration=1.552486589 podStartE2EDuration="5.285058655s" podCreationTimestamp="2026-03-14 00:17:20 +0000 UTC" firstStartedPulling="2026-03-14 00:17:20.82927269 +0000 UTC m=+21.896695905" lastFinishedPulling="2026-03-14 00:17:24.561844764 +0000 UTC m=+25.629267971" observedRunningTime="2026-03-14 00:17:25.283720159 +0000 UTC m=+26.351143381" watchObservedRunningTime="2026-03-14 00:17:25.285058655 +0000 UTC m=+26.352481877" Mar 14 00:17:25.369670 kubelet[3206]: E0314 00:17:25.369630 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:17:25.369670 kubelet[3206]: W0314 00:17:25.369661 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:17:25.369937 kubelet[3206]: E0314 00:17:25.369693 3206 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" 
[the three FlexVolume probe messages above (driver-call.go:262, driver-call.go:149, plugins.go:697) repeat a further 32 times, 00:17:25.370002 through 00:17:25.398937]
Mar 14 00:17:25.940653 containerd[1999]: time="2026-03-14T00:17:25.940576629Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:17:25.941910 containerd[1999]: time="2026-03-14T00:17:25.941776942Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 14 00:17:25.944348 containerd[1999]: time="2026-03-14T00:17:25.942875183Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:17:25.945442 containerd[1999]: time="2026-03-14T00:17:25.945403931Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:17:25.946234 containerd[1999]: time="2026-03-14T00:17:25.946197401Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.37530388s" Mar 14 00:17:25.946384 containerd[1999]: time="2026-03-14T00:17:25.946362712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 14 00:17:25.951660 containerd[1999]: time="2026-03-14T00:17:25.951619665Z" level=info msg="CreateContainer within sandbox \"e2e29d0f6fe5a9b84d139e5de317979e59d864a5053fac9919274b0b830c2d40\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 14 00:17:25.977221 containerd[1999]: time="2026-03-14T00:17:25.977175687Z" level=info msg="CreateContainer within sandbox \"e2e29d0f6fe5a9b84d139e5de317979e59d864a5053fac9919274b0b830c2d40\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"057195936d29baca31cda269433609da25e0d3da05e41aa5547f6348eb4b7b85\"" Mar 14 00:17:25.978125 containerd[1999]: time="2026-03-14T00:17:25.978083543Z" level=info msg="StartContainer for \"057195936d29baca31cda269433609da25e0d3da05e41aa5547f6348eb4b7b85\"" Mar 14 00:17:26.027254 systemd[1]: Started cri-containerd-057195936d29baca31cda269433609da25e0d3da05e41aa5547f6348eb4b7b85.scope - libcontainer container 057195936d29baca31cda269433609da25e0d3da05e41aa5547f6348eb4b7b85. Mar 14 00:17:26.063150 containerd[1999]: time="2026-03-14T00:17:26.063102212Z" level=info msg="StartContainer for \"057195936d29baca31cda269433609da25e0d3da05e41aa5547f6348eb4b7b85\" returns successfully" Mar 14 00:17:26.077038 systemd[1]: cri-containerd-057195936d29baca31cda269433609da25e0d3da05e41aa5547f6348eb4b7b85.scope: Deactivated successfully. 
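The FlexVolume probe failures earlier occur because kubelet scans the plugin directory before this flexvol-driver init container has installed the `uds` binary. A minimal readiness check, assuming only the driver path exactly as it appears in the log; the helper itself is illustrative:

```python
import os

# Driver path kubelet probes in the log above ("nodeagent~uds/uds"); the
# flexvol-driver init container is what installs this binary on the host.
PLUGIN_BIN = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

def flexvolume_driver_ready(path: str = PLUGIN_BIN) -> bool:
    """True once the driver binary exists and is executable."""
    return os.path.isfile(path) and os.access(path, os.X_OK)
```

Until this returns True, kubelet's periodic plugin probe keeps emitting the "executable file not found in $PATH" triplet seen above.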
Mar 14 00:17:26.134132 kubelet[3206]: E0314 00:17:26.134083 3206 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwsn2" podUID="69ad5735-6879-4268-98e4-4112168b29a6" Mar 14 00:17:26.170217 containerd[1999]: time="2026-03-14T00:17:26.155976532Z" level=info msg="shim disconnected" id=057195936d29baca31cda269433609da25e0d3da05e41aa5547f6348eb4b7b85 namespace=k8s.io Mar 14 00:17:26.170217 containerd[1999]: time="2026-03-14T00:17:26.170209600Z" level=warning msg="cleaning up after shim disconnected" id=057195936d29baca31cda269433609da25e0d3da05e41aa5547f6348eb4b7b85 namespace=k8s.io Mar 14 00:17:26.170577 containerd[1999]: time="2026-03-14T00:17:26.170229713Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:17:26.274979 kubelet[3206]: I0314 00:17:26.274862 3206 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:17:26.277648 containerd[1999]: time="2026-03-14T00:17:26.277609763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 14 00:17:26.574787 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-057195936d29baca31cda269433609da25e0d3da05e41aa5547f6348eb4b7b85-rootfs.mount: Deactivated successfully. 
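The csi-node-driver-rwsn2 sync errors recur on every kubelet poll until a CNI config appears. A sketch for tallying how many times each pod hit the "cni plugin not initialized" error — the sample entries mirror the log's shape (pod name and UID are taken from it); the journal is assumed to have been captured to a string:

```python
import re
from collections import Counter

# Two abbreviated kubelet entries in the shape seen above.
JOURNAL = '''\
Mar 14 00:17:24.133609 kubelet[3206]: E0314 ... cni plugin not initialized" pod="calico-system/csi-node-driver-rwsn2" podUID="69ad5735-6879-4268-98e4-4112168b29a6"
Mar 14 00:17:26.134132 kubelet[3206]: E0314 ... cni plugin not initialized" pod="calico-system/csi-node-driver-rwsn2" podUID="69ad5735-6879-4268-98e4-4112168b29a6"
'''

POD_RE = re.compile(r'cni plugin not initialized.*?pod="(?P<pod>[^"]+)"')

def cni_wait_counts(journal: str) -> Counter:
    """Count 'cni plugin not initialized' sync errors per pod."""
    return Counter(m.group("pod") for m in POD_RE.finditer(journal))

counts = cni_wait_counts(JOURNAL)
# Counter({'calico-system/csi-node-driver-rwsn2': 2})
```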
Mar 14 00:17:28.133207 kubelet[3206]: E0314 00:17:28.133155 3206 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwsn2" podUID="69ad5735-6879-4268-98e4-4112168b29a6" Mar 14 00:17:30.133719 kubelet[3206]: E0314 00:17:30.133658 3206 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwsn2" podUID="69ad5735-6879-4268-98e4-4112168b29a6" Mar 14 00:17:32.133984 kubelet[3206]: E0314 00:17:32.133932 3206 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwsn2" podUID="69ad5735-6879-4268-98e4-4112168b29a6" Mar 14 00:17:34.133767 kubelet[3206]: E0314 00:17:34.133714 3206 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwsn2" podUID="69ad5735-6879-4268-98e4-4112168b29a6" Mar 14 00:17:36.133709 kubelet[3206]: E0314 00:17:36.133649 3206 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwsn2" podUID="69ad5735-6879-4268-98e4-4112168b29a6" Mar 14 00:17:37.490363 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2448639805.mount: Deactivated successfully. Mar 14 00:17:37.529366 containerd[1999]: time="2026-03-14T00:17:37.522229024Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:17:37.529820 containerd[1999]: time="2026-03-14T00:17:37.525995082Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 14 00:17:37.558229 containerd[1999]: time="2026-03-14T00:17:37.558178699Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:17:37.571264 containerd[1999]: time="2026-03-14T00:17:37.570815213Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:17:37.571820 containerd[1999]: time="2026-03-14T00:17:37.571782057Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 11.294130096s" Mar 14 00:17:37.571960 containerd[1999]: time="2026-03-14T00:17:37.571939828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 14 00:17:37.675878 containerd[1999]: time="2026-03-14T00:17:37.675821651Z" level=info msg="CreateContainer within sandbox \"e2e29d0f6fe5a9b84d139e5de317979e59d864a5053fac9919274b0b830c2d40\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 14 
00:17:37.770217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3755103329.mount: Deactivated successfully. Mar 14 00:17:37.774063 containerd[1999]: time="2026-03-14T00:17:37.774019725Z" level=info msg="CreateContainer within sandbox \"e2e29d0f6fe5a9b84d139e5de317979e59d864a5053fac9919274b0b830c2d40\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"581b00726fcf2b6a7e2ca135f142b325a935f9d6c7d8322ab0d49a9e72186f11\"" Mar 14 00:17:37.775035 containerd[1999]: time="2026-03-14T00:17:37.774982150Z" level=info msg="StartContainer for \"581b00726fcf2b6a7e2ca135f142b325a935f9d6c7d8322ab0d49a9e72186f11\"" Mar 14 00:17:37.986654 systemd[1]: Started cri-containerd-581b00726fcf2b6a7e2ca135f142b325a935f9d6c7d8322ab0d49a9e72186f11.scope - libcontainer container 581b00726fcf2b6a7e2ca135f142b325a935f9d6c7d8322ab0d49a9e72186f11. Mar 14 00:17:38.040184 containerd[1999]: time="2026-03-14T00:17:38.039971478Z" level=info msg="StartContainer for \"581b00726fcf2b6a7e2ca135f142b325a935f9d6c7d8322ab0d49a9e72186f11\" returns successfully" Mar 14 00:17:38.099224 systemd[1]: cri-containerd-581b00726fcf2b6a7e2ca135f142b325a935f9d6c7d8322ab0d49a9e72186f11.scope: Deactivated successfully. 
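The calico/node pull above moved 159838426 bytes in 11.294130096 s; as a quick worked check of the effective transfer rate, using the figures straight from that "Pulled image" entry:

```python
# Figures from the "Pulled image ghcr.io/flatcar/calico/node:v3.31.4" entry above.
size_bytes = 159_838_426
duration_s = 11.294130096

rate_mib_s = size_bytes / duration_s / (1024 * 1024)
# roughly 13.5 MiB/s effective pull rate for the ~152 MiB image
```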
Mar 14 00:17:38.133696 kubelet[3206]: E0314 00:17:38.133655 3206 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwsn2" podUID="69ad5735-6879-4268-98e4-4112168b29a6" Mar 14 00:17:38.182841 containerd[1999]: time="2026-03-14T00:17:38.182171791Z" level=info msg="shim disconnected" id=581b00726fcf2b6a7e2ca135f142b325a935f9d6c7d8322ab0d49a9e72186f11 namespace=k8s.io Mar 14 00:17:38.182841 containerd[1999]: time="2026-03-14T00:17:38.182260380Z" level=warning msg="cleaning up after shim disconnected" id=581b00726fcf2b6a7e2ca135f142b325a935f9d6c7d8322ab0d49a9e72186f11 namespace=k8s.io Mar 14 00:17:38.182841 containerd[1999]: time="2026-03-14T00:17:38.182273184Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:17:38.334574 containerd[1999]: time="2026-03-14T00:17:38.334525028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 14 00:17:38.489014 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-581b00726fcf2b6a7e2ca135f142b325a935f9d6c7d8322ab0d49a9e72186f11-rootfs.mount: Deactivated successfully. 
Mar 14 00:17:40.133128 kubelet[3206]: E0314 00:17:40.133082 3206 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwsn2" podUID="69ad5735-6879-4268-98e4-4112168b29a6" Mar 14 00:17:42.133652 kubelet[3206]: E0314 00:17:42.133228 3206 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwsn2" podUID="69ad5735-6879-4268-98e4-4112168b29a6" Mar 14 00:17:42.492558 containerd[1999]: time="2026-03-14T00:17:42.492267744Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:17:42.493629 containerd[1999]: time="2026-03-14T00:17:42.493444802Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 14 00:17:42.494410 containerd[1999]: time="2026-03-14T00:17:42.494347064Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:17:42.496858 containerd[1999]: time="2026-03-14T00:17:42.496798070Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:17:42.497863 containerd[1999]: time="2026-03-14T00:17:42.497829084Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo 
digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 4.163068773s" Mar 14 00:17:42.497992 containerd[1999]: time="2026-03-14T00:17:42.497970831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 14 00:17:42.504464 containerd[1999]: time="2026-03-14T00:17:42.504389470Z" level=info msg="CreateContainer within sandbox \"e2e29d0f6fe5a9b84d139e5de317979e59d864a5053fac9919274b0b830c2d40\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 14 00:17:42.521786 containerd[1999]: time="2026-03-14T00:17:42.521735175Z" level=info msg="CreateContainer within sandbox \"e2e29d0f6fe5a9b84d139e5de317979e59d864a5053fac9919274b0b830c2d40\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c42f8ee32275262c0d721936dfec40e5f5bb8d6f3303732f0d168b2741353b0d\"" Mar 14 00:17:42.523764 containerd[1999]: time="2026-03-14T00:17:42.523726998Z" level=info msg="StartContainer for \"c42f8ee32275262c0d721936dfec40e5f5bb8d6f3303732f0d168b2741353b0d\"" Mar 14 00:17:42.565571 systemd[1]: Started cri-containerd-c42f8ee32275262c0d721936dfec40e5f5bb8d6f3303732f0d168b2741353b0d.scope - libcontainer container c42f8ee32275262c0d721936dfec40e5f5bb8d6f3303732f0d168b2741353b0d. 
Mar 14 00:17:42.616138 containerd[1999]: time="2026-03-14T00:17:42.614739416Z" level=info msg="StartContainer for \"c42f8ee32275262c0d721936dfec40e5f5bb8d6f3303732f0d168b2741353b0d\" returns successfully" Mar 14 00:17:44.133881 kubelet[3206]: E0314 00:17:44.133494 3206 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwsn2" podUID="69ad5735-6879-4268-98e4-4112168b29a6" Mar 14 00:17:44.174132 systemd[1]: cri-containerd-c42f8ee32275262c0d721936dfec40e5f5bb8d6f3303732f0d168b2741353b0d.scope: Deactivated successfully. Mar 14 00:17:44.211772 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c42f8ee32275262c0d721936dfec40e5f5bb8d6f3303732f0d168b2741353b0d-rootfs.mount: Deactivated successfully. Mar 14 00:17:44.245679 containerd[1999]: time="2026-03-14T00:17:44.219858034Z" level=info msg="shim disconnected" id=c42f8ee32275262c0d721936dfec40e5f5bb8d6f3303732f0d168b2741353b0d namespace=k8s.io Mar 14 00:17:44.245679 containerd[1999]: time="2026-03-14T00:17:44.219923967Z" level=warning msg="cleaning up after shim disconnected" id=c42f8ee32275262c0d721936dfec40e5f5bb8d6f3303732f0d168b2741353b0d namespace=k8s.io Mar 14 00:17:44.245679 containerd[1999]: time="2026-03-14T00:17:44.219935275Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:17:44.266186 kubelet[3206]: I0314 00:17:44.248910 3206 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Mar 14 00:17:44.394974 containerd[1999]: time="2026-03-14T00:17:44.394286591Z" level=info msg="CreateContainer within sandbox \"e2e29d0f6fe5a9b84d139e5de317979e59d864a5053fac9919274b0b830c2d40\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 14 00:17:44.426221 containerd[1999]: time="2026-03-14T00:17:44.425948532Z" level=info msg="CreateContainer 
within sandbox \"e2e29d0f6fe5a9b84d139e5de317979e59d864a5053fac9919274b0b830c2d40\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4bf66a4383a82f0f82145c797320c39f988738dc18b832d5ac3eda9388331e69\"" Mar 14 00:17:44.428371 containerd[1999]: time="2026-03-14T00:17:44.426748974Z" level=info msg="StartContainer for \"4bf66a4383a82f0f82145c797320c39f988738dc18b832d5ac3eda9388331e69\"" Mar 14 00:17:44.459551 systemd[1]: Started cri-containerd-4bf66a4383a82f0f82145c797320c39f988738dc18b832d5ac3eda9388331e69.scope - libcontainer container 4bf66a4383a82f0f82145c797320c39f988738dc18b832d5ac3eda9388331e69. Mar 14 00:17:44.564547 containerd[1999]: time="2026-03-14T00:17:44.564364258Z" level=info msg="StartContainer for \"4bf66a4383a82f0f82145c797320c39f988738dc18b832d5ac3eda9388331e69\" returns successfully" Mar 14 00:17:44.619818 systemd[1]: Created slice kubepods-burstable-pode85a1e0f_e7ad_4031_9d89_5b7c43cae302.slice - libcontainer container kubepods-burstable-pode85a1e0f_e7ad_4031_9d89_5b7c43cae302.slice. Mar 14 00:17:44.637493 systemd[1]: Created slice kubepods-burstable-podd60e0a0f_100e_4b16_8d9e_155aae6fac41.slice - libcontainer container kubepods-burstable-podd60e0a0f_100e_4b16_8d9e_155aae6fac41.slice. Mar 14 00:17:44.648230 systemd[1]: Created slice kubepods-besteffort-pod9b3ad80f_d1e3_4c5e_9df2_fb2b5357e120.slice - libcontainer container kubepods-besteffort-pod9b3ad80f_d1e3_4c5e_9df2_fb2b5357e120.slice. Mar 14 00:17:44.661859 systemd[1]: Created slice kubepods-besteffort-pod254c54ac_8310_4004_8ce4_5125625b2db5.slice - libcontainer container kubepods-besteffort-pod254c54ac_8310_4004_8ce4_5125625b2db5.slice. Mar 14 00:17:44.671598 systemd[1]: Created slice kubepods-besteffort-pod9df0cfda_870d_4b2a_931c_ff6a051f6b62.slice - libcontainer container kubepods-besteffort-pod9df0cfda_870d_4b2a_931c_ff6a051f6b62.slice. 
Mar 14 00:17:44.690094 systemd[1]: Created slice kubepods-besteffort-pod77dd6a86_ca74_4e28_b372_e452edb98c28.slice - libcontainer container kubepods-besteffort-pod77dd6a86_ca74_4e28_b372_e452edb98c28.slice. Mar 14 00:17:44.690936 kubelet[3206]: I0314 00:17:44.690905 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77dd6a86-ca74-4e28-b372-e452edb98c28-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-b6wbf\" (UID: \"77dd6a86-ca74-4e28-b372-e452edb98c28\") " pod="calico-system/goldmane-9f7667bb8-b6wbf" Mar 14 00:17:44.691046 kubelet[3206]: I0314 00:17:44.690961 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/77dd6a86-ca74-4e28-b372-e452edb98c28-goldmane-key-pair\") pod \"goldmane-9f7667bb8-b6wbf\" (UID: \"77dd6a86-ca74-4e28-b372-e452edb98c28\") " pod="calico-system/goldmane-9f7667bb8-b6wbf" Mar 14 00:17:44.691046 kubelet[3206]: I0314 00:17:44.691017 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnqt4\" (UniqueName: \"kubernetes.io/projected/77dd6a86-ca74-4e28-b372-e452edb98c28-kube-api-access-nnqt4\") pod \"goldmane-9f7667bb8-b6wbf\" (UID: \"77dd6a86-ca74-4e28-b372-e452edb98c28\") " pod="calico-system/goldmane-9f7667bb8-b6wbf" Mar 14 00:17:44.691148 kubelet[3206]: I0314 00:17:44.691079 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b3ad80f-d1e3-4c5e-9df2-fb2b5357e120-tigera-ca-bundle\") pod \"calico-kube-controllers-7b6c7c5dc8-27jg8\" (UID: \"9b3ad80f-d1e3-4c5e-9df2-fb2b5357e120\") " pod="calico-system/calico-kube-controllers-7b6c7c5dc8-27jg8" Mar 14 00:17:44.691200 kubelet[3206]: I0314 00:17:44.691112 3206 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/254c54ac-8310-4004-8ce4-5125625b2db5-calico-apiserver-certs\") pod \"calico-apiserver-6b7865fd67-f9cpm\" (UID: \"254c54ac-8310-4004-8ce4-5125625b2db5\") " pod="calico-system/calico-apiserver-6b7865fd67-f9cpm" Mar 14 00:17:44.691200 kubelet[3206]: I0314 00:17:44.691176 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9sdf\" (UniqueName: \"kubernetes.io/projected/254c54ac-8310-4004-8ce4-5125625b2db5-kube-api-access-k9sdf\") pod \"calico-apiserver-6b7865fd67-f9cpm\" (UID: \"254c54ac-8310-4004-8ce4-5125625b2db5\") " pod="calico-system/calico-apiserver-6b7865fd67-f9cpm" Mar 14 00:17:44.691297 kubelet[3206]: I0314 00:17:44.691201 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9df0cfda-870d-4b2a-931c-ff6a051f6b62-whisker-backend-key-pair\") pod \"whisker-579d8fdf8b-qdqw8\" (UID: \"9df0cfda-870d-4b2a-931c-ff6a051f6b62\") " pod="calico-system/whisker-579d8fdf8b-qdqw8" Mar 14 00:17:44.691359 kubelet[3206]: I0314 00:17:44.691295 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9df0cfda-870d-4b2a-931c-ff6a051f6b62-whisker-ca-bundle\") pod \"whisker-579d8fdf8b-qdqw8\" (UID: \"9df0cfda-870d-4b2a-931c-ff6a051f6b62\") " pod="calico-system/whisker-579d8fdf8b-qdqw8" Mar 14 00:17:44.691412 kubelet[3206]: I0314 00:17:44.691353 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/9df0cfda-870d-4b2a-931c-ff6a051f6b62-nginx-config\") pod \"whisker-579d8fdf8b-qdqw8\" (UID: \"9df0cfda-870d-4b2a-931c-ff6a051f6b62\") " pod="calico-system/whisker-579d8fdf8b-qdqw8" Mar 
14 00:17:44.691412 kubelet[3206]: I0314 00:17:44.691387 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e85a1e0f-e7ad-4031-9d89-5b7c43cae302-config-volume\") pod \"coredns-7d764666f9-b5j2r\" (UID: \"e85a1e0f-e7ad-4031-9d89-5b7c43cae302\") " pod="kube-system/coredns-7d764666f9-b5j2r" Mar 14 00:17:44.691513 kubelet[3206]: I0314 00:17:44.691436 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl4zs\" (UniqueName: \"kubernetes.io/projected/e85a1e0f-e7ad-4031-9d89-5b7c43cae302-kube-api-access-fl4zs\") pod \"coredns-7d764666f9-b5j2r\" (UID: \"e85a1e0f-e7ad-4031-9d89-5b7c43cae302\") " pod="kube-system/coredns-7d764666f9-b5j2r" Mar 14 00:17:44.691513 kubelet[3206]: I0314 00:17:44.691466 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs52p\" (UniqueName: \"kubernetes.io/projected/d60e0a0f-100e-4b16-8d9e-155aae6fac41-kube-api-access-gs52p\") pod \"coredns-7d764666f9-gmvx8\" (UID: \"d60e0a0f-100e-4b16-8d9e-155aae6fac41\") " pod="kube-system/coredns-7d764666f9-gmvx8" Mar 14 00:17:44.691606 kubelet[3206]: I0314 00:17:44.691518 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzjmx\" (UniqueName: \"kubernetes.io/projected/9b3ad80f-d1e3-4c5e-9df2-fb2b5357e120-kube-api-access-fzjmx\") pod \"calico-kube-controllers-7b6c7c5dc8-27jg8\" (UID: \"9b3ad80f-d1e3-4c5e-9df2-fb2b5357e120\") " pod="calico-system/calico-kube-controllers-7b6c7c5dc8-27jg8" Mar 14 00:17:44.691606 kubelet[3206]: I0314 00:17:44.691543 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77dd6a86-ca74-4e28-b372-e452edb98c28-config\") pod \"goldmane-9f7667bb8-b6wbf\" (UID: 
\"77dd6a86-ca74-4e28-b372-e452edb98c28\") " pod="calico-system/goldmane-9f7667bb8-b6wbf" Mar 14 00:17:44.691699 kubelet[3206]: I0314 00:17:44.691598 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d60e0a0f-100e-4b16-8d9e-155aae6fac41-config-volume\") pod \"coredns-7d764666f9-gmvx8\" (UID: \"d60e0a0f-100e-4b16-8d9e-155aae6fac41\") " pod="kube-system/coredns-7d764666f9-gmvx8" Mar 14 00:17:44.691699 kubelet[3206]: I0314 00:17:44.691633 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8cc9\" (UniqueName: \"kubernetes.io/projected/9df0cfda-870d-4b2a-931c-ff6a051f6b62-kube-api-access-v8cc9\") pod \"whisker-579d8fdf8b-qdqw8\" (UID: \"9df0cfda-870d-4b2a-931c-ff6a051f6b62\") " pod="calico-system/whisker-579d8fdf8b-qdqw8" Mar 14 00:17:44.701060 systemd[1]: Created slice kubepods-besteffort-pod83c05f9a_996d_4297_b2d6_b3ceffdc570f.slice - libcontainer container kubepods-besteffort-pod83c05f9a_996d_4297_b2d6_b3ceffdc570f.slice. 
Mar 14 00:17:44.793046 kubelet[3206]: I0314 00:17:44.792998 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/83c05f9a-996d-4297-b2d6-b3ceffdc570f-calico-apiserver-certs\") pod \"calico-apiserver-6b7865fd67-6tjs6\" (UID: \"83c05f9a-996d-4297-b2d6-b3ceffdc570f\") " pod="calico-system/calico-apiserver-6b7865fd67-6tjs6" Mar 14 00:17:44.793295 kubelet[3206]: I0314 00:17:44.793050 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l42db\" (UniqueName: \"kubernetes.io/projected/83c05f9a-996d-4297-b2d6-b3ceffdc570f-kube-api-access-l42db\") pod \"calico-apiserver-6b7865fd67-6tjs6\" (UID: \"83c05f9a-996d-4297-b2d6-b3ceffdc570f\") " pod="calico-system/calico-apiserver-6b7865fd67-6tjs6" Mar 14 00:17:44.973713 containerd[1999]: time="2026-03-14T00:17:44.972356699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-gmvx8,Uid:d60e0a0f-100e-4b16-8d9e-155aae6fac41,Namespace:kube-system,Attempt:0,}" Mar 14 00:17:44.985076 containerd[1999]: time="2026-03-14T00:17:44.985002895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b7865fd67-f9cpm,Uid:254c54ac-8310-4004-8ce4-5125625b2db5,Namespace:calico-system,Attempt:0,}" Mar 14 00:17:44.993715 containerd[1999]: time="2026-03-14T00:17:44.993223705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-579d8fdf8b-qdqw8,Uid:9df0cfda-870d-4b2a-931c-ff6a051f6b62,Namespace:calico-system,Attempt:0,}" Mar 14 00:17:45.004155 containerd[1999]: time="2026-03-14T00:17:45.002604258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-b6wbf,Uid:77dd6a86-ca74-4e28-b372-e452edb98c28,Namespace:calico-system,Attempt:0,}" Mar 14 00:17:45.004504 containerd[1999]: time="2026-03-14T00:17:45.004459980Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-7b6c7c5dc8-27jg8,Uid:9b3ad80f-d1e3-4c5e-9df2-fb2b5357e120,Namespace:calico-system,Attempt:0,}" Mar 14 00:17:45.004818 containerd[1999]: time="2026-03-14T00:17:45.004784844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-b5j2r,Uid:e85a1e0f-e7ad-4031-9d89-5b7c43cae302,Namespace:kube-system,Attempt:0,}" Mar 14 00:17:45.011864 containerd[1999]: time="2026-03-14T00:17:45.011264588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b7865fd67-6tjs6,Uid:83c05f9a-996d-4297-b2d6-b3ceffdc570f,Namespace:calico-system,Attempt:0,}" Mar 14 00:17:45.437393 kubelet[3206]: I0314 00:17:45.424330 3206 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-8rb5w" podStartSLOduration=1.970246504 podStartE2EDuration="25.420918482s" podCreationTimestamp="2026-03-14 00:17:20 +0000 UTC" firstStartedPulling="2026-03-14 00:17:20.899402917 +0000 UTC m=+21.966826331" lastFinishedPulling="2026-03-14 00:17:44.350075096 +0000 UTC m=+45.417498309" observedRunningTime="2026-03-14 00:17:45.415386383 +0000 UTC m=+46.482809604" watchObservedRunningTime="2026-03-14 00:17:45.420918482 +0000 UTC m=+46.488341705" Mar 14 00:17:45.738117 containerd[1999]: time="2026-03-14T00:17:45.737965589Z" level=error msg="Failed to destroy network for sandbox \"7b3f5211edbae4de8398b026e2345d82116f59a0955cbba0dfb0e980ab56a68a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:17:45.745589 containerd[1999]: time="2026-03-14T00:17:45.745533060Z" level=error msg="Failed to destroy network for sandbox \"165bfd711e9d089ce82d9f56eb2fe5a1a346629ea9cd475936992f19007a8e66\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Mar 14 00:17:45.748254 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7b3f5211edbae4de8398b026e2345d82116f59a0955cbba0dfb0e980ab56a68a-shm.mount: Deactivated successfully. Mar 14 00:17:45.753010 containerd[1999]: time="2026-03-14T00:17:45.752630231Z" level=error msg="encountered an error cleaning up failed sandbox \"165bfd711e9d089ce82d9f56eb2fe5a1a346629ea9cd475936992f19007a8e66\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:17:45.753010 containerd[1999]: time="2026-03-14T00:17:45.752716898Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-gmvx8,Uid:d60e0a0f-100e-4b16-8d9e-155aae6fac41,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"165bfd711e9d089ce82d9f56eb2fe5a1a346629ea9cd475936992f19007a8e66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:17:45.760725 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-165bfd711e9d089ce82d9f56eb2fe5a1a346629ea9cd475936992f19007a8e66-shm.mount: Deactivated successfully. 
Mar 14 00:17:45.768195 containerd[1999]: time="2026-03-14T00:17:45.767979612Z" level=error msg="encountered an error cleaning up failed sandbox \"7b3f5211edbae4de8398b026e2345d82116f59a0955cbba0dfb0e980ab56a68a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:17:45.768754 containerd[1999]: time="2026-03-14T00:17:45.768561944Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-b5j2r,Uid:e85a1e0f-e7ad-4031-9d89-5b7c43cae302,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7b3f5211edbae4de8398b026e2345d82116f59a0955cbba0dfb0e980ab56a68a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:17:45.781110 containerd[1999]: time="2026-03-14T00:17:45.779421367Z" level=error msg="Failed to destroy network for sandbox \"d473579dfb30df32df8568cbf22d6e8e98aaa01da94731237b84720e32647db0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:17:45.784412 containerd[1999]: time="2026-03-14T00:17:45.782646823Z" level=error msg="encountered an error cleaning up failed sandbox \"d473579dfb30df32df8568cbf22d6e8e98aaa01da94731237b84720e32647db0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:17:45.784412 containerd[1999]: time="2026-03-14T00:17:45.782731806Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6b7865fd67-f9cpm,Uid:254c54ac-8310-4004-8ce4-5125625b2db5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d473579dfb30df32df8568cbf22d6e8e98aaa01da94731237b84720e32647db0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:17:45.790229 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d473579dfb30df32df8568cbf22d6e8e98aaa01da94731237b84720e32647db0-shm.mount: Deactivated successfully. Mar 14 00:17:45.795915 containerd[1999]: time="2026-03-14T00:17:45.795868115Z" level=error msg="Failed to destroy network for sandbox \"2590695468abe8c58989f236737757aff8ed44918fae7db28a808e9fae292ec4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:17:45.796837 containerd[1999]: time="2026-03-14T00:17:45.796797010Z" level=error msg="encountered an error cleaning up failed sandbox \"2590695468abe8c58989f236737757aff8ed44918fae7db28a808e9fae292ec4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:17:45.797149 containerd[1999]: time="2026-03-14T00:17:45.796987491Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b7865fd67-6tjs6,Uid:83c05f9a-996d-4297-b2d6-b3ceffdc570f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2590695468abe8c58989f236737757aff8ed44918fae7db28a808e9fae292ec4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Mar 14 00:17:45.810344 kubelet[3206]: E0314 00:17:45.810039 3206 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2590695468abe8c58989f236737757aff8ed44918fae7db28a808e9fae292ec4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:17:45.810674 kubelet[3206]: E0314 00:17:45.810559 3206 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"165bfd711e9d089ce82d9f56eb2fe5a1a346629ea9cd475936992f19007a8e66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:17:45.812363 kubelet[3206]: E0314 00:17:45.811755 3206 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"165bfd711e9d089ce82d9f56eb2fe5a1a346629ea9cd475936992f19007a8e66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-gmvx8" Mar 14 00:17:45.813516 kubelet[3206]: E0314 00:17:45.813194 3206 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"165bfd711e9d089ce82d9f56eb2fe5a1a346629ea9cd475936992f19007a8e66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-gmvx8" Mar 14 00:17:45.813516 kubelet[3206]: E0314 00:17:45.813292 3206 pod_workers.go:1324] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-gmvx8_kube-system(d60e0a0f-100e-4b16-8d9e-155aae6fac41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-gmvx8_kube-system(d60e0a0f-100e-4b16-8d9e-155aae6fac41)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"165bfd711e9d089ce82d9f56eb2fe5a1a346629ea9cd475936992f19007a8e66\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-gmvx8" podUID="d60e0a0f-100e-4b16-8d9e-155aae6fac41" Mar 14 00:17:45.824177 containerd[1999]: time="2026-03-14T00:17:45.824119775Z" level=error msg="Failed to destroy network for sandbox \"fd0dd3a729dbe1c9ed8dc6b92cae51bed81b06adf6dce9b0b6ae38fc6b80d0e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:17:45.824566 containerd[1999]: time="2026-03-14T00:17:45.824522438Z" level=error msg="encountered an error cleaning up failed sandbox \"fd0dd3a729dbe1c9ed8dc6b92cae51bed81b06adf6dce9b0b6ae38fc6b80d0e8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:17:45.824678 containerd[1999]: time="2026-03-14T00:17:45.824646923Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-b6wbf,Uid:77dd6a86-ca74-4e28-b372-e452edb98c28,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fd0dd3a729dbe1c9ed8dc6b92cae51bed81b06adf6dce9b0b6ae38fc6b80d0e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:17:45.829503 containerd[1999]: time="2026-03-14T00:17:45.829455014Z" level=error msg="Failed to destroy network for sandbox \"6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:17:45.832160 containerd[1999]: time="2026-03-14T00:17:45.830586443Z" level=error msg="encountered an error cleaning up failed sandbox \"6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:17:45.832160 containerd[1999]: time="2026-03-14T00:17:45.830668824Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b6c7c5dc8-27jg8,Uid:9b3ad80f-d1e3-4c5e-9df2-fb2b5357e120,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:17:45.846162 containerd[1999]: time="2026-03-14T00:17:45.845314480Z" level=error msg="Failed to destroy network for sandbox \"5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:17:45.846883 containerd[1999]: time="2026-03-14T00:17:45.845777062Z" level=error msg="encountered an error cleaning up failed sandbox 
\"5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:17:45.846998 containerd[1999]: time="2026-03-14T00:17:45.846912288Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-579d8fdf8b-qdqw8,Uid:9df0cfda-870d-4b2a-931c-ff6a051f6b62,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:17:45.852350 kubelet[3206]: E0314 00:17:45.851531 3206 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b3f5211edbae4de8398b026e2345d82116f59a0955cbba0dfb0e980ab56a68a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:17:45.852350 kubelet[3206]: E0314 00:17:45.851601 3206 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b3f5211edbae4de8398b026e2345d82116f59a0955cbba0dfb0e980ab56a68a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-b5j2r" Mar 14 00:17:45.852350 kubelet[3206]: E0314 00:17:45.851626 3206 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"7b3f5211edbae4de8398b026e2345d82116f59a0955cbba0dfb0e980ab56a68a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-b5j2r" Mar 14 00:17:45.852574 kubelet[3206]: E0314 00:17:45.851706 3206 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-b5j2r_kube-system(e85a1e0f-e7ad-4031-9d89-5b7c43cae302)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-b5j2r_kube-system(e85a1e0f-e7ad-4031-9d89-5b7c43cae302)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b3f5211edbae4de8398b026e2345d82116f59a0955cbba0dfb0e980ab56a68a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-b5j2r" podUID="e85a1e0f-e7ad-4031-9d89-5b7c43cae302" Mar 14 00:17:45.852574 kubelet[3206]: E0314 00:17:45.851768 3206 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d473579dfb30df32df8568cbf22d6e8e98aaa01da94731237b84720e32647db0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:17:45.852574 kubelet[3206]: E0314 00:17:45.851792 3206 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d473579dfb30df32df8568cbf22d6e8e98aaa01da94731237b84720e32647db0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-apiserver-6b7865fd67-f9cpm" Mar 14 00:17:45.853657 kubelet[3206]: E0314 00:17:45.851815 3206 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d473579dfb30df32df8568cbf22d6e8e98aaa01da94731237b84720e32647db0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6b7865fd67-f9cpm" Mar 14 00:17:45.853657 kubelet[3206]: E0314 00:17:45.851853 3206 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b7865fd67-f9cpm_calico-system(254c54ac-8310-4004-8ce4-5125625b2db5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b7865fd67-f9cpm_calico-system(254c54ac-8310-4004-8ce4-5125625b2db5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d473579dfb30df32df8568cbf22d6e8e98aaa01da94731237b84720e32647db0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6b7865fd67-f9cpm" podUID="254c54ac-8310-4004-8ce4-5125625b2db5" Mar 14 00:17:45.853657 kubelet[3206]: E0314 00:17:45.852830 3206 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:17:45.853846 kubelet[3206]: E0314 00:17:45.852882 3206 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-579d8fdf8b-qdqw8" Mar 14 00:17:45.853846 kubelet[3206]: E0314 00:17:45.852901 3206 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-579d8fdf8b-qdqw8" Mar 14 00:17:45.853846 kubelet[3206]: E0314 00:17:45.852950 3206 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-579d8fdf8b-qdqw8_calico-system(9df0cfda-870d-4b2a-931c-ff6a051f6b62)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-579d8fdf8b-qdqw8_calico-system(9df0cfda-870d-4b2a-931c-ff6a051f6b62)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-579d8fdf8b-qdqw8" podUID="9df0cfda-870d-4b2a-931c-ff6a051f6b62" Mar 14 00:17:45.854053 kubelet[3206]: E0314 00:17:45.853073 3206 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd0dd3a729dbe1c9ed8dc6b92cae51bed81b06adf6dce9b0b6ae38fc6b80d0e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:17:45.854053 kubelet[3206]: E0314 00:17:45.853100 3206 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd0dd3a729dbe1c9ed8dc6b92cae51bed81b06adf6dce9b0b6ae38fc6b80d0e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-b6wbf" Mar 14 00:17:45.854053 kubelet[3206]: E0314 00:17:45.853118 3206 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd0dd3a729dbe1c9ed8dc6b92cae51bed81b06adf6dce9b0b6ae38fc6b80d0e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-b6wbf" Mar 14 00:17:45.854222 kubelet[3206]: E0314 00:17:45.853169 3206 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-b6wbf_calico-system(77dd6a86-ca74-4e28-b372-e452edb98c28)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-b6wbf_calico-system(77dd6a86-ca74-4e28-b372-e452edb98c28)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd0dd3a729dbe1c9ed8dc6b92cae51bed81b06adf6dce9b0b6ae38fc6b80d0e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-b6wbf" podUID="77dd6a86-ca74-4e28-b372-e452edb98c28" Mar 14 00:17:45.854222 kubelet[3206]: E0314 00:17:45.853226 3206 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:17:45.854222 kubelet[3206]: E0314 00:17:45.853249 3206 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b6c7c5dc8-27jg8" Mar 14 00:17:45.854438 kubelet[3206]: E0314 00:17:45.853268 3206 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b6c7c5dc8-27jg8" Mar 14 00:17:45.854438 kubelet[3206]: E0314 00:17:45.853302 3206 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b6c7c5dc8-27jg8_calico-system(9b3ad80f-d1e3-4c5e-9df2-fb2b5357e120)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b6c7c5dc8-27jg8_calico-system(9b3ad80f-d1e3-4c5e-9df2-fb2b5357e120)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b6c7c5dc8-27jg8" podUID="9b3ad80f-d1e3-4c5e-9df2-fb2b5357e120" Mar 14 00:17:45.854438 kubelet[3206]: E0314 00:17:45.816496 3206 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2590695468abe8c58989f236737757aff8ed44918fae7db28a808e9fae292ec4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6b7865fd67-6tjs6" Mar 14 00:17:45.854612 kubelet[3206]: E0314 00:17:45.853371 3206 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2590695468abe8c58989f236737757aff8ed44918fae7db28a808e9fae292ec4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6b7865fd67-6tjs6" Mar 14 00:17:45.854612 kubelet[3206]: E0314 00:17:45.853422 3206 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b7865fd67-6tjs6_calico-system(83c05f9a-996d-4297-b2d6-b3ceffdc570f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b7865fd67-6tjs6_calico-system(83c05f9a-996d-4297-b2d6-b3ceffdc570f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2590695468abe8c58989f236737757aff8ed44918fae7db28a808e9fae292ec4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6b7865fd67-6tjs6" 
podUID="83c05f9a-996d-4297-b2d6-b3ceffdc570f" Mar 14 00:17:46.138658 systemd[1]: Created slice kubepods-besteffort-pod69ad5735_6879_4268_98e4_4112168b29a6.slice - libcontainer container kubepods-besteffort-pod69ad5735_6879_4268_98e4_4112168b29a6.slice. Mar 14 00:17:46.143561 containerd[1999]: time="2026-03-14T00:17:46.143517477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rwsn2,Uid:69ad5735-6879-4268-98e4-4112168b29a6,Namespace:calico-system,Attempt:0,}" Mar 14 00:17:46.208231 containerd[1999]: time="2026-03-14T00:17:46.208176243Z" level=error msg="Failed to destroy network for sandbox \"5a41a15b9b5fd6cb73ac05c952e2f1cb3ec62d0068cdcfab2e8151e937776bf1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:17:46.209019 containerd[1999]: time="2026-03-14T00:17:46.208827072Z" level=error msg="encountered an error cleaning up failed sandbox \"5a41a15b9b5fd6cb73ac05c952e2f1cb3ec62d0068cdcfab2e8151e937776bf1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:17:46.209019 containerd[1999]: time="2026-03-14T00:17:46.208899008Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rwsn2,Uid:69ad5735-6879-4268-98e4-4112168b29a6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5a41a15b9b5fd6cb73ac05c952e2f1cb3ec62d0068cdcfab2e8151e937776bf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:17:46.209225 kubelet[3206]: E0314 00:17:46.209149 3206 log.go:32] "RunPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"5a41a15b9b5fd6cb73ac05c952e2f1cb3ec62d0068cdcfab2e8151e937776bf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:17:46.209225 kubelet[3206]: E0314 00:17:46.209209 3206 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a41a15b9b5fd6cb73ac05c952e2f1cb3ec62d0068cdcfab2e8151e937776bf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rwsn2" Mar 14 00:17:46.209391 kubelet[3206]: E0314 00:17:46.209242 3206 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a41a15b9b5fd6cb73ac05c952e2f1cb3ec62d0068cdcfab2e8151e937776bf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rwsn2" Mar 14 00:17:46.209391 kubelet[3206]: E0314 00:17:46.209308 3206 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rwsn2_calico-system(69ad5735-6879-4268-98e4-4112168b29a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rwsn2_calico-system(69ad5735-6879-4268-98e4-4112168b29a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5a41a15b9b5fd6cb73ac05c952e2f1cb3ec62d0068cdcfab2e8151e937776bf1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-rwsn2" podUID="69ad5735-6879-4268-98e4-4112168b29a6" Mar 14 00:17:46.215084 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2590695468abe8c58989f236737757aff8ed44918fae7db28a808e9fae292ec4-shm.mount: Deactivated successfully. Mar 14 00:17:46.215210 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6-shm.mount: Deactivated successfully. Mar 14 00:17:46.215297 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae-shm.mount: Deactivated successfully. Mar 14 00:17:46.215464 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fd0dd3a729dbe1c9ed8dc6b92cae51bed81b06adf6dce9b0b6ae38fc6b80d0e8-shm.mount: Deactivated successfully. Mar 14 00:17:46.391546 containerd[1999]: time="2026-03-14T00:17:46.389547994Z" level=info msg="StopPodSandbox for \"2590695468abe8c58989f236737757aff8ed44918fae7db28a808e9fae292ec4\"" Mar 14 00:17:46.391693 kubelet[3206]: I0314 00:17:46.391582 3206 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2590695468abe8c58989f236737757aff8ed44918fae7db28a808e9fae292ec4" Mar 14 00:17:46.392730 containerd[1999]: time="2026-03-14T00:17:46.392694326Z" level=info msg="Ensure that sandbox 2590695468abe8c58989f236737757aff8ed44918fae7db28a808e9fae292ec4 in task-service has been cleanup successfully" Mar 14 00:17:46.393274 kubelet[3206]: I0314 00:17:46.393247 3206 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b3f5211edbae4de8398b026e2345d82116f59a0955cbba0dfb0e980ab56a68a" Mar 14 00:17:46.394925 containerd[1999]: time="2026-03-14T00:17:46.394893695Z" level=info msg="StopPodSandbox for \"7b3f5211edbae4de8398b026e2345d82116f59a0955cbba0dfb0e980ab56a68a\"" Mar 14 00:17:46.396637 containerd[1999]: time="2026-03-14T00:17:46.396590661Z" level=info msg="Ensure that sandbox 
7b3f5211edbae4de8398b026e2345d82116f59a0955cbba0dfb0e980ab56a68a in task-service has been cleanup successfully" Mar 14 00:17:46.396983 kubelet[3206]: I0314 00:17:46.396893 3206 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6" Mar 14 00:17:46.397818 containerd[1999]: time="2026-03-14T00:17:46.397711202Z" level=info msg="StopPodSandbox for \"6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6\"" Mar 14 00:17:46.400575 containerd[1999]: time="2026-03-14T00:17:46.398819716Z" level=info msg="Ensure that sandbox 6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6 in task-service has been cleanup successfully" Mar 14 00:17:46.401480 kubelet[3206]: I0314 00:17:46.401455 3206 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d473579dfb30df32df8568cbf22d6e8e98aaa01da94731237b84720e32647db0" Mar 14 00:17:46.403438 containerd[1999]: time="2026-03-14T00:17:46.403406637Z" level=info msg="StopPodSandbox for \"d473579dfb30df32df8568cbf22d6e8e98aaa01da94731237b84720e32647db0\"" Mar 14 00:17:46.404769 containerd[1999]: time="2026-03-14T00:17:46.404359384Z" level=info msg="Ensure that sandbox d473579dfb30df32df8568cbf22d6e8e98aaa01da94731237b84720e32647db0 in task-service has been cleanup successfully" Mar 14 00:17:46.416565 kubelet[3206]: I0314 00:17:46.416537 3206 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="165bfd711e9d089ce82d9f56eb2fe5a1a346629ea9cd475936992f19007a8e66" Mar 14 00:17:46.421416 containerd[1999]: time="2026-03-14T00:17:46.421377459Z" level=info msg="StopPodSandbox for \"165bfd711e9d089ce82d9f56eb2fe5a1a346629ea9cd475936992f19007a8e66\"" Mar 14 00:17:46.421618 containerd[1999]: time="2026-03-14T00:17:46.421594405Z" level=info msg="Ensure that sandbox 165bfd711e9d089ce82d9f56eb2fe5a1a346629ea9cd475936992f19007a8e66 in task-service has been cleanup successfully" Mar 14 
00:17:46.426542 kubelet[3206]: I0314 00:17:46.425937 3206 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae" Mar 14 00:17:46.429190 containerd[1999]: time="2026-03-14T00:17:46.428968341Z" level=info msg="StopPodSandbox for \"5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae\"" Mar 14 00:17:46.430216 containerd[1999]: time="2026-03-14T00:17:46.430176673Z" level=info msg="Ensure that sandbox 5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae in task-service has been cleanup successfully" Mar 14 00:17:46.438028 kubelet[3206]: I0314 00:17:46.437316 3206 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a41a15b9b5fd6cb73ac05c952e2f1cb3ec62d0068cdcfab2e8151e937776bf1" Mar 14 00:17:46.439742 containerd[1999]: time="2026-03-14T00:17:46.439476989Z" level=info msg="StopPodSandbox for \"5a41a15b9b5fd6cb73ac05c952e2f1cb3ec62d0068cdcfab2e8151e937776bf1\"" Mar 14 00:17:46.440468 containerd[1999]: time="2026-03-14T00:17:46.439947533Z" level=info msg="Ensure that sandbox 5a41a15b9b5fd6cb73ac05c952e2f1cb3ec62d0068cdcfab2e8151e937776bf1 in task-service has been cleanup successfully" Mar 14 00:17:46.448476 kubelet[3206]: I0314 00:17:46.447950 3206 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd0dd3a729dbe1c9ed8dc6b92cae51bed81b06adf6dce9b0b6ae38fc6b80d0e8" Mar 14 00:17:46.449567 containerd[1999]: time="2026-03-14T00:17:46.449525288Z" level=info msg="StopPodSandbox for \"fd0dd3a729dbe1c9ed8dc6b92cae51bed81b06adf6dce9b0b6ae38fc6b80d0e8\"" Mar 14 00:17:46.449783 containerd[1999]: time="2026-03-14T00:17:46.449757862Z" level=info msg="Ensure that sandbox fd0dd3a729dbe1c9ed8dc6b92cae51bed81b06adf6dce9b0b6ae38fc6b80d0e8 in task-service has been cleanup successfully" Mar 14 00:17:46.569825 containerd[1999]: time="2026-03-14T00:17:46.569770053Z" level=error msg="StopPodSandbox for 
\"2590695468abe8c58989f236737757aff8ed44918fae7db28a808e9fae292ec4\" failed" error="failed to destroy network for sandbox \"2590695468abe8c58989f236737757aff8ed44918fae7db28a808e9fae292ec4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:17:46.573846 kubelet[3206]: E0314 00:17:46.573717 3206 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2590695468abe8c58989f236737757aff8ed44918fae7db28a808e9fae292ec4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2590695468abe8c58989f236737757aff8ed44918fae7db28a808e9fae292ec4" Mar 14 00:17:46.573846 kubelet[3206]: E0314 00:17:46.573810 3206 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2590695468abe8c58989f236737757aff8ed44918fae7db28a808e9fae292ec4"} Mar 14 00:17:46.574187 kubelet[3206]: E0314 00:17:46.573880 3206 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"83c05f9a-996d-4297-b2d6-b3ceffdc570f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2590695468abe8c58989f236737757aff8ed44918fae7db28a808e9fae292ec4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:17:46.574187 kubelet[3206]: E0314 00:17:46.573915 3206 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"83c05f9a-996d-4297-b2d6-b3ceffdc570f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"2590695468abe8c58989f236737757aff8ed44918fae7db28a808e9fae292ec4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6b7865fd67-6tjs6" podUID="83c05f9a-996d-4297-b2d6-b3ceffdc570f" Mar 14 00:17:46.595434 containerd[1999]: time="2026-03-14T00:17:46.595377714Z" level=error msg="StopPodSandbox for \"6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6\" failed" error="failed to destroy network for sandbox \"6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:17:46.595954 kubelet[3206]: E0314 00:17:46.595739 3206 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6" Mar 14 00:17:46.595954 kubelet[3206]: E0314 00:17:46.595793 3206 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6"} Mar 14 00:17:46.595954 kubelet[3206]: E0314 00:17:46.595833 3206 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9b3ad80f-d1e3-4c5e-9df2-fb2b5357e120\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:17:46.595954 kubelet[3206]: E0314 00:17:46.595882 3206 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9b3ad80f-d1e3-4c5e-9df2-fb2b5357e120\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b6c7c5dc8-27jg8" podUID="9b3ad80f-d1e3-4c5e-9df2-fb2b5357e120" Mar 14 00:17:46.602069 containerd[1999]: time="2026-03-14T00:17:46.601922662Z" level=error msg="StopPodSandbox for \"5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae\" failed" error="failed to destroy network for sandbox \"5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:17:46.602722 kubelet[3206]: E0314 00:17:46.602494 3206 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae" Mar 14 00:17:46.602722 kubelet[3206]: E0314 00:17:46.602562 3206 kuberuntime_manager.go:1881] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae"} Mar 14 00:17:46.602722 kubelet[3206]: E0314 00:17:46.602609 3206 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9df0cfda-870d-4b2a-931c-ff6a051f6b62\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:17:46.602722 kubelet[3206]: E0314 00:17:46.602659 3206 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9df0cfda-870d-4b2a-931c-ff6a051f6b62\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-579d8fdf8b-qdqw8" podUID="9df0cfda-870d-4b2a-931c-ff6a051f6b62" Mar 14 00:17:46.604910 containerd[1999]: time="2026-03-14T00:17:46.604862404Z" level=error msg="StopPodSandbox for \"165bfd711e9d089ce82d9f56eb2fe5a1a346629ea9cd475936992f19007a8e66\" failed" error="failed to destroy network for sandbox \"165bfd711e9d089ce82d9f56eb2fe5a1a346629ea9cd475936992f19007a8e66\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:17:46.605558 kubelet[3206]: E0314 00:17:46.605275 3206 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"165bfd711e9d089ce82d9f56eb2fe5a1a346629ea9cd475936992f19007a8e66\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="165bfd711e9d089ce82d9f56eb2fe5a1a346629ea9cd475936992f19007a8e66" Mar 14 00:17:46.605558 kubelet[3206]: E0314 00:17:46.605401 3206 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"165bfd711e9d089ce82d9f56eb2fe5a1a346629ea9cd475936992f19007a8e66"} Mar 14 00:17:46.605558 kubelet[3206]: E0314 00:17:46.605460 3206 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d60e0a0f-100e-4b16-8d9e-155aae6fac41\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"165bfd711e9d089ce82d9f56eb2fe5a1a346629ea9cd475936992f19007a8e66\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:17:46.605558 kubelet[3206]: E0314 00:17:46.605497 3206 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d60e0a0f-100e-4b16-8d9e-155aae6fac41\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"165bfd711e9d089ce82d9f56eb2fe5a1a346629ea9cd475936992f19007a8e66\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-gmvx8" podUID="d60e0a0f-100e-4b16-8d9e-155aae6fac41" Mar 14 00:17:46.621727 containerd[1999]: time="2026-03-14T00:17:46.621668685Z" level=error msg="StopPodSandbox for \"d473579dfb30df32df8568cbf22d6e8e98aaa01da94731237b84720e32647db0\" failed" error="failed to destroy network for sandbox 
\"d473579dfb30df32df8568cbf22d6e8e98aaa01da94731237b84720e32647db0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:17:46.622201 kubelet[3206]: E0314 00:17:46.622149 3206 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d473579dfb30df32df8568cbf22d6e8e98aaa01da94731237b84720e32647db0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d473579dfb30df32df8568cbf22d6e8e98aaa01da94731237b84720e32647db0" Mar 14 00:17:46.622310 kubelet[3206]: E0314 00:17:46.622216 3206 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d473579dfb30df32df8568cbf22d6e8e98aaa01da94731237b84720e32647db0"} Mar 14 00:17:46.622310 kubelet[3206]: E0314 00:17:46.622257 3206 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"254c54ac-8310-4004-8ce4-5125625b2db5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d473579dfb30df32df8568cbf22d6e8e98aaa01da94731237b84720e32647db0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:17:46.622310 kubelet[3206]: E0314 00:17:46.622293 3206 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"254c54ac-8310-4004-8ce4-5125625b2db5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d473579dfb30df32df8568cbf22d6e8e98aaa01da94731237b84720e32647db0\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6b7865fd67-f9cpm" podUID="254c54ac-8310-4004-8ce4-5125625b2db5" Mar 14 00:17:46.622873 containerd[1999]: time="2026-03-14T00:17:46.622743536Z" level=error msg="StopPodSandbox for \"fd0dd3a729dbe1c9ed8dc6b92cae51bed81b06adf6dce9b0b6ae38fc6b80d0e8\" failed" error="failed to destroy network for sandbox \"fd0dd3a729dbe1c9ed8dc6b92cae51bed81b06adf6dce9b0b6ae38fc6b80d0e8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:17:46.623274 kubelet[3206]: E0314 00:17:46.623199 3206 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fd0dd3a729dbe1c9ed8dc6b92cae51bed81b06adf6dce9b0b6ae38fc6b80d0e8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fd0dd3a729dbe1c9ed8dc6b92cae51bed81b06adf6dce9b0b6ae38fc6b80d0e8" Mar 14 00:17:46.623274 kubelet[3206]: E0314 00:17:46.623245 3206 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fd0dd3a729dbe1c9ed8dc6b92cae51bed81b06adf6dce9b0b6ae38fc6b80d0e8"} Mar 14 00:17:46.623429 kubelet[3206]: E0314 00:17:46.623296 3206 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"77dd6a86-ca74-4e28-b372-e452edb98c28\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fd0dd3a729dbe1c9ed8dc6b92cae51bed81b06adf6dce9b0b6ae38fc6b80d0e8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" Mar 14 00:17:46.623746 kubelet[3206]: E0314 00:17:46.623592 3206 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"77dd6a86-ca74-4e28-b372-e452edb98c28\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fd0dd3a729dbe1c9ed8dc6b92cae51bed81b06adf6dce9b0b6ae38fc6b80d0e8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-b6wbf" podUID="77dd6a86-ca74-4e28-b372-e452edb98c28" Mar 14 00:17:46.623988 containerd[1999]: time="2026-03-14T00:17:46.623897248Z" level=error msg="StopPodSandbox for \"7b3f5211edbae4de8398b026e2345d82116f59a0955cbba0dfb0e980ab56a68a\" failed" error="failed to destroy network for sandbox \"7b3f5211edbae4de8398b026e2345d82116f59a0955cbba0dfb0e980ab56a68a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:17:46.624501 kubelet[3206]: E0314 00:17:46.624341 3206 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7b3f5211edbae4de8398b026e2345d82116f59a0955cbba0dfb0e980ab56a68a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7b3f5211edbae4de8398b026e2345d82116f59a0955cbba0dfb0e980ab56a68a" Mar 14 00:17:46.624501 kubelet[3206]: E0314 00:17:46.624394 3206 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7b3f5211edbae4de8398b026e2345d82116f59a0955cbba0dfb0e980ab56a68a"} Mar 14 00:17:46.624501 kubelet[3206]: E0314 00:17:46.624428 3206 
kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e85a1e0f-e7ad-4031-9d89-5b7c43cae302\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7b3f5211edbae4de8398b026e2345d82116f59a0955cbba0dfb0e980ab56a68a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:17:46.624501 kubelet[3206]: E0314 00:17:46.624461 3206 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e85a1e0f-e7ad-4031-9d89-5b7c43cae302\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7b3f5211edbae4de8398b026e2345d82116f59a0955cbba0dfb0e980ab56a68a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-b5j2r" podUID="e85a1e0f-e7ad-4031-9d89-5b7c43cae302" Mar 14 00:17:46.629080 containerd[1999]: time="2026-03-14T00:17:46.629027647Z" level=error msg="StopPodSandbox for \"5a41a15b9b5fd6cb73ac05c952e2f1cb3ec62d0068cdcfab2e8151e937776bf1\" failed" error="failed to destroy network for sandbox \"5a41a15b9b5fd6cb73ac05c952e2f1cb3ec62d0068cdcfab2e8151e937776bf1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:17:46.629412 kubelet[3206]: E0314 00:17:46.629366 3206 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5a41a15b9b5fd6cb73ac05c952e2f1cb3ec62d0068cdcfab2e8151e937776bf1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5a41a15b9b5fd6cb73ac05c952e2f1cb3ec62d0068cdcfab2e8151e937776bf1" Mar 14 00:17:46.629509 kubelet[3206]: E0314 00:17:46.629439 3206 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5a41a15b9b5fd6cb73ac05c952e2f1cb3ec62d0068cdcfab2e8151e937776bf1"} Mar 14 00:17:46.629509 kubelet[3206]: E0314 00:17:46.629479 3206 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"69ad5735-6879-4268-98e4-4112168b29a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5a41a15b9b5fd6cb73ac05c952e2f1cb3ec62d0068cdcfab2e8151e937776bf1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:17:46.629628 kubelet[3206]: E0314 00:17:46.629533 3206 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"69ad5735-6879-4268-98e4-4112168b29a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5a41a15b9b5fd6cb73ac05c952e2f1cb3ec62d0068cdcfab2e8151e937776bf1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rwsn2" podUID="69ad5735-6879-4268-98e4-4112168b29a6" Mar 14 00:17:47.449764 kubelet[3206]: I0314 00:17:47.449731 3206 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:17:49.560872 kubelet[3206]: I0314 00:17:49.560821 3206 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:17:50.103860 systemd[1]: 
run-containerd-runc-k8s.io-4bf66a4383a82f0f82145c797320c39f988738dc18b832d5ac3eda9388331e69-runc.roqxGZ.mount: Deactivated successfully. Mar 14 00:17:50.676180 containerd[1999]: time="2026-03-14T00:17:50.675830141Z" level=info msg="StopPodSandbox for \"5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae\"" Mar 14 00:17:50.921221 containerd[1999]: 2026-03-14 00:17:50.840 [INFO][4785] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae" Mar 14 00:17:50.921221 containerd[1999]: 2026-03-14 00:17:50.840 [INFO][4785] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae" iface="eth0" netns="/var/run/netns/cni-d4089eaa-7212-ed0a-de37-3f207d66eacd" Mar 14 00:17:50.921221 containerd[1999]: 2026-03-14 00:17:50.841 [INFO][4785] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae" iface="eth0" netns="/var/run/netns/cni-d4089eaa-7212-ed0a-de37-3f207d66eacd" Mar 14 00:17:50.921221 containerd[1999]: 2026-03-14 00:17:50.842 [INFO][4785] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae" iface="eth0" netns="/var/run/netns/cni-d4089eaa-7212-ed0a-de37-3f207d66eacd" Mar 14 00:17:50.921221 containerd[1999]: 2026-03-14 00:17:50.842 [INFO][4785] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae" Mar 14 00:17:50.921221 containerd[1999]: 2026-03-14 00:17:50.842 [INFO][4785] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae" Mar 14 00:17:50.921221 containerd[1999]: 2026-03-14 00:17:50.899 [INFO][4794] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae" HandleID="k8s-pod-network.5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae" Workload="ip--172--31--23--179-k8s-whisker--579d8fdf8b--qdqw8-eth0" Mar 14 00:17:50.921221 containerd[1999]: 2026-03-14 00:17:50.899 [INFO][4794] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:17:50.921221 containerd[1999]: 2026-03-14 00:17:50.899 [INFO][4794] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:17:50.921221 containerd[1999]: 2026-03-14 00:17:50.911 [WARNING][4794] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae" HandleID="k8s-pod-network.5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae" Workload="ip--172--31--23--179-k8s-whisker--579d8fdf8b--qdqw8-eth0" Mar 14 00:17:50.921221 containerd[1999]: 2026-03-14 00:17:50.911 [INFO][4794] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae" HandleID="k8s-pod-network.5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae" Workload="ip--172--31--23--179-k8s-whisker--579d8fdf8b--qdqw8-eth0" Mar 14 00:17:50.921221 containerd[1999]: 2026-03-14 00:17:50.913 [INFO][4794] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:17:50.921221 containerd[1999]: 2026-03-14 00:17:50.918 [INFO][4785] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae" Mar 14 00:17:50.923921 containerd[1999]: time="2026-03-14T00:17:50.921393754Z" level=info msg="TearDown network for sandbox \"5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae\" successfully" Mar 14 00:17:50.923921 containerd[1999]: time="2026-03-14T00:17:50.921431356Z" level=info msg="StopPodSandbox for \"5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae\" returns successfully" Mar 14 00:17:50.932716 systemd[1]: run-netns-cni\x2dd4089eaa\x2d7212\x2ded0a\x2dde37\x2d3f207d66eacd.mount: Deactivated successfully. 
Mar 14 00:17:51.053124 kubelet[3206]: I0314 00:17:51.052686 3206 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/9df0cfda-870d-4b2a-931c-ff6a051f6b62-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9df0cfda-870d-4b2a-931c-ff6a051f6b62-whisker-ca-bundle\") pod \"9df0cfda-870d-4b2a-931c-ff6a051f6b62\" (UID: \"9df0cfda-870d-4b2a-931c-ff6a051f6b62\") " Mar 14 00:17:51.053124 kubelet[3206]: I0314 00:17:51.052765 3206 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/9df0cfda-870d-4b2a-931c-ff6a051f6b62-kube-api-access-v8cc9\" (UniqueName: \"kubernetes.io/projected/9df0cfda-870d-4b2a-931c-ff6a051f6b62-kube-api-access-v8cc9\") pod \"9df0cfda-870d-4b2a-931c-ff6a051f6b62\" (UID: \"9df0cfda-870d-4b2a-931c-ff6a051f6b62\") " Mar 14 00:17:51.053124 kubelet[3206]: I0314 00:17:51.052801 3206 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/9df0cfda-870d-4b2a-931c-ff6a051f6b62-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9df0cfda-870d-4b2a-931c-ff6a051f6b62-whisker-backend-key-pair\") pod \"9df0cfda-870d-4b2a-931c-ff6a051f6b62\" (UID: \"9df0cfda-870d-4b2a-931c-ff6a051f6b62\") " Mar 14 00:17:51.053124 kubelet[3206]: I0314 00:17:51.052825 3206 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/9df0cfda-870d-4b2a-931c-ff6a051f6b62-nginx-config\" (UniqueName: \"kubernetes.io/configmap/9df0cfda-870d-4b2a-931c-ff6a051f6b62-nginx-config\") pod \"9df0cfda-870d-4b2a-931c-ff6a051f6b62\" (UID: \"9df0cfda-870d-4b2a-931c-ff6a051f6b62\") " Mar 14 00:17:51.058758 kubelet[3206]: I0314 00:17:51.058695 3206 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9df0cfda-870d-4b2a-931c-ff6a051f6b62-nginx-config" pod "9df0cfda-870d-4b2a-931c-ff6a051f6b62" (UID: "9df0cfda-870d-4b2a-931c-ff6a051f6b62"). 
InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 14 00:17:51.062910 kubelet[3206]: I0314 00:17:51.062725 3206 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9df0cfda-870d-4b2a-931c-ff6a051f6b62-whisker-ca-bundle" pod "9df0cfda-870d-4b2a-931c-ff6a051f6b62" (UID: "9df0cfda-870d-4b2a-931c-ff6a051f6b62"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 14 00:17:51.065583 kubelet[3206]: I0314 00:17:51.065503 3206 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9df0cfda-870d-4b2a-931c-ff6a051f6b62-whisker-backend-key-pair" pod "9df0cfda-870d-4b2a-931c-ff6a051f6b62" (UID: "9df0cfda-870d-4b2a-931c-ff6a051f6b62"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 14 00:17:51.069881 systemd[1]: var-lib-kubelet-pods-9df0cfda\x2d870d\x2d4b2a\x2d931c\x2dff6a051f6b62-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 14 00:17:51.073237 kubelet[3206]: I0314 00:17:51.073070 3206 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9df0cfda-870d-4b2a-931c-ff6a051f6b62-kube-api-access-v8cc9" pod "9df0cfda-870d-4b2a-931c-ff6a051f6b62" (UID: "9df0cfda-870d-4b2a-931c-ff6a051f6b62"). InnerVolumeSpecName "kube-api-access-v8cc9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 14 00:17:51.075503 systemd[1]: var-lib-kubelet-pods-9df0cfda\x2d870d\x2d4b2a\x2d931c\x2dff6a051f6b62-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv8cc9.mount: Deactivated successfully. 
Mar 14 00:17:51.153984 kubelet[3206]: I0314 00:17:51.153684 3206 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9df0cfda-870d-4b2a-931c-ff6a051f6b62-whisker-ca-bundle\") on node \"ip-172-31-23-179\" DevicePath \"\"" Mar 14 00:17:51.153984 kubelet[3206]: I0314 00:17:51.153722 3206 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v8cc9\" (UniqueName: \"kubernetes.io/projected/9df0cfda-870d-4b2a-931c-ff6a051f6b62-kube-api-access-v8cc9\") on node \"ip-172-31-23-179\" DevicePath \"\"" Mar 14 00:17:51.153984 kubelet[3206]: I0314 00:17:51.153738 3206 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9df0cfda-870d-4b2a-931c-ff6a051f6b62-whisker-backend-key-pair\") on node \"ip-172-31-23-179\" DevicePath \"\"" Mar 14 00:17:51.153984 kubelet[3206]: I0314 00:17:51.153754 3206 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/9df0cfda-870d-4b2a-931c-ff6a051f6b62-nginx-config\") on node \"ip-172-31-23-179\" DevicePath \"\"" Mar 14 00:17:51.161256 systemd[1]: Removed slice kubepods-besteffort-pod9df0cfda_870d_4b2a_931c_ff6a051f6b62.slice - libcontainer container kubepods-besteffort-pod9df0cfda_870d_4b2a_931c_ff6a051f6b62.slice. Mar 14 00:17:51.663765 systemd[1]: Created slice kubepods-besteffort-pode4bfa371_f967_43a0_97d8_f79885f35268.slice - libcontainer container kubepods-besteffort-pode4bfa371_f967_43a0_97d8_f79885f35268.slice. 
Mar 14 00:17:51.757747 kubelet[3206]: I0314 00:17:51.757694 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/e4bfa371-f967-43a0-97d8-f79885f35268-nginx-config\") pod \"whisker-679d9c7459-7n242\" (UID: \"e4bfa371-f967-43a0-97d8-f79885f35268\") " pod="calico-system/whisker-679d9c7459-7n242" Mar 14 00:17:51.757968 kubelet[3206]: I0314 00:17:51.757755 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqw9k\" (UniqueName: \"kubernetes.io/projected/e4bfa371-f967-43a0-97d8-f79885f35268-kube-api-access-vqw9k\") pod \"whisker-679d9c7459-7n242\" (UID: \"e4bfa371-f967-43a0-97d8-f79885f35268\") " pod="calico-system/whisker-679d9c7459-7n242" Mar 14 00:17:51.757968 kubelet[3206]: I0314 00:17:51.757792 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e4bfa371-f967-43a0-97d8-f79885f35268-whisker-backend-key-pair\") pod \"whisker-679d9c7459-7n242\" (UID: \"e4bfa371-f967-43a0-97d8-f79885f35268\") " pod="calico-system/whisker-679d9c7459-7n242" Mar 14 00:17:51.757968 kubelet[3206]: I0314 00:17:51.757812 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4bfa371-f967-43a0-97d8-f79885f35268-whisker-ca-bundle\") pod \"whisker-679d9c7459-7n242\" (UID: \"e4bfa371-f967-43a0-97d8-f79885f35268\") " pod="calico-system/whisker-679d9c7459-7n242" Mar 14 00:17:51.846049 systemd[1]: Started sshd@9-172.31.23.179:22-68.220.241.50:49772.service - OpenSSH per-connection server daemon (68.220.241.50:49772). 
Mar 14 00:17:51.971153 containerd[1999]: time="2026-03-14T00:17:51.971012470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-679d9c7459-7n242,Uid:e4bfa371-f967-43a0-97d8-f79885f35268,Namespace:calico-system,Attempt:0,}" Mar 14 00:17:52.152781 systemd-networkd[1807]: cali137a65b8352: Link UP Mar 14 00:17:52.153079 systemd-networkd[1807]: cali137a65b8352: Gained carrier Mar 14 00:17:52.158237 (udev-worker)[4845]: Network interface NamePolicy= disabled on kernel command line. Mar 14 00:17:52.178064 containerd[1999]: 2026-03-14 00:17:52.019 [ERROR][4825] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:17:52.178064 containerd[1999]: 2026-03-14 00:17:52.039 [INFO][4825] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--179-k8s-whisker--679d9c7459--7n242-eth0 whisker-679d9c7459- calico-system e4bfa371-f967-43a0-97d8-f79885f35268 999 0 2026-03-14 00:17:51 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:679d9c7459 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-23-179 whisker-679d9c7459-7n242 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali137a65b8352 [] [] }} ContainerID="216754affd43ca4a6d977cc12f42f4f0413c5aafa77984558a1ec08c9ffd8859" Namespace="calico-system" Pod="whisker-679d9c7459-7n242" WorkloadEndpoint="ip--172--31--23--179-k8s-whisker--679d9c7459--7n242-" Mar 14 00:17:52.178064 containerd[1999]: 2026-03-14 00:17:52.039 [INFO][4825] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="216754affd43ca4a6d977cc12f42f4f0413c5aafa77984558a1ec08c9ffd8859" Namespace="calico-system" Pod="whisker-679d9c7459-7n242" 
WorkloadEndpoint="ip--172--31--23--179-k8s-whisker--679d9c7459--7n242-eth0" Mar 14 00:17:52.178064 containerd[1999]: 2026-03-14 00:17:52.072 [INFO][4836] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="216754affd43ca4a6d977cc12f42f4f0413c5aafa77984558a1ec08c9ffd8859" HandleID="k8s-pod-network.216754affd43ca4a6d977cc12f42f4f0413c5aafa77984558a1ec08c9ffd8859" Workload="ip--172--31--23--179-k8s-whisker--679d9c7459--7n242-eth0" Mar 14 00:17:52.178064 containerd[1999]: 2026-03-14 00:17:52.083 [INFO][4836] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="216754affd43ca4a6d977cc12f42f4f0413c5aafa77984558a1ec08c9ffd8859" HandleID="k8s-pod-network.216754affd43ca4a6d977cc12f42f4f0413c5aafa77984558a1ec08c9ffd8859" Workload="ip--172--31--23--179-k8s-whisker--679d9c7459--7n242-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277520), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-179", "pod":"whisker-679d9c7459-7n242", "timestamp":"2026-03-14 00:17:52.072898535 +0000 UTC"}, Hostname:"ip-172-31-23-179", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00049ef20)} Mar 14 00:17:52.178064 containerd[1999]: 2026-03-14 00:17:52.083 [INFO][4836] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:17:52.178064 containerd[1999]: 2026-03-14 00:17:52.083 [INFO][4836] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:17:52.178064 containerd[1999]: 2026-03-14 00:17:52.083 [INFO][4836] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-179' Mar 14 00:17:52.178064 containerd[1999]: 2026-03-14 00:17:52.088 [INFO][4836] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.216754affd43ca4a6d977cc12f42f4f0413c5aafa77984558a1ec08c9ffd8859" host="ip-172-31-23-179" Mar 14 00:17:52.178064 containerd[1999]: 2026-03-14 00:17:52.097 [INFO][4836] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-23-179" Mar 14 00:17:52.178064 containerd[1999]: 2026-03-14 00:17:52.103 [INFO][4836] ipam/ipam.go 526: Trying affinity for 192.168.20.64/26 host="ip-172-31-23-179" Mar 14 00:17:52.178064 containerd[1999]: 2026-03-14 00:17:52.106 [INFO][4836] ipam/ipam.go 160: Attempting to load block cidr=192.168.20.64/26 host="ip-172-31-23-179" Mar 14 00:17:52.178064 containerd[1999]: 2026-03-14 00:17:52.109 [INFO][4836] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.20.64/26 host="ip-172-31-23-179" Mar 14 00:17:52.178064 containerd[1999]: 2026-03-14 00:17:52.109 [INFO][4836] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.20.64/26 handle="k8s-pod-network.216754affd43ca4a6d977cc12f42f4f0413c5aafa77984558a1ec08c9ffd8859" host="ip-172-31-23-179" Mar 14 00:17:52.178064 containerd[1999]: 2026-03-14 00:17:52.112 [INFO][4836] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.216754affd43ca4a6d977cc12f42f4f0413c5aafa77984558a1ec08c9ffd8859 Mar 14 00:17:52.178064 containerd[1999]: 2026-03-14 00:17:52.118 [INFO][4836] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.20.64/26 handle="k8s-pod-network.216754affd43ca4a6d977cc12f42f4f0413c5aafa77984558a1ec08c9ffd8859" host="ip-172-31-23-179" Mar 14 00:17:52.178064 containerd[1999]: 2026-03-14 00:17:52.127 [INFO][4836] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.20.65/26] block=192.168.20.64/26 
handle="k8s-pod-network.216754affd43ca4a6d977cc12f42f4f0413c5aafa77984558a1ec08c9ffd8859" host="ip-172-31-23-179" Mar 14 00:17:52.178064 containerd[1999]: 2026-03-14 00:17:52.128 [INFO][4836] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.20.65/26] handle="k8s-pod-network.216754affd43ca4a6d977cc12f42f4f0413c5aafa77984558a1ec08c9ffd8859" host="ip-172-31-23-179" Mar 14 00:17:52.178064 containerd[1999]: 2026-03-14 00:17:52.128 [INFO][4836] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:17:52.178064 containerd[1999]: 2026-03-14 00:17:52.128 [INFO][4836] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.20.65/26] IPv6=[] ContainerID="216754affd43ca4a6d977cc12f42f4f0413c5aafa77984558a1ec08c9ffd8859" HandleID="k8s-pod-network.216754affd43ca4a6d977cc12f42f4f0413c5aafa77984558a1ec08c9ffd8859" Workload="ip--172--31--23--179-k8s-whisker--679d9c7459--7n242-eth0" Mar 14 00:17:52.180620 containerd[1999]: 2026-03-14 00:17:52.131 [INFO][4825] cni-plugin/k8s.go 418: Populated endpoint ContainerID="216754affd43ca4a6d977cc12f42f4f0413c5aafa77984558a1ec08c9ffd8859" Namespace="calico-system" Pod="whisker-679d9c7459-7n242" WorkloadEndpoint="ip--172--31--23--179-k8s-whisker--679d9c7459--7n242-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--179-k8s-whisker--679d9c7459--7n242-eth0", GenerateName:"whisker-679d9c7459-", Namespace:"calico-system", SelfLink:"", UID:"e4bfa371-f967-43a0-97d8-f79885f35268", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 17, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"679d9c7459", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-179", ContainerID:"", Pod:"whisker-679d9c7459-7n242", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.20.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali137a65b8352", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:17:52.180620 containerd[1999]: 2026-03-14 00:17:52.131 [INFO][4825] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.65/32] ContainerID="216754affd43ca4a6d977cc12f42f4f0413c5aafa77984558a1ec08c9ffd8859" Namespace="calico-system" Pod="whisker-679d9c7459-7n242" WorkloadEndpoint="ip--172--31--23--179-k8s-whisker--679d9c7459--7n242-eth0" Mar 14 00:17:52.180620 containerd[1999]: 2026-03-14 00:17:52.131 [INFO][4825] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali137a65b8352 ContainerID="216754affd43ca4a6d977cc12f42f4f0413c5aafa77984558a1ec08c9ffd8859" Namespace="calico-system" Pod="whisker-679d9c7459-7n242" WorkloadEndpoint="ip--172--31--23--179-k8s-whisker--679d9c7459--7n242-eth0" Mar 14 00:17:52.180620 containerd[1999]: 2026-03-14 00:17:52.154 [INFO][4825] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="216754affd43ca4a6d977cc12f42f4f0413c5aafa77984558a1ec08c9ffd8859" Namespace="calico-system" Pod="whisker-679d9c7459-7n242" WorkloadEndpoint="ip--172--31--23--179-k8s-whisker--679d9c7459--7n242-eth0" Mar 14 00:17:52.180620 containerd[1999]: 2026-03-14 00:17:52.155 [INFO][4825] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="216754affd43ca4a6d977cc12f42f4f0413c5aafa77984558a1ec08c9ffd8859" 
Namespace="calico-system" Pod="whisker-679d9c7459-7n242" WorkloadEndpoint="ip--172--31--23--179-k8s-whisker--679d9c7459--7n242-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--179-k8s-whisker--679d9c7459--7n242-eth0", GenerateName:"whisker-679d9c7459-", Namespace:"calico-system", SelfLink:"", UID:"e4bfa371-f967-43a0-97d8-f79885f35268", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 17, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"679d9c7459", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-179", ContainerID:"216754affd43ca4a6d977cc12f42f4f0413c5aafa77984558a1ec08c9ffd8859", Pod:"whisker-679d9c7459-7n242", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.20.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali137a65b8352", MAC:"16:af:eb:bc:d6:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:17:52.180620 containerd[1999]: 2026-03-14 00:17:52.173 [INFO][4825] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="216754affd43ca4a6d977cc12f42f4f0413c5aafa77984558a1ec08c9ffd8859" Namespace="calico-system" Pod="whisker-679d9c7459-7n242" WorkloadEndpoint="ip--172--31--23--179-k8s-whisker--679d9c7459--7n242-eth0" Mar 14 00:17:52.230807 
containerd[1999]: time="2026-03-14T00:17:52.230344849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:17:52.230807 containerd[1999]: time="2026-03-14T00:17:52.230467464Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:17:52.230807 containerd[1999]: time="2026-03-14T00:17:52.230494134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:17:52.231411 containerd[1999]: time="2026-03-14T00:17:52.230620800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:17:52.276614 systemd[1]: Started cri-containerd-216754affd43ca4a6d977cc12f42f4f0413c5aafa77984558a1ec08c9ffd8859.scope - libcontainer container 216754affd43ca4a6d977cc12f42f4f0413c5aafa77984558a1ec08c9ffd8859. Mar 14 00:17:52.445961 containerd[1999]: time="2026-03-14T00:17:52.445215395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-679d9c7459-7n242,Uid:e4bfa371-f967-43a0-97d8-f79885f35268,Namespace:calico-system,Attempt:0,} returns sandbox id \"216754affd43ca4a6d977cc12f42f4f0413c5aafa77984558a1ec08c9ffd8859\"" Mar 14 00:17:52.452351 sshd[4820]: Accepted publickey for core from 68.220.241.50 port 49772 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:17:52.459063 containerd[1999]: time="2026-03-14T00:17:52.458819747Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 14 00:17:52.491312 sshd[4820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:17:52.516526 systemd-logind[1962]: New session 10 of user core. Mar 14 00:17:52.524574 systemd[1]: Started session-10.scope - Session 10 of User core. 
Mar 14 00:17:53.137597 kubelet[3206]: I0314 00:17:53.137552 3206 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="9df0cfda-870d-4b2a-931c-ff6a051f6b62" path="/var/lib/kubelet/pods/9df0cfda-870d-4b2a-931c-ff6a051f6b62/volumes" Mar 14 00:17:53.406383 kernel: calico-node[4988]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 14 00:17:53.463573 systemd-networkd[1807]: cali137a65b8352: Gained IPv6LL Mar 14 00:17:53.662919 sshd[4820]: pam_unix(sshd:session): session closed for user core Mar 14 00:17:53.697226 systemd[1]: sshd@9-172.31.23.179:22-68.220.241.50:49772.service: Deactivated successfully. Mar 14 00:17:53.699577 systemd[1]: session-10.scope: Deactivated successfully. Mar 14 00:17:53.707514 systemd-logind[1962]: Session 10 logged out. Waiting for processes to exit. Mar 14 00:17:53.716611 systemd-logind[1962]: Removed session 10. Mar 14 00:17:54.790534 systemd-networkd[1807]: vxlan.calico: Link UP Mar 14 00:17:54.790546 systemd-networkd[1807]: vxlan.calico: Gained carrier Mar 14 00:17:54.954366 containerd[1999]: time="2026-03-14T00:17:54.940715238Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 14 00:17:54.985049 containerd[1999]: time="2026-03-14T00:17:54.984993097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:17:54.989054 containerd[1999]: time="2026-03-14T00:17:54.989003311Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 2.529330828s" Mar 14 00:17:54.989054 containerd[1999]: time="2026-03-14T00:17:54.989047236Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 14 00:17:55.002744 containerd[1999]: time="2026-03-14T00:17:55.002208762Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:17:55.003346 containerd[1999]: time="2026-03-14T00:17:55.003291911Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:17:55.013274 (udev-worker)[4844]: Network interface NamePolicy= disabled on kernel command line. Mar 14 00:17:55.075926 containerd[1999]: time="2026-03-14T00:17:55.075800061Z" level=info msg="CreateContainer within sandbox \"216754affd43ca4a6d977cc12f42f4f0413c5aafa77984558a1ec08c9ffd8859\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 14 00:17:55.255384 containerd[1999]: time="2026-03-14T00:17:55.255227470Z" level=info msg="CreateContainer within sandbox \"216754affd43ca4a6d977cc12f42f4f0413c5aafa77984558a1ec08c9ffd8859\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"fc867bca40ab8f9d3dee7bb363dcc20887f9ae7fb1cdd16be05f2107d8ac060e\"" Mar 14 00:17:55.256832 containerd[1999]: time="2026-03-14T00:17:55.256056342Z" level=info msg="StartContainer for \"fc867bca40ab8f9d3dee7bb363dcc20887f9ae7fb1cdd16be05f2107d8ac060e\"" Mar 14 00:17:55.667593 systemd[1]: Started cri-containerd-fc867bca40ab8f9d3dee7bb363dcc20887f9ae7fb1cdd16be05f2107d8ac060e.scope - libcontainer container fc867bca40ab8f9d3dee7bb363dcc20887f9ae7fb1cdd16be05f2107d8ac060e. 
Mar 14 00:17:55.748583 containerd[1999]: time="2026-03-14T00:17:55.748362133Z" level=info msg="StartContainer for \"fc867bca40ab8f9d3dee7bb363dcc20887f9ae7fb1cdd16be05f2107d8ac060e\" returns successfully" Mar 14 00:17:55.759783 containerd[1999]: time="2026-03-14T00:17:55.759654074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 14 00:17:56.398506 systemd-networkd[1807]: vxlan.calico: Gained IPv6LL Mar 14 00:17:57.168963 containerd[1999]: time="2026-03-14T00:17:57.168911731Z" level=info msg="StopPodSandbox for \"6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6\"" Mar 14 00:17:57.680602 containerd[1999]: 2026-03-14 00:17:57.368 [INFO][5169] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6" Mar 14 00:17:57.680602 containerd[1999]: 2026-03-14 00:17:57.372 [INFO][5169] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6" iface="eth0" netns="/var/run/netns/cni-9f977058-07de-2556-dd84-4ae3bdf701c1" Mar 14 00:17:57.680602 containerd[1999]: 2026-03-14 00:17:57.372 [INFO][5169] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6" iface="eth0" netns="/var/run/netns/cni-9f977058-07de-2556-dd84-4ae3bdf701c1" Mar 14 00:17:57.680602 containerd[1999]: 2026-03-14 00:17:57.373 [INFO][5169] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6" iface="eth0" netns="/var/run/netns/cni-9f977058-07de-2556-dd84-4ae3bdf701c1" Mar 14 00:17:57.680602 containerd[1999]: 2026-03-14 00:17:57.373 [INFO][5169] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6" Mar 14 00:17:57.680602 containerd[1999]: 2026-03-14 00:17:57.374 [INFO][5169] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6" Mar 14 00:17:57.680602 containerd[1999]: 2026-03-14 00:17:57.656 [INFO][5176] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6" HandleID="k8s-pod-network.6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6" Workload="ip--172--31--23--179-k8s-calico--kube--controllers--7b6c7c5dc8--27jg8-eth0" Mar 14 00:17:57.680602 containerd[1999]: 2026-03-14 00:17:57.656 [INFO][5176] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:17:57.680602 containerd[1999]: 2026-03-14 00:17:57.656 [INFO][5176] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:17:57.680602 containerd[1999]: 2026-03-14 00:17:57.668 [WARNING][5176] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6" HandleID="k8s-pod-network.6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6" Workload="ip--172--31--23--179-k8s-calico--kube--controllers--7b6c7c5dc8--27jg8-eth0" Mar 14 00:17:57.680602 containerd[1999]: 2026-03-14 00:17:57.668 [INFO][5176] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6" HandleID="k8s-pod-network.6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6" Workload="ip--172--31--23--179-k8s-calico--kube--controllers--7b6c7c5dc8--27jg8-eth0" Mar 14 00:17:57.680602 containerd[1999]: 2026-03-14 00:17:57.671 [INFO][5176] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:17:57.680602 containerd[1999]: 2026-03-14 00:17:57.676 [INFO][5169] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6" Mar 14 00:17:57.685130 systemd[1]: run-netns-cni\x2d9f977058\x2d07de\x2d2556\x2ddd84\x2d4ae3bdf701c1.mount: Deactivated successfully. Mar 14 00:17:57.687905 containerd[1999]: time="2026-03-14T00:17:57.685067252Z" level=info msg="TearDown network for sandbox \"6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6\" successfully" Mar 14 00:17:57.687905 containerd[1999]: time="2026-03-14T00:17:57.687169683Z" level=info msg="StopPodSandbox for \"6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6\" returns successfully" Mar 14 00:17:57.752869 containerd[1999]: time="2026-03-14T00:17:57.752823081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b6c7c5dc8-27jg8,Uid:9b3ad80f-d1e3-4c5e-9df2-fb2b5357e120,Namespace:calico-system,Attempt:1,}" Mar 14 00:17:57.935923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount976544379.mount: Deactivated successfully. 
Mar 14 00:17:57.952276 containerd[1999]: time="2026-03-14T00:17:57.951438084Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:17:57.953853 containerd[1999]: time="2026-03-14T00:17:57.953795233Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 14 00:17:57.955419 containerd[1999]: time="2026-03-14T00:17:57.955036175Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:17:57.960947 containerd[1999]: time="2026-03-14T00:17:57.960875898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:17:57.963770 containerd[1999]: time="2026-03-14T00:17:57.963613562Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.203913342s" Mar 14 00:17:57.963770 containerd[1999]: time="2026-03-14T00:17:57.963663343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 14 00:17:58.013835 (udev-worker)[5074]: Network interface NamePolicy= disabled on kernel command line. 
Mar 14 00:17:58.019612 systemd-networkd[1807]: cali78c80336f12: Link UP Mar 14 00:17:58.019908 systemd-networkd[1807]: cali78c80336f12: Gained carrier Mar 14 00:17:58.036350 containerd[1999]: time="2026-03-14T00:17:58.035037481Z" level=info msg="CreateContainer within sandbox \"216754affd43ca4a6d977cc12f42f4f0413c5aafa77984558a1ec08c9ffd8859\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 14 00:17:58.057877 containerd[1999]: time="2026-03-14T00:17:58.057804459Z" level=info msg="CreateContainer within sandbox \"216754affd43ca4a6d977cc12f42f4f0413c5aafa77984558a1ec08c9ffd8859\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"0789412d5c857f5eec3cf5ce1b70dea92be90300c622c3e9821e72c199d24afa\"" Mar 14 00:17:58.060075 containerd[1999]: time="2026-03-14T00:17:58.060029893Z" level=info msg="StartContainer for \"0789412d5c857f5eec3cf5ce1b70dea92be90300c622c3e9821e72c199d24afa\"" Mar 14 00:17:58.061472 containerd[1999]: 2026-03-14 00:17:57.861 [INFO][5190] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--179-k8s-calico--kube--controllers--7b6c7c5dc8--27jg8-eth0 calico-kube-controllers-7b6c7c5dc8- calico-system 9b3ad80f-d1e3-4c5e-9df2-fb2b5357e120 1036 0 2026-03-14 00:17:20 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7b6c7c5dc8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-23-179 calico-kube-controllers-7b6c7c5dc8-27jg8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali78c80336f12 [] [] }} ContainerID="0aee146d67e1ecce9ea57fd0a7a80e3d79084f761e8ff6cd81baaa2696fca443" Namespace="calico-system" Pod="calico-kube-controllers-7b6c7c5dc8-27jg8" 
WorkloadEndpoint="ip--172--31--23--179-k8s-calico--kube--controllers--7b6c7c5dc8--27jg8-" Mar 14 00:17:58.061472 containerd[1999]: 2026-03-14 00:17:57.862 [INFO][5190] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0aee146d67e1ecce9ea57fd0a7a80e3d79084f761e8ff6cd81baaa2696fca443" Namespace="calico-system" Pod="calico-kube-controllers-7b6c7c5dc8-27jg8" WorkloadEndpoint="ip--172--31--23--179-k8s-calico--kube--controllers--7b6c7c5dc8--27jg8-eth0" Mar 14 00:17:58.061472 containerd[1999]: 2026-03-14 00:17:57.913 [INFO][5198] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0aee146d67e1ecce9ea57fd0a7a80e3d79084f761e8ff6cd81baaa2696fca443" HandleID="k8s-pod-network.0aee146d67e1ecce9ea57fd0a7a80e3d79084f761e8ff6cd81baaa2696fca443" Workload="ip--172--31--23--179-k8s-calico--kube--controllers--7b6c7c5dc8--27jg8-eth0" Mar 14 00:17:58.061472 containerd[1999]: 2026-03-14 00:17:57.928 [INFO][5198] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0aee146d67e1ecce9ea57fd0a7a80e3d79084f761e8ff6cd81baaa2696fca443" HandleID="k8s-pod-network.0aee146d67e1ecce9ea57fd0a7a80e3d79084f761e8ff6cd81baaa2696fca443" Workload="ip--172--31--23--179-k8s-calico--kube--controllers--7b6c7c5dc8--27jg8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde70), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-179", "pod":"calico-kube-controllers-7b6c7c5dc8-27jg8", "timestamp":"2026-03-14 00:17:57.913375005 +0000 UTC"}, Hostname:"ip-172-31-23-179", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001886e0)} Mar 14 00:17:58.061472 containerd[1999]: 2026-03-14 00:17:57.928 [INFO][5198] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 14 00:17:58.061472 containerd[1999]: 2026-03-14 00:17:57.928 [INFO][5198] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:17:58.061472 containerd[1999]: 2026-03-14 00:17:57.928 [INFO][5198] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-179' Mar 14 00:17:58.061472 containerd[1999]: 2026-03-14 00:17:57.938 [INFO][5198] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0aee146d67e1ecce9ea57fd0a7a80e3d79084f761e8ff6cd81baaa2696fca443" host="ip-172-31-23-179" Mar 14 00:17:58.061472 containerd[1999]: 2026-03-14 00:17:57.948 [INFO][5198] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-23-179" Mar 14 00:17:58.061472 containerd[1999]: 2026-03-14 00:17:57.957 [INFO][5198] ipam/ipam.go 526: Trying affinity for 192.168.20.64/26 host="ip-172-31-23-179" Mar 14 00:17:58.061472 containerd[1999]: 2026-03-14 00:17:57.962 [INFO][5198] ipam/ipam.go 160: Attempting to load block cidr=192.168.20.64/26 host="ip-172-31-23-179" Mar 14 00:17:58.061472 containerd[1999]: 2026-03-14 00:17:57.967 [INFO][5198] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.20.64/26 host="ip-172-31-23-179" Mar 14 00:17:58.061472 containerd[1999]: 2026-03-14 00:17:57.968 [INFO][5198] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.20.64/26 handle="k8s-pod-network.0aee146d67e1ecce9ea57fd0a7a80e3d79084f761e8ff6cd81baaa2696fca443" host="ip-172-31-23-179" Mar 14 00:17:58.061472 containerd[1999]: 2026-03-14 00:17:57.974 [INFO][5198] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0aee146d67e1ecce9ea57fd0a7a80e3d79084f761e8ff6cd81baaa2696fca443 Mar 14 00:17:58.061472 containerd[1999]: 2026-03-14 00:17:57.984 [INFO][5198] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.20.64/26 handle="k8s-pod-network.0aee146d67e1ecce9ea57fd0a7a80e3d79084f761e8ff6cd81baaa2696fca443" host="ip-172-31-23-179" Mar 14 00:17:58.061472 
containerd[1999]: 2026-03-14 00:17:58.002 [INFO][5198] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.20.66/26] block=192.168.20.64/26 handle="k8s-pod-network.0aee146d67e1ecce9ea57fd0a7a80e3d79084f761e8ff6cd81baaa2696fca443" host="ip-172-31-23-179" Mar 14 00:17:58.061472 containerd[1999]: 2026-03-14 00:17:58.002 [INFO][5198] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.20.66/26] handle="k8s-pod-network.0aee146d67e1ecce9ea57fd0a7a80e3d79084f761e8ff6cd81baaa2696fca443" host="ip-172-31-23-179" Mar 14 00:17:58.061472 containerd[1999]: 2026-03-14 00:17:58.002 [INFO][5198] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:17:58.061472 containerd[1999]: 2026-03-14 00:17:58.003 [INFO][5198] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.20.66/26] IPv6=[] ContainerID="0aee146d67e1ecce9ea57fd0a7a80e3d79084f761e8ff6cd81baaa2696fca443" HandleID="k8s-pod-network.0aee146d67e1ecce9ea57fd0a7a80e3d79084f761e8ff6cd81baaa2696fca443" Workload="ip--172--31--23--179-k8s-calico--kube--controllers--7b6c7c5dc8--27jg8-eth0" Mar 14 00:17:58.097287 containerd[1999]: 2026-03-14 00:17:58.009 [INFO][5190] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0aee146d67e1ecce9ea57fd0a7a80e3d79084f761e8ff6cd81baaa2696fca443" Namespace="calico-system" Pod="calico-kube-controllers-7b6c7c5dc8-27jg8" WorkloadEndpoint="ip--172--31--23--179-k8s-calico--kube--controllers--7b6c7c5dc8--27jg8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--179-k8s-calico--kube--controllers--7b6c7c5dc8--27jg8-eth0", GenerateName:"calico-kube-controllers-7b6c7c5dc8-", Namespace:"calico-system", SelfLink:"", UID:"9b3ad80f-d1e3-4c5e-9df2-fb2b5357e120", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 17, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b6c7c5dc8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-179", ContainerID:"", Pod:"calico-kube-controllers-7b6c7c5dc8-27jg8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.20.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali78c80336f12", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:17:58.097287 containerd[1999]: 2026-03-14 00:17:58.009 [INFO][5190] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.66/32] ContainerID="0aee146d67e1ecce9ea57fd0a7a80e3d79084f761e8ff6cd81baaa2696fca443" Namespace="calico-system" Pod="calico-kube-controllers-7b6c7c5dc8-27jg8" WorkloadEndpoint="ip--172--31--23--179-k8s-calico--kube--controllers--7b6c7c5dc8--27jg8-eth0" Mar 14 00:17:58.097287 containerd[1999]: 2026-03-14 00:17:58.009 [INFO][5190] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali78c80336f12 ContainerID="0aee146d67e1ecce9ea57fd0a7a80e3d79084f761e8ff6cd81baaa2696fca443" Namespace="calico-system" Pod="calico-kube-controllers-7b6c7c5dc8-27jg8" WorkloadEndpoint="ip--172--31--23--179-k8s-calico--kube--controllers--7b6c7c5dc8--27jg8-eth0" Mar 14 00:17:58.097287 containerd[1999]: 2026-03-14 00:17:58.022 [INFO][5190] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="0aee146d67e1ecce9ea57fd0a7a80e3d79084f761e8ff6cd81baaa2696fca443" Namespace="calico-system" Pod="calico-kube-controllers-7b6c7c5dc8-27jg8" WorkloadEndpoint="ip--172--31--23--179-k8s-calico--kube--controllers--7b6c7c5dc8--27jg8-eth0" Mar 14 00:17:58.097287 containerd[1999]: 2026-03-14 00:17:58.030 [INFO][5190] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0aee146d67e1ecce9ea57fd0a7a80e3d79084f761e8ff6cd81baaa2696fca443" Namespace="calico-system" Pod="calico-kube-controllers-7b6c7c5dc8-27jg8" WorkloadEndpoint="ip--172--31--23--179-k8s-calico--kube--controllers--7b6c7c5dc8--27jg8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--179-k8s-calico--kube--controllers--7b6c7c5dc8--27jg8-eth0", GenerateName:"calico-kube-controllers-7b6c7c5dc8-", Namespace:"calico-system", SelfLink:"", UID:"9b3ad80f-d1e3-4c5e-9df2-fb2b5357e120", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 17, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b6c7c5dc8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-179", ContainerID:"0aee146d67e1ecce9ea57fd0a7a80e3d79084f761e8ff6cd81baaa2696fca443", Pod:"calico-kube-controllers-7b6c7c5dc8-27jg8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.20.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali78c80336f12", MAC:"76:55:4c:ab:1c:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:17:58.097287 containerd[1999]: 2026-03-14 00:17:58.055 [INFO][5190] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0aee146d67e1ecce9ea57fd0a7a80e3d79084f761e8ff6cd81baaa2696fca443" Namespace="calico-system" Pod="calico-kube-controllers-7b6c7c5dc8-27jg8" WorkloadEndpoint="ip--172--31--23--179-k8s-calico--kube--controllers--7b6c7c5dc8--27jg8-eth0" Mar 14 00:17:58.109711 systemd[1]: Started cri-containerd-0789412d5c857f5eec3cf5ce1b70dea92be90300c622c3e9821e72c199d24afa.scope - libcontainer container 0789412d5c857f5eec3cf5ce1b70dea92be90300c622c3e9821e72c199d24afa. Mar 14 00:17:58.129749 containerd[1999]: time="2026-03-14T00:17:58.127950071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:17:58.129892 containerd[1999]: time="2026-03-14T00:17:58.129820738Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:17:58.129968 containerd[1999]: time="2026-03-14T00:17:58.129883544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:17:58.130151 containerd[1999]: time="2026-03-14T00:17:58.130016189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:17:58.159470 systemd[1]: Started cri-containerd-0aee146d67e1ecce9ea57fd0a7a80e3d79084f761e8ff6cd81baaa2696fca443.scope - libcontainer container 0aee146d67e1ecce9ea57fd0a7a80e3d79084f761e8ff6cd81baaa2696fca443. 
Mar 14 00:17:58.241486 containerd[1999]: time="2026-03-14T00:17:58.241008413Z" level=info msg="StartContainer for \"0789412d5c857f5eec3cf5ce1b70dea92be90300c622c3e9821e72c199d24afa\" returns successfully" Mar 14 00:17:58.266608 containerd[1999]: time="2026-03-14T00:17:58.266552202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b6c7c5dc8-27jg8,Uid:9b3ad80f-d1e3-4c5e-9df2-fb2b5357e120,Namespace:calico-system,Attempt:1,} returns sandbox id \"0aee146d67e1ecce9ea57fd0a7a80e3d79084f761e8ff6cd81baaa2696fca443\"" Mar 14 00:17:58.271869 containerd[1999]: time="2026-03-14T00:17:58.271517740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 14 00:17:58.748003 systemd[1]: Started sshd@10-172.31.23.179:22-68.220.241.50:49246.service - OpenSSH per-connection server daemon (68.220.241.50:49246). Mar 14 00:17:59.084836 containerd[1999]: time="2026-03-14T00:17:59.084767138Z" level=info msg="StopPodSandbox for \"5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae\"" Mar 14 00:17:59.143488 containerd[1999]: time="2026-03-14T00:17:59.143438651Z" level=info msg="StopPodSandbox for \"7b3f5211edbae4de8398b026e2345d82116f59a0955cbba0dfb0e980ab56a68a\"" Mar 14 00:17:59.148511 containerd[1999]: time="2026-03-14T00:17:59.148441968Z" level=info msg="StopPodSandbox for \"2590695468abe8c58989f236737757aff8ed44918fae7db28a808e9fae292ec4\"" Mar 14 00:17:59.246203 containerd[1999]: 2026-03-14 00:17:59.141 [WARNING][5312] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae" WorkloadEndpoint="ip--172--31--23--179-k8s-whisker--579d8fdf8b--qdqw8-eth0" Mar 14 00:17:59.246203 containerd[1999]: 2026-03-14 00:17:59.142 [INFO][5312] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae" Mar 14 00:17:59.246203 containerd[1999]: 
2026-03-14 00:17:59.142 [INFO][5312] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae" iface="eth0" netns="" Mar 14 00:17:59.246203 containerd[1999]: 2026-03-14 00:17:59.142 [INFO][5312] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae" Mar 14 00:17:59.246203 containerd[1999]: 2026-03-14 00:17:59.143 [INFO][5312] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae" Mar 14 00:17:59.246203 containerd[1999]: 2026-03-14 00:17:59.217 [INFO][5320] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae" HandleID="k8s-pod-network.5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae" Workload="ip--172--31--23--179-k8s-whisker--579d8fdf8b--qdqw8-eth0" Mar 14 00:17:59.246203 containerd[1999]: 2026-03-14 00:17:59.217 [INFO][5320] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:17:59.246203 containerd[1999]: 2026-03-14 00:17:59.217 [INFO][5320] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:17:59.246203 containerd[1999]: 2026-03-14 00:17:59.235 [WARNING][5320] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae" HandleID="k8s-pod-network.5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae" Workload="ip--172--31--23--179-k8s-whisker--579d8fdf8b--qdqw8-eth0" Mar 14 00:17:59.246203 containerd[1999]: 2026-03-14 00:17:59.235 [INFO][5320] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae" HandleID="k8s-pod-network.5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae" Workload="ip--172--31--23--179-k8s-whisker--579d8fdf8b--qdqw8-eth0" Mar 14 00:17:59.246203 containerd[1999]: 2026-03-14 00:17:59.238 [INFO][5320] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:17:59.246203 containerd[1999]: 2026-03-14 00:17:59.241 [INFO][5312] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae" Mar 14 00:17:59.246203 containerd[1999]: time="2026-03-14T00:17:59.246020733Z" level=info msg="TearDown network for sandbox \"5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae\" successfully" Mar 14 00:17:59.246203 containerd[1999]: time="2026-03-14T00:17:59.246056370Z" level=info msg="StopPodSandbox for \"5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae\" returns successfully" Mar 14 00:17:59.255677 containerd[1999]: time="2026-03-14T00:17:59.255099084Z" level=info msg="RemovePodSandbox for \"5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae\"" Mar 14 00:17:59.272498 sshd[5302]: Accepted publickey for core from 68.220.241.50 port 49246 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:17:59.277015 sshd[5302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:17:59.284363 containerd[1999]: time="2026-03-14T00:17:59.284201982Z" level=info msg="Forcibly stopping sandbox 
\"5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae\"" Mar 14 00:17:59.293370 systemd-logind[1962]: New session 11 of user core. Mar 14 00:17:59.299826 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 14 00:17:59.331537 kubelet[3206]: I0314 00:17:59.331364 3206 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-679d9c7459-7n242" podStartSLOduration=2.819375071 podStartE2EDuration="8.328233741s" podCreationTimestamp="2026-03-14 00:17:51 +0000 UTC" firstStartedPulling="2026-03-14 00:17:52.457270333 +0000 UTC m=+53.524693532" lastFinishedPulling="2026-03-14 00:17:57.966128991 +0000 UTC m=+59.033552202" observedRunningTime="2026-03-14 00:17:58.5742901 +0000 UTC m=+59.641713321" watchObservedRunningTime="2026-03-14 00:17:59.328233741 +0000 UTC m=+60.395656964" Mar 14 00:17:59.424497 containerd[1999]: 2026-03-14 00:17:59.331 [INFO][5343] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7b3f5211edbae4de8398b026e2345d82116f59a0955cbba0dfb0e980ab56a68a" Mar 14 00:17:59.424497 containerd[1999]: 2026-03-14 00:17:59.332 [INFO][5343] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7b3f5211edbae4de8398b026e2345d82116f59a0955cbba0dfb0e980ab56a68a" iface="eth0" netns="/var/run/netns/cni-ca2b0c35-bfb6-c583-6ff1-82aa69a6a689" Mar 14 00:17:59.424497 containerd[1999]: 2026-03-14 00:17:59.332 [INFO][5343] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7b3f5211edbae4de8398b026e2345d82116f59a0955cbba0dfb0e980ab56a68a" iface="eth0" netns="/var/run/netns/cni-ca2b0c35-bfb6-c583-6ff1-82aa69a6a689" Mar 14 00:17:59.424497 containerd[1999]: 2026-03-14 00:17:59.333 [INFO][5343] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7b3f5211edbae4de8398b026e2345d82116f59a0955cbba0dfb0e980ab56a68a" iface="eth0" netns="/var/run/netns/cni-ca2b0c35-bfb6-c583-6ff1-82aa69a6a689" Mar 14 00:17:59.424497 containerd[1999]: 2026-03-14 00:17:59.333 [INFO][5343] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7b3f5211edbae4de8398b026e2345d82116f59a0955cbba0dfb0e980ab56a68a" Mar 14 00:17:59.424497 containerd[1999]: 2026-03-14 00:17:59.333 [INFO][5343] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7b3f5211edbae4de8398b026e2345d82116f59a0955cbba0dfb0e980ab56a68a" Mar 14 00:17:59.424497 containerd[1999]: 2026-03-14 00:17:59.391 [INFO][5372] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7b3f5211edbae4de8398b026e2345d82116f59a0955cbba0dfb0e980ab56a68a" HandleID="k8s-pod-network.7b3f5211edbae4de8398b026e2345d82116f59a0955cbba0dfb0e980ab56a68a" Workload="ip--172--31--23--179-k8s-coredns--7d764666f9--b5j2r-eth0" Mar 14 00:17:59.424497 containerd[1999]: 2026-03-14 00:17:59.392 [INFO][5372] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:17:59.424497 containerd[1999]: 2026-03-14 00:17:59.392 [INFO][5372] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:17:59.424497 containerd[1999]: 2026-03-14 00:17:59.402 [WARNING][5372] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7b3f5211edbae4de8398b026e2345d82116f59a0955cbba0dfb0e980ab56a68a" HandleID="k8s-pod-network.7b3f5211edbae4de8398b026e2345d82116f59a0955cbba0dfb0e980ab56a68a" Workload="ip--172--31--23--179-k8s-coredns--7d764666f9--b5j2r-eth0" Mar 14 00:17:59.424497 containerd[1999]: 2026-03-14 00:17:59.402 [INFO][5372] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7b3f5211edbae4de8398b026e2345d82116f59a0955cbba0dfb0e980ab56a68a" HandleID="k8s-pod-network.7b3f5211edbae4de8398b026e2345d82116f59a0955cbba0dfb0e980ab56a68a" Workload="ip--172--31--23--179-k8s-coredns--7d764666f9--b5j2r-eth0" Mar 14 00:17:59.424497 containerd[1999]: 2026-03-14 00:17:59.405 [INFO][5372] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:17:59.424497 containerd[1999]: 2026-03-14 00:17:59.410 [INFO][5343] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7b3f5211edbae4de8398b026e2345d82116f59a0955cbba0dfb0e980ab56a68a" Mar 14 00:17:59.428928 systemd[1]: run-netns-cni\x2dca2b0c35\x2dbfb6\x2dc583\x2d6ff1\x2d82aa69a6a689.mount: Deactivated successfully. 
Mar 14 00:17:59.431242 containerd[1999]: time="2026-03-14T00:17:59.431055129Z" level=info msg="TearDown network for sandbox \"7b3f5211edbae4de8398b026e2345d82116f59a0955cbba0dfb0e980ab56a68a\" successfully" Mar 14 00:17:59.431242 containerd[1999]: time="2026-03-14T00:17:59.431096571Z" level=info msg="StopPodSandbox for \"7b3f5211edbae4de8398b026e2345d82116f59a0955cbba0dfb0e980ab56a68a\" returns successfully" Mar 14 00:17:59.435819 containerd[1999]: time="2026-03-14T00:17:59.435608529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-b5j2r,Uid:e85a1e0f-e7ad-4031-9d89-5b7c43cae302,Namespace:kube-system,Attempt:1,}" Mar 14 00:17:59.461791 containerd[1999]: 2026-03-14 00:17:59.337 [INFO][5345] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2590695468abe8c58989f236737757aff8ed44918fae7db28a808e9fae292ec4" Mar 14 00:17:59.461791 containerd[1999]: 2026-03-14 00:17:59.341 [INFO][5345] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2590695468abe8c58989f236737757aff8ed44918fae7db28a808e9fae292ec4" iface="eth0" netns="/var/run/netns/cni-83e2c5f8-5208-c150-30be-6bea4e50f111" Mar 14 00:17:59.461791 containerd[1999]: 2026-03-14 00:17:59.342 [INFO][5345] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2590695468abe8c58989f236737757aff8ed44918fae7db28a808e9fae292ec4" iface="eth0" netns="/var/run/netns/cni-83e2c5f8-5208-c150-30be-6bea4e50f111" Mar 14 00:17:59.461791 containerd[1999]: 2026-03-14 00:17:59.343 [INFO][5345] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2590695468abe8c58989f236737757aff8ed44918fae7db28a808e9fae292ec4" iface="eth0" netns="/var/run/netns/cni-83e2c5f8-5208-c150-30be-6bea4e50f111" Mar 14 00:17:59.461791 containerd[1999]: 2026-03-14 00:17:59.344 [INFO][5345] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2590695468abe8c58989f236737757aff8ed44918fae7db28a808e9fae292ec4" Mar 14 00:17:59.461791 containerd[1999]: 2026-03-14 00:17:59.344 [INFO][5345] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2590695468abe8c58989f236737757aff8ed44918fae7db28a808e9fae292ec4" Mar 14 00:17:59.461791 containerd[1999]: 2026-03-14 00:17:59.411 [INFO][5377] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2590695468abe8c58989f236737757aff8ed44918fae7db28a808e9fae292ec4" HandleID="k8s-pod-network.2590695468abe8c58989f236737757aff8ed44918fae7db28a808e9fae292ec4" Workload="ip--172--31--23--179-k8s-calico--apiserver--6b7865fd67--6tjs6-eth0" Mar 14 00:17:59.461791 containerd[1999]: 2026-03-14 00:17:59.411 [INFO][5377] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:17:59.461791 containerd[1999]: 2026-03-14 00:17:59.411 [INFO][5377] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:17:59.461791 containerd[1999]: 2026-03-14 00:17:59.436 [WARNING][5377] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2590695468abe8c58989f236737757aff8ed44918fae7db28a808e9fae292ec4" HandleID="k8s-pod-network.2590695468abe8c58989f236737757aff8ed44918fae7db28a808e9fae292ec4" Workload="ip--172--31--23--179-k8s-calico--apiserver--6b7865fd67--6tjs6-eth0" Mar 14 00:17:59.461791 containerd[1999]: 2026-03-14 00:17:59.436 [INFO][5377] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2590695468abe8c58989f236737757aff8ed44918fae7db28a808e9fae292ec4" HandleID="k8s-pod-network.2590695468abe8c58989f236737757aff8ed44918fae7db28a808e9fae292ec4" Workload="ip--172--31--23--179-k8s-calico--apiserver--6b7865fd67--6tjs6-eth0" Mar 14 00:17:59.461791 containerd[1999]: 2026-03-14 00:17:59.441 [INFO][5377] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:17:59.461791 containerd[1999]: 2026-03-14 00:17:59.445 [INFO][5345] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2590695468abe8c58989f236737757aff8ed44918fae7db28a808e9fae292ec4" Mar 14 00:17:59.465543 containerd[1999]: time="2026-03-14T00:17:59.463642331Z" level=info msg="TearDown network for sandbox \"2590695468abe8c58989f236737757aff8ed44918fae7db28a808e9fae292ec4\" successfully" Mar 14 00:17:59.465543 containerd[1999]: time="2026-03-14T00:17:59.463679033Z" level=info msg="StopPodSandbox for \"2590695468abe8c58989f236737757aff8ed44918fae7db28a808e9fae292ec4\" returns successfully" Mar 14 00:17:59.473777 systemd[1]: run-netns-cni\x2d83e2c5f8\x2d5208\x2dc150\x2d30be\x2d6bea4e50f111.mount: Deactivated successfully. 
Mar 14 00:17:59.481941 containerd[1999]: time="2026-03-14T00:17:59.481897587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b7865fd67-6tjs6,Uid:83c05f9a-996d-4297-b2d6-b3ceffdc570f,Namespace:calico-system,Attempt:1,}" Mar 14 00:17:59.594261 containerd[1999]: 2026-03-14 00:17:59.433 [WARNING][5367] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae" WorkloadEndpoint="ip--172--31--23--179-k8s-whisker--579d8fdf8b--qdqw8-eth0" Mar 14 00:17:59.594261 containerd[1999]: 2026-03-14 00:17:59.435 [INFO][5367] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae" Mar 14 00:17:59.594261 containerd[1999]: 2026-03-14 00:17:59.435 [INFO][5367] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae" iface="eth0" netns="" Mar 14 00:17:59.594261 containerd[1999]: 2026-03-14 00:17:59.435 [INFO][5367] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae" Mar 14 00:17:59.594261 containerd[1999]: 2026-03-14 00:17:59.435 [INFO][5367] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae" Mar 14 00:17:59.594261 containerd[1999]: 2026-03-14 00:17:59.538 [INFO][5389] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae" HandleID="k8s-pod-network.5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae" Workload="ip--172--31--23--179-k8s-whisker--579d8fdf8b--qdqw8-eth0" Mar 14 00:17:59.594261 containerd[1999]: 2026-03-14 00:17:59.548 [INFO][5389] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 14 00:17:59.594261 containerd[1999]: 2026-03-14 00:17:59.551 [INFO][5389] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:17:59.594261 containerd[1999]: 2026-03-14 00:17:59.577 [WARNING][5389] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae" HandleID="k8s-pod-network.5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae" Workload="ip--172--31--23--179-k8s-whisker--579d8fdf8b--qdqw8-eth0" Mar 14 00:17:59.594261 containerd[1999]: 2026-03-14 00:17:59.577 [INFO][5389] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae" HandleID="k8s-pod-network.5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae" Workload="ip--172--31--23--179-k8s-whisker--579d8fdf8b--qdqw8-eth0" Mar 14 00:17:59.594261 containerd[1999]: 2026-03-14 00:17:59.582 [INFO][5389] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:17:59.594261 containerd[1999]: 2026-03-14 00:17:59.587 [INFO][5367] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae" Mar 14 00:17:59.594261 containerd[1999]: time="2026-03-14T00:17:59.594187178Z" level=info msg="TearDown network for sandbox \"5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae\" successfully" Mar 14 00:17:59.645500 containerd[1999]: time="2026-03-14T00:17:59.645110416Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:17:59.645500 containerd[1999]: time="2026-03-14T00:17:59.645446426Z" level=info msg="RemovePodSandbox \"5458531aeff02afa56e4b04d766453041af0860976e0caed7bf832682e9f22ae\" returns successfully" Mar 14 00:17:59.647582 containerd[1999]: time="2026-03-14T00:17:59.647543040Z" level=info msg="StopPodSandbox for \"6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6\"" Mar 14 00:17:59.663239 systemd-networkd[1807]: cali78c80336f12: Gained IPv6LL Mar 14 00:18:00.012498 systemd-networkd[1807]: cali535ce32ec59: Link UP Mar 14 00:18:00.017105 systemd-networkd[1807]: cali535ce32ec59: Gained carrier Mar 14 00:18:00.103950 containerd[1999]: 2026-03-14 00:17:59.637 [INFO][5394] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--179-k8s-coredns--7d764666f9--b5j2r-eth0 coredns-7d764666f9- kube-system e85a1e0f-e7ad-4031-9d89-5b7c43cae302 1060 0 2026-03-14 00:17:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-179 coredns-7d764666f9-b5j2r eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali535ce32ec59 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="960cfc1dddc0556d6875271400dc6700b76077f9cf4ae817caa46d0e605d39d7" Namespace="kube-system" Pod="coredns-7d764666f9-b5j2r" WorkloadEndpoint="ip--172--31--23--179-k8s-coredns--7d764666f9--b5j2r-" Mar 14 00:18:00.103950 containerd[1999]: 2026-03-14 00:17:59.638 [INFO][5394] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="960cfc1dddc0556d6875271400dc6700b76077f9cf4ae817caa46d0e605d39d7" Namespace="kube-system" Pod="coredns-7d764666f9-b5j2r" WorkloadEndpoint="ip--172--31--23--179-k8s-coredns--7d764666f9--b5j2r-eth0" Mar 14 00:18:00.103950 containerd[1999]: 2026-03-14 
00:17:59.798 [INFO][5428] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="960cfc1dddc0556d6875271400dc6700b76077f9cf4ae817caa46d0e605d39d7" HandleID="k8s-pod-network.960cfc1dddc0556d6875271400dc6700b76077f9cf4ae817caa46d0e605d39d7" Workload="ip--172--31--23--179-k8s-coredns--7d764666f9--b5j2r-eth0" Mar 14 00:18:00.103950 containerd[1999]: 2026-03-14 00:17:59.831 [INFO][5428] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="960cfc1dddc0556d6875271400dc6700b76077f9cf4ae817caa46d0e605d39d7" HandleID="k8s-pod-network.960cfc1dddc0556d6875271400dc6700b76077f9cf4ae817caa46d0e605d39d7" Workload="ip--172--31--23--179-k8s-coredns--7d764666f9--b5j2r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000394870), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-179", "pod":"coredns-7d764666f9-b5j2r", "timestamp":"2026-03-14 00:17:59.79799589 +0000 UTC"}, Hostname:"ip-172-31-23-179", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00018b340)} Mar 14 00:18:00.103950 containerd[1999]: 2026-03-14 00:17:59.831 [INFO][5428] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:18:00.103950 containerd[1999]: 2026-03-14 00:17:59.831 [INFO][5428] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:18:00.103950 containerd[1999]: 2026-03-14 00:17:59.831 [INFO][5428] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-179' Mar 14 00:18:00.103950 containerd[1999]: 2026-03-14 00:17:59.843 [INFO][5428] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.960cfc1dddc0556d6875271400dc6700b76077f9cf4ae817caa46d0e605d39d7" host="ip-172-31-23-179" Mar 14 00:18:00.103950 containerd[1999]: 2026-03-14 00:17:59.861 [INFO][5428] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-23-179" Mar 14 00:18:00.103950 containerd[1999]: 2026-03-14 00:17:59.877 [INFO][5428] ipam/ipam.go 526: Trying affinity for 192.168.20.64/26 host="ip-172-31-23-179" Mar 14 00:18:00.103950 containerd[1999]: 2026-03-14 00:17:59.882 [INFO][5428] ipam/ipam.go 160: Attempting to load block cidr=192.168.20.64/26 host="ip-172-31-23-179" Mar 14 00:18:00.103950 containerd[1999]: 2026-03-14 00:17:59.888 [INFO][5428] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.20.64/26 host="ip-172-31-23-179" Mar 14 00:18:00.103950 containerd[1999]: 2026-03-14 00:17:59.888 [INFO][5428] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.20.64/26 handle="k8s-pod-network.960cfc1dddc0556d6875271400dc6700b76077f9cf4ae817caa46d0e605d39d7" host="ip-172-31-23-179" Mar 14 00:18:00.103950 containerd[1999]: 2026-03-14 00:17:59.893 [INFO][5428] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.960cfc1dddc0556d6875271400dc6700b76077f9cf4ae817caa46d0e605d39d7 Mar 14 00:18:00.103950 containerd[1999]: 2026-03-14 00:17:59.903 [INFO][5428] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.20.64/26 handle="k8s-pod-network.960cfc1dddc0556d6875271400dc6700b76077f9cf4ae817caa46d0e605d39d7" host="ip-172-31-23-179" Mar 14 00:18:00.103950 containerd[1999]: 2026-03-14 00:17:59.990 [INFO][5428] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.20.67/26] block=192.168.20.64/26 
handle="k8s-pod-network.960cfc1dddc0556d6875271400dc6700b76077f9cf4ae817caa46d0e605d39d7" host="ip-172-31-23-179" Mar 14 00:18:00.103950 containerd[1999]: 2026-03-14 00:17:59.990 [INFO][5428] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.20.67/26] handle="k8s-pod-network.960cfc1dddc0556d6875271400dc6700b76077f9cf4ae817caa46d0e605d39d7" host="ip-172-31-23-179" Mar 14 00:18:00.103950 containerd[1999]: 2026-03-14 00:17:59.992 [INFO][5428] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:18:00.103950 containerd[1999]: 2026-03-14 00:17:59.993 [INFO][5428] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.20.67/26] IPv6=[] ContainerID="960cfc1dddc0556d6875271400dc6700b76077f9cf4ae817caa46d0e605d39d7" HandleID="k8s-pod-network.960cfc1dddc0556d6875271400dc6700b76077f9cf4ae817caa46d0e605d39d7" Workload="ip--172--31--23--179-k8s-coredns--7d764666f9--b5j2r-eth0" Mar 14 00:18:00.104958 containerd[1999]: 2026-03-14 00:17:59.999 [INFO][5394] cni-plugin/k8s.go 418: Populated endpoint ContainerID="960cfc1dddc0556d6875271400dc6700b76077f9cf4ae817caa46d0e605d39d7" Namespace="kube-system" Pod="coredns-7d764666f9-b5j2r" WorkloadEndpoint="ip--172--31--23--179-k8s-coredns--7d764666f9--b5j2r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--179-k8s-coredns--7d764666f9--b5j2r-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"e85a1e0f-e7ad-4031-9d89-5b7c43cae302", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 17, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-179", ContainerID:"", Pod:"coredns-7d764666f9-b5j2r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali535ce32ec59", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:18:00.104958 containerd[1999]: 2026-03-14 00:18:00.001 [INFO][5394] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.67/32] ContainerID="960cfc1dddc0556d6875271400dc6700b76077f9cf4ae817caa46d0e605d39d7" Namespace="kube-system" Pod="coredns-7d764666f9-b5j2r" WorkloadEndpoint="ip--172--31--23--179-k8s-coredns--7d764666f9--b5j2r-eth0" Mar 14 00:18:00.104958 containerd[1999]: 2026-03-14 00:18:00.002 [INFO][5394] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali535ce32ec59 ContainerID="960cfc1dddc0556d6875271400dc6700b76077f9cf4ae817caa46d0e605d39d7" Namespace="kube-system" Pod="coredns-7d764666f9-b5j2r" 
WorkloadEndpoint="ip--172--31--23--179-k8s-coredns--7d764666f9--b5j2r-eth0" Mar 14 00:18:00.104958 containerd[1999]: 2026-03-14 00:18:00.021 [INFO][5394] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="960cfc1dddc0556d6875271400dc6700b76077f9cf4ae817caa46d0e605d39d7" Namespace="kube-system" Pod="coredns-7d764666f9-b5j2r" WorkloadEndpoint="ip--172--31--23--179-k8s-coredns--7d764666f9--b5j2r-eth0" Mar 14 00:18:00.104958 containerd[1999]: 2026-03-14 00:18:00.023 [INFO][5394] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="960cfc1dddc0556d6875271400dc6700b76077f9cf4ae817caa46d0e605d39d7" Namespace="kube-system" Pod="coredns-7d764666f9-b5j2r" WorkloadEndpoint="ip--172--31--23--179-k8s-coredns--7d764666f9--b5j2r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--179-k8s-coredns--7d764666f9--b5j2r-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"e85a1e0f-e7ad-4031-9d89-5b7c43cae302", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 17, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-179", ContainerID:"960cfc1dddc0556d6875271400dc6700b76077f9cf4ae817caa46d0e605d39d7", Pod:"coredns-7d764666f9-b5j2r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali535ce32ec59", MAC:"b2:35:e8:70:c4:39", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:18:00.104958 containerd[1999]: 2026-03-14 00:18:00.087 [INFO][5394] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="960cfc1dddc0556d6875271400dc6700b76077f9cf4ae817caa46d0e605d39d7" Namespace="kube-system" Pod="coredns-7d764666f9-b5j2r" WorkloadEndpoint="ip--172--31--23--179-k8s-coredns--7d764666f9--b5j2r-eth0" Mar 14 00:18:00.154678 containerd[1999]: time="2026-03-14T00:18:00.154634012Z" level=info msg="StopPodSandbox for \"d473579dfb30df32df8568cbf22d6e8e98aaa01da94731237b84720e32647db0\"" Mar 14 00:18:00.171248 containerd[1999]: time="2026-03-14T00:18:00.170878842Z" level=info msg="StopPodSandbox for \"5a41a15b9b5fd6cb73ac05c952e2f1cb3ec62d0068cdcfab2e8151e937776bf1\"" Mar 14 00:18:00.178593 containerd[1999]: time="2026-03-14T00:18:00.178437174Z" level=info msg="StopPodSandbox for \"165bfd711e9d089ce82d9f56eb2fe5a1a346629ea9cd475936992f19007a8e66\"" Mar 14 00:18:00.212272 systemd-networkd[1807]: calib62384b8f23: Link UP Mar 14 00:18:00.218224 systemd-networkd[1807]: calib62384b8f23: 
Gained carrier Mar 14 00:18:00.271581 containerd[1999]: 2026-03-14 00:17:59.842 [WARNING][5442] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--179-k8s-calico--kube--controllers--7b6c7c5dc8--27jg8-eth0", GenerateName:"calico-kube-controllers-7b6c7c5dc8-", Namespace:"calico-system", SelfLink:"", UID:"9b3ad80f-d1e3-4c5e-9df2-fb2b5357e120", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 17, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b6c7c5dc8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-179", ContainerID:"0aee146d67e1ecce9ea57fd0a7a80e3d79084f761e8ff6cd81baaa2696fca443", Pod:"calico-kube-controllers-7b6c7c5dc8-27jg8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.20.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali78c80336f12", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:18:00.271581 containerd[1999]: 2026-03-14 00:17:59.844 [INFO][5442] cni-plugin/k8s.go 652: Cleaning up 
netns ContainerID="6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6" Mar 14 00:18:00.271581 containerd[1999]: 2026-03-14 00:17:59.845 [INFO][5442] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6" iface="eth0" netns="" Mar 14 00:18:00.271581 containerd[1999]: 2026-03-14 00:17:59.845 [INFO][5442] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6" Mar 14 00:18:00.271581 containerd[1999]: 2026-03-14 00:17:59.845 [INFO][5442] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6" Mar 14 00:18:00.271581 containerd[1999]: 2026-03-14 00:17:59.939 [INFO][5456] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6" HandleID="k8s-pod-network.6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6" Workload="ip--172--31--23--179-k8s-calico--kube--controllers--7b6c7c5dc8--27jg8-eth0" Mar 14 00:18:00.271581 containerd[1999]: 2026-03-14 00:17:59.940 [INFO][5456] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:18:00.271581 containerd[1999]: 2026-03-14 00:18:00.137 [INFO][5456] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:18:00.271581 containerd[1999]: 2026-03-14 00:18:00.201 [WARNING][5456] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6" HandleID="k8s-pod-network.6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6" Workload="ip--172--31--23--179-k8s-calico--kube--controllers--7b6c7c5dc8--27jg8-eth0" Mar 14 00:18:00.271581 containerd[1999]: 2026-03-14 00:18:00.202 [INFO][5456] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6" HandleID="k8s-pod-network.6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6" Workload="ip--172--31--23--179-k8s-calico--kube--controllers--7b6c7c5dc8--27jg8-eth0" Mar 14 00:18:00.271581 containerd[1999]: 2026-03-14 00:18:00.211 [INFO][5456] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:18:00.271581 containerd[1999]: 2026-03-14 00:18:00.232 [INFO][5442] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6" Mar 14 00:18:00.273366 containerd[1999]: time="2026-03-14T00:18:00.272091117Z" level=info msg="TearDown network for sandbox \"6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6\" successfully" Mar 14 00:18:00.273366 containerd[1999]: time="2026-03-14T00:18:00.272128259Z" level=info msg="StopPodSandbox for \"6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6\" returns successfully" Mar 14 00:18:00.280971 containerd[1999]: 2026-03-14 00:17:59.641 [INFO][5402] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--179-k8s-calico--apiserver--6b7865fd67--6tjs6-eth0 calico-apiserver-6b7865fd67- calico-system 83c05f9a-996d-4297-b2d6-b3ceffdc570f 1061 0 2026-03-14 00:17:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b7865fd67 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-179 calico-apiserver-6b7865fd67-6tjs6 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calib62384b8f23 [] [] }} ContainerID="50ba5385efe210fd2b31c48d2d8fd522a7435c82058087ed3b3160c1997b873f" Namespace="calico-system" Pod="calico-apiserver-6b7865fd67-6tjs6" WorkloadEndpoint="ip--172--31--23--179-k8s-calico--apiserver--6b7865fd67--6tjs6-" Mar 14 00:18:00.280971 containerd[1999]: 2026-03-14 00:17:59.641 [INFO][5402] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="50ba5385efe210fd2b31c48d2d8fd522a7435c82058087ed3b3160c1997b873f" Namespace="calico-system" Pod="calico-apiserver-6b7865fd67-6tjs6" WorkloadEndpoint="ip--172--31--23--179-k8s-calico--apiserver--6b7865fd67--6tjs6-eth0" Mar 14 00:18:00.280971 containerd[1999]: 2026-03-14 00:17:59.839 [INFO][5429] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="50ba5385efe210fd2b31c48d2d8fd522a7435c82058087ed3b3160c1997b873f" HandleID="k8s-pod-network.50ba5385efe210fd2b31c48d2d8fd522a7435c82058087ed3b3160c1997b873f" Workload="ip--172--31--23--179-k8s-calico--apiserver--6b7865fd67--6tjs6-eth0" Mar 14 00:18:00.280971 containerd[1999]: 2026-03-14 00:17:59.871 [INFO][5429] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="50ba5385efe210fd2b31c48d2d8fd522a7435c82058087ed3b3160c1997b873f" HandleID="k8s-pod-network.50ba5385efe210fd2b31c48d2d8fd522a7435c82058087ed3b3160c1997b873f" Workload="ip--172--31--23--179-k8s-calico--apiserver--6b7865fd67--6tjs6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000122580), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-179", "pod":"calico-apiserver-6b7865fd67-6tjs6", "timestamp":"2026-03-14 00:17:59.839161144 +0000 UTC"}, Hostname:"ip-172-31-23-179", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000446580)} Mar 14 00:18:00.280971 containerd[1999]: 2026-03-14 00:17:59.871 [INFO][5429] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:18:00.280971 containerd[1999]: 2026-03-14 00:17:59.990 [INFO][5429] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:18:00.280971 containerd[1999]: 2026-03-14 00:17:59.991 [INFO][5429] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-179' Mar 14 00:18:00.280971 containerd[1999]: 2026-03-14 00:17:59.997 [INFO][5429] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.50ba5385efe210fd2b31c48d2d8fd522a7435c82058087ed3b3160c1997b873f" host="ip-172-31-23-179" Mar 14 00:18:00.280971 containerd[1999]: 2026-03-14 00:18:00.039 [INFO][5429] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-23-179" Mar 14 00:18:00.280971 containerd[1999]: 2026-03-14 00:18:00.065 [INFO][5429] ipam/ipam.go 526: Trying affinity for 192.168.20.64/26 host="ip-172-31-23-179" Mar 14 00:18:00.280971 containerd[1999]: 2026-03-14 00:18:00.082 [INFO][5429] ipam/ipam.go 160: Attempting to load block cidr=192.168.20.64/26 host="ip-172-31-23-179" Mar 14 00:18:00.280971 containerd[1999]: 2026-03-14 00:18:00.096 [INFO][5429] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.20.64/26 host="ip-172-31-23-179" Mar 14 00:18:00.280971 containerd[1999]: 2026-03-14 00:18:00.097 [INFO][5429] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.20.64/26 handle="k8s-pod-network.50ba5385efe210fd2b31c48d2d8fd522a7435c82058087ed3b3160c1997b873f" host="ip-172-31-23-179" Mar 14 00:18:00.280971 containerd[1999]: 2026-03-14 00:18:00.103 [INFO][5429] ipam/ipam.go 1806: Creating new handle: 
k8s-pod-network.50ba5385efe210fd2b31c48d2d8fd522a7435c82058087ed3b3160c1997b873f Mar 14 00:18:00.280971 containerd[1999]: 2026-03-14 00:18:00.119 [INFO][5429] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.20.64/26 handle="k8s-pod-network.50ba5385efe210fd2b31c48d2d8fd522a7435c82058087ed3b3160c1997b873f" host="ip-172-31-23-179" Mar 14 00:18:00.280971 containerd[1999]: 2026-03-14 00:18:00.134 [INFO][5429] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.20.68/26] block=192.168.20.64/26 handle="k8s-pod-network.50ba5385efe210fd2b31c48d2d8fd522a7435c82058087ed3b3160c1997b873f" host="ip-172-31-23-179" Mar 14 00:18:00.280971 containerd[1999]: 2026-03-14 00:18:00.135 [INFO][5429] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.20.68/26] handle="k8s-pod-network.50ba5385efe210fd2b31c48d2d8fd522a7435c82058087ed3b3160c1997b873f" host="ip-172-31-23-179" Mar 14 00:18:00.280971 containerd[1999]: 2026-03-14 00:18:00.135 [INFO][5429] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 14 00:18:00.280971 containerd[1999]: 2026-03-14 00:18:00.135 [INFO][5429] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.20.68/26] IPv6=[] ContainerID="50ba5385efe210fd2b31c48d2d8fd522a7435c82058087ed3b3160c1997b873f" HandleID="k8s-pod-network.50ba5385efe210fd2b31c48d2d8fd522a7435c82058087ed3b3160c1997b873f" Workload="ip--172--31--23--179-k8s-calico--apiserver--6b7865fd67--6tjs6-eth0" Mar 14 00:18:00.283739 containerd[1999]: 2026-03-14 00:18:00.157 [INFO][5402] cni-plugin/k8s.go 418: Populated endpoint ContainerID="50ba5385efe210fd2b31c48d2d8fd522a7435c82058087ed3b3160c1997b873f" Namespace="calico-system" Pod="calico-apiserver-6b7865fd67-6tjs6" WorkloadEndpoint="ip--172--31--23--179-k8s-calico--apiserver--6b7865fd67--6tjs6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--179-k8s-calico--apiserver--6b7865fd67--6tjs6-eth0", GenerateName:"calico-apiserver-6b7865fd67-", Namespace:"calico-system", SelfLink:"", UID:"83c05f9a-996d-4297-b2d6-b3ceffdc570f", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 17, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b7865fd67", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-179", ContainerID:"", Pod:"calico-apiserver-6b7865fd67-6tjs6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.68/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib62384b8f23", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:18:00.283739 containerd[1999]: 2026-03-14 00:18:00.157 [INFO][5402] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.68/32] ContainerID="50ba5385efe210fd2b31c48d2d8fd522a7435c82058087ed3b3160c1997b873f" Namespace="calico-system" Pod="calico-apiserver-6b7865fd67-6tjs6" WorkloadEndpoint="ip--172--31--23--179-k8s-calico--apiserver--6b7865fd67--6tjs6-eth0" Mar 14 00:18:00.283739 containerd[1999]: 2026-03-14 00:18:00.157 [INFO][5402] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib62384b8f23 ContainerID="50ba5385efe210fd2b31c48d2d8fd522a7435c82058087ed3b3160c1997b873f" Namespace="calico-system" Pod="calico-apiserver-6b7865fd67-6tjs6" WorkloadEndpoint="ip--172--31--23--179-k8s-calico--apiserver--6b7865fd67--6tjs6-eth0" Mar 14 00:18:00.283739 containerd[1999]: 2026-03-14 00:18:00.220 [INFO][5402] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="50ba5385efe210fd2b31c48d2d8fd522a7435c82058087ed3b3160c1997b873f" Namespace="calico-system" Pod="calico-apiserver-6b7865fd67-6tjs6" WorkloadEndpoint="ip--172--31--23--179-k8s-calico--apiserver--6b7865fd67--6tjs6-eth0" Mar 14 00:18:00.283739 containerd[1999]: 2026-03-14 00:18:00.227 [INFO][5402] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="50ba5385efe210fd2b31c48d2d8fd522a7435c82058087ed3b3160c1997b873f" Namespace="calico-system" Pod="calico-apiserver-6b7865fd67-6tjs6" WorkloadEndpoint="ip--172--31--23--179-k8s-calico--apiserver--6b7865fd67--6tjs6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--179-k8s-calico--apiserver--6b7865fd67--6tjs6-eth0", GenerateName:"calico-apiserver-6b7865fd67-", Namespace:"calico-system", SelfLink:"", UID:"83c05f9a-996d-4297-b2d6-b3ceffdc570f", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 17, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b7865fd67", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-179", ContainerID:"50ba5385efe210fd2b31c48d2d8fd522a7435c82058087ed3b3160c1997b873f", Pod:"calico-apiserver-6b7865fd67-6tjs6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib62384b8f23", MAC:"ae:34:1b:39:86:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:18:00.283739 containerd[1999]: 2026-03-14 00:18:00.266 [INFO][5402] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="50ba5385efe210fd2b31c48d2d8fd522a7435c82058087ed3b3160c1997b873f" Namespace="calico-system" Pod="calico-apiserver-6b7865fd67-6tjs6" WorkloadEndpoint="ip--172--31--23--179-k8s-calico--apiserver--6b7865fd67--6tjs6-eth0" Mar 14 00:18:00.301143 containerd[1999]: time="2026-03-14T00:18:00.300933211Z" level=info msg="RemovePodSandbox for 
\"6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6\"" Mar 14 00:18:00.301952 containerd[1999]: time="2026-03-14T00:18:00.301884037Z" level=info msg="Forcibly stopping sandbox \"6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6\"" Mar 14 00:18:00.392822 containerd[1999]: time="2026-03-14T00:18:00.379886920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:18:00.392822 containerd[1999]: time="2026-03-14T00:18:00.379978369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:18:00.392822 containerd[1999]: time="2026-03-14T00:18:00.380004107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:18:00.392822 containerd[1999]: time="2026-03-14T00:18:00.380113996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:18:00.520393 containerd[1999]: time="2026-03-14T00:18:00.519911418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:18:00.520393 containerd[1999]: time="2026-03-14T00:18:00.520029817Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:18:00.520393 containerd[1999]: time="2026-03-14T00:18:00.520048297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:18:00.520393 containerd[1999]: time="2026-03-14T00:18:00.520185009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:18:00.576061 systemd[1]: Started cri-containerd-960cfc1dddc0556d6875271400dc6700b76077f9cf4ae817caa46d0e605d39d7.scope - libcontainer container 960cfc1dddc0556d6875271400dc6700b76077f9cf4ae817caa46d0e605d39d7. Mar 14 00:18:00.687590 systemd[1]: Started cri-containerd-50ba5385efe210fd2b31c48d2d8fd522a7435c82058087ed3b3160c1997b873f.scope - libcontainer container 50ba5385efe210fd2b31c48d2d8fd522a7435c82058087ed3b3160c1997b873f. Mar 14 00:18:00.927628 sshd[5302]: pam_unix(sshd:session): session closed for user core Mar 14 00:18:00.938825 containerd[1999]: time="2026-03-14T00:18:00.938443290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-b5j2r,Uid:e85a1e0f-e7ad-4031-9d89-5b7c43cae302,Namespace:kube-system,Attempt:1,} returns sandbox id \"960cfc1dddc0556d6875271400dc6700b76077f9cf4ae817caa46d0e605d39d7\"" Mar 14 00:18:00.941875 systemd[1]: sshd@10-172.31.23.179:22-68.220.241.50:49246.service: Deactivated successfully. Mar 14 00:18:00.947316 systemd[1]: session-11.scope: Deactivated successfully. Mar 14 00:18:00.960196 systemd-logind[1962]: Session 11 logged out. Waiting for processes to exit. Mar 14 00:18:00.966203 systemd-logind[1962]: Removed session 11. Mar 14 00:18:00.980896 containerd[1999]: time="2026-03-14T00:18:00.980661291Z" level=info msg="CreateContainer within sandbox \"960cfc1dddc0556d6875271400dc6700b76077f9cf4ae817caa46d0e605d39d7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 14 00:18:01.037631 containerd[1999]: 2026-03-14 00:18:00.740 [INFO][5509] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d473579dfb30df32df8568cbf22d6e8e98aaa01da94731237b84720e32647db0" Mar 14 00:18:01.037631 containerd[1999]: 2026-03-14 00:18:00.741 [INFO][5509] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="d473579dfb30df32df8568cbf22d6e8e98aaa01da94731237b84720e32647db0" iface="eth0" netns="/var/run/netns/cni-71e817e5-5b07-2434-1f36-c64109424192" Mar 14 00:18:01.037631 containerd[1999]: 2026-03-14 00:18:00.741 [INFO][5509] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d473579dfb30df32df8568cbf22d6e8e98aaa01da94731237b84720e32647db0" iface="eth0" netns="/var/run/netns/cni-71e817e5-5b07-2434-1f36-c64109424192" Mar 14 00:18:01.037631 containerd[1999]: 2026-03-14 00:18:00.742 [INFO][5509] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d473579dfb30df32df8568cbf22d6e8e98aaa01da94731237b84720e32647db0" iface="eth0" netns="/var/run/netns/cni-71e817e5-5b07-2434-1f36-c64109424192" Mar 14 00:18:01.037631 containerd[1999]: 2026-03-14 00:18:00.742 [INFO][5509] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d473579dfb30df32df8568cbf22d6e8e98aaa01da94731237b84720e32647db0" Mar 14 00:18:01.037631 containerd[1999]: 2026-03-14 00:18:00.742 [INFO][5509] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d473579dfb30df32df8568cbf22d6e8e98aaa01da94731237b84720e32647db0" Mar 14 00:18:01.037631 containerd[1999]: 2026-03-14 00:18:00.934 [INFO][5614] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d473579dfb30df32df8568cbf22d6e8e98aaa01da94731237b84720e32647db0" HandleID="k8s-pod-network.d473579dfb30df32df8568cbf22d6e8e98aaa01da94731237b84720e32647db0" Workload="ip--172--31--23--179-k8s-calico--apiserver--6b7865fd67--f9cpm-eth0" Mar 14 00:18:01.037631 containerd[1999]: 2026-03-14 00:18:00.941 [INFO][5614] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:18:01.037631 containerd[1999]: 2026-03-14 00:18:00.941 [INFO][5614] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:18:01.037631 containerd[1999]: 2026-03-14 00:18:00.985 [WARNING][5614] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="d473579dfb30df32df8568cbf22d6e8e98aaa01da94731237b84720e32647db0" HandleID="k8s-pod-network.d473579dfb30df32df8568cbf22d6e8e98aaa01da94731237b84720e32647db0" Workload="ip--172--31--23--179-k8s-calico--apiserver--6b7865fd67--f9cpm-eth0" Mar 14 00:18:01.037631 containerd[1999]: 2026-03-14 00:18:00.985 [INFO][5614] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d473579dfb30df32df8568cbf22d6e8e98aaa01da94731237b84720e32647db0" HandleID="k8s-pod-network.d473579dfb30df32df8568cbf22d6e8e98aaa01da94731237b84720e32647db0" Workload="ip--172--31--23--179-k8s-calico--apiserver--6b7865fd67--f9cpm-eth0" Mar 14 00:18:01.037631 containerd[1999]: 2026-03-14 00:18:01.001 [INFO][5614] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:18:01.037631 containerd[1999]: 2026-03-14 00:18:01.019 [INFO][5509] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d473579dfb30df32df8568cbf22d6e8e98aaa01da94731237b84720e32647db0" Mar 14 00:18:01.042540 containerd[1999]: time="2026-03-14T00:18:01.038800303Z" level=info msg="TearDown network for sandbox \"d473579dfb30df32df8568cbf22d6e8e98aaa01da94731237b84720e32647db0\" successfully" Mar 14 00:18:01.042540 containerd[1999]: time="2026-03-14T00:18:01.038838293Z" level=info msg="StopPodSandbox for \"d473579dfb30df32df8568cbf22d6e8e98aaa01da94731237b84720e32647db0\" returns successfully" Mar 14 00:18:01.053550 systemd[1]: run-netns-cni\x2d71e817e5\x2d5b07\x2d2434\x2d1f36\x2dc64109424192.mount: Deactivated successfully. 
Mar 14 00:18:01.054597 containerd[1999]: time="2026-03-14T00:18:01.054417506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b7865fd67-f9cpm,Uid:254c54ac-8310-4004-8ce4-5125625b2db5,Namespace:calico-system,Attempt:1,}" Mar 14 00:18:01.071046 systemd-networkd[1807]: cali535ce32ec59: Gained IPv6LL Mar 14 00:18:01.083067 containerd[1999]: 2026-03-14 00:18:00.879 [WARNING][5565] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--179-k8s-calico--kube--controllers--7b6c7c5dc8--27jg8-eth0", GenerateName:"calico-kube-controllers-7b6c7c5dc8-", Namespace:"calico-system", SelfLink:"", UID:"9b3ad80f-d1e3-4c5e-9df2-fb2b5357e120", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 17, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b6c7c5dc8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-179", ContainerID:"0aee146d67e1ecce9ea57fd0a7a80e3d79084f761e8ff6cd81baaa2696fca443", Pod:"calico-kube-controllers-7b6c7c5dc8-27jg8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.20.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali78c80336f12", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:18:01.083067 containerd[1999]: 2026-03-14 00:18:00.880 [INFO][5565] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6" Mar 14 00:18:01.083067 containerd[1999]: 2026-03-14 00:18:00.880 [INFO][5565] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6" iface="eth0" netns="" Mar 14 00:18:01.083067 containerd[1999]: 2026-03-14 00:18:00.881 [INFO][5565] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6" Mar 14 00:18:01.083067 containerd[1999]: 2026-03-14 00:18:00.881 [INFO][5565] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6" Mar 14 00:18:01.083067 containerd[1999]: 2026-03-14 00:18:01.018 [INFO][5645] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6" HandleID="k8s-pod-network.6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6" Workload="ip--172--31--23--179-k8s-calico--kube--controllers--7b6c7c5dc8--27jg8-eth0" Mar 14 00:18:01.083067 containerd[1999]: 2026-03-14 00:18:01.019 [INFO][5645] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:18:01.083067 containerd[1999]: 2026-03-14 00:18:01.020 [INFO][5645] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:18:01.083067 containerd[1999]: 2026-03-14 00:18:01.068 [WARNING][5645] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6" HandleID="k8s-pod-network.6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6" Workload="ip--172--31--23--179-k8s-calico--kube--controllers--7b6c7c5dc8--27jg8-eth0" Mar 14 00:18:01.083067 containerd[1999]: 2026-03-14 00:18:01.068 [INFO][5645] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6" HandleID="k8s-pod-network.6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6" Workload="ip--172--31--23--179-k8s-calico--kube--controllers--7b6c7c5dc8--27jg8-eth0" Mar 14 00:18:01.083067 containerd[1999]: 2026-03-14 00:18:01.076 [INFO][5645] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:18:01.083067 containerd[1999]: 2026-03-14 00:18:01.080 [INFO][5565] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6" Mar 14 00:18:01.083067 containerd[1999]: time="2026-03-14T00:18:01.082001082Z" level=info msg="TearDown network for sandbox \"6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6\" successfully" Mar 14 00:18:01.086198 containerd[1999]: time="2026-03-14T00:18:01.086027183Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:18:01.086198 containerd[1999]: time="2026-03-14T00:18:01.086099177Z" level=info msg="RemovePodSandbox \"6db2971eb82a7fe7b1ebb1e5bbe552954b68fa5a8417d2ef06462c5ff8d74bb6\" returns successfully" Mar 14 00:18:01.113612 containerd[1999]: 2026-03-14 00:18:00.877 [INFO][5546] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="165bfd711e9d089ce82d9f56eb2fe5a1a346629ea9cd475936992f19007a8e66" Mar 14 00:18:01.113612 containerd[1999]: 2026-03-14 00:18:00.880 [INFO][5546] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="165bfd711e9d089ce82d9f56eb2fe5a1a346629ea9cd475936992f19007a8e66" iface="eth0" netns="/var/run/netns/cni-8f699448-5b23-4328-c1c9-de174303937b" Mar 14 00:18:01.113612 containerd[1999]: 2026-03-14 00:18:00.881 [INFO][5546] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="165bfd711e9d089ce82d9f56eb2fe5a1a346629ea9cd475936992f19007a8e66" iface="eth0" netns="/var/run/netns/cni-8f699448-5b23-4328-c1c9-de174303937b" Mar 14 00:18:01.113612 containerd[1999]: 2026-03-14 00:18:00.881 [INFO][5546] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="165bfd711e9d089ce82d9f56eb2fe5a1a346629ea9cd475936992f19007a8e66" iface="eth0" netns="/var/run/netns/cni-8f699448-5b23-4328-c1c9-de174303937b" Mar 14 00:18:01.113612 containerd[1999]: 2026-03-14 00:18:00.881 [INFO][5546] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="165bfd711e9d089ce82d9f56eb2fe5a1a346629ea9cd475936992f19007a8e66" Mar 14 00:18:01.113612 containerd[1999]: 2026-03-14 00:18:00.881 [INFO][5546] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="165bfd711e9d089ce82d9f56eb2fe5a1a346629ea9cd475936992f19007a8e66" Mar 14 00:18:01.113612 containerd[1999]: 2026-03-14 00:18:01.062 [INFO][5644] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="165bfd711e9d089ce82d9f56eb2fe5a1a346629ea9cd475936992f19007a8e66" HandleID="k8s-pod-network.165bfd711e9d089ce82d9f56eb2fe5a1a346629ea9cd475936992f19007a8e66" Workload="ip--172--31--23--179-k8s-coredns--7d764666f9--gmvx8-eth0" Mar 14 00:18:01.113612 containerd[1999]: 2026-03-14 00:18:01.062 [INFO][5644] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:18:01.113612 containerd[1999]: 2026-03-14 00:18:01.076 [INFO][5644] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:18:01.113612 containerd[1999]: 2026-03-14 00:18:01.095 [WARNING][5644] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="165bfd711e9d089ce82d9f56eb2fe5a1a346629ea9cd475936992f19007a8e66" HandleID="k8s-pod-network.165bfd711e9d089ce82d9f56eb2fe5a1a346629ea9cd475936992f19007a8e66" Workload="ip--172--31--23--179-k8s-coredns--7d764666f9--gmvx8-eth0" Mar 14 00:18:01.113612 containerd[1999]: 2026-03-14 00:18:01.095 [INFO][5644] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="165bfd711e9d089ce82d9f56eb2fe5a1a346629ea9cd475936992f19007a8e66" HandleID="k8s-pod-network.165bfd711e9d089ce82d9f56eb2fe5a1a346629ea9cd475936992f19007a8e66" Workload="ip--172--31--23--179-k8s-coredns--7d764666f9--gmvx8-eth0" Mar 14 00:18:01.113612 containerd[1999]: 2026-03-14 00:18:01.098 [INFO][5644] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:18:01.113612 containerd[1999]: 2026-03-14 00:18:01.102 [INFO][5546] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="165bfd711e9d089ce82d9f56eb2fe5a1a346629ea9cd475936992f19007a8e66" Mar 14 00:18:01.116455 containerd[1999]: time="2026-03-14T00:18:01.115582276Z" level=info msg="TearDown network for sandbox \"165bfd711e9d089ce82d9f56eb2fe5a1a346629ea9cd475936992f19007a8e66\" successfully" Mar 14 00:18:01.116858 containerd[1999]: time="2026-03-14T00:18:01.116818574Z" level=info msg="StopPodSandbox for \"165bfd711e9d089ce82d9f56eb2fe5a1a346629ea9cd475936992f19007a8e66\" returns successfully" Mar 14 00:18:01.123569 containerd[1999]: time="2026-03-14T00:18:01.123049531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-gmvx8,Uid:d60e0a0f-100e-4b16-8d9e-155aae6fac41,Namespace:kube-system,Attempt:1,}" Mar 14 00:18:01.175219 containerd[1999]: 2026-03-14 00:18:00.764 [INFO][5519] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5a41a15b9b5fd6cb73ac05c952e2f1cb3ec62d0068cdcfab2e8151e937776bf1" Mar 14 00:18:01.175219 containerd[1999]: 2026-03-14 00:18:00.766 [INFO][5519] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="5a41a15b9b5fd6cb73ac05c952e2f1cb3ec62d0068cdcfab2e8151e937776bf1" iface="eth0" netns="/var/run/netns/cni-0f80b210-b168-2704-c386-6fa94857ba10" Mar 14 00:18:01.175219 containerd[1999]: 2026-03-14 00:18:00.766 [INFO][5519] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5a41a15b9b5fd6cb73ac05c952e2f1cb3ec62d0068cdcfab2e8151e937776bf1" iface="eth0" netns="/var/run/netns/cni-0f80b210-b168-2704-c386-6fa94857ba10" Mar 14 00:18:01.175219 containerd[1999]: 2026-03-14 00:18:00.767 [INFO][5519] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5a41a15b9b5fd6cb73ac05c952e2f1cb3ec62d0068cdcfab2e8151e937776bf1" iface="eth0" netns="/var/run/netns/cni-0f80b210-b168-2704-c386-6fa94857ba10" Mar 14 00:18:01.175219 containerd[1999]: 2026-03-14 00:18:00.767 [INFO][5519] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5a41a15b9b5fd6cb73ac05c952e2f1cb3ec62d0068cdcfab2e8151e937776bf1" Mar 14 00:18:01.175219 containerd[1999]: 2026-03-14 00:18:00.767 [INFO][5519] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5a41a15b9b5fd6cb73ac05c952e2f1cb3ec62d0068cdcfab2e8151e937776bf1" Mar 14 00:18:01.175219 containerd[1999]: 2026-03-14 00:18:01.067 [INFO][5620] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5a41a15b9b5fd6cb73ac05c952e2f1cb3ec62d0068cdcfab2e8151e937776bf1" HandleID="k8s-pod-network.5a41a15b9b5fd6cb73ac05c952e2f1cb3ec62d0068cdcfab2e8151e937776bf1" Workload="ip--172--31--23--179-k8s-csi--node--driver--rwsn2-eth0" Mar 14 00:18:01.175219 containerd[1999]: 2026-03-14 00:18:01.067 [INFO][5620] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:18:01.175219 containerd[1999]: 2026-03-14 00:18:01.099 [INFO][5620] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:18:01.175219 containerd[1999]: 2026-03-14 00:18:01.119 [WARNING][5620] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5a41a15b9b5fd6cb73ac05c952e2f1cb3ec62d0068cdcfab2e8151e937776bf1" HandleID="k8s-pod-network.5a41a15b9b5fd6cb73ac05c952e2f1cb3ec62d0068cdcfab2e8151e937776bf1" Workload="ip--172--31--23--179-k8s-csi--node--driver--rwsn2-eth0" Mar 14 00:18:01.175219 containerd[1999]: 2026-03-14 00:18:01.119 [INFO][5620] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5a41a15b9b5fd6cb73ac05c952e2f1cb3ec62d0068cdcfab2e8151e937776bf1" HandleID="k8s-pod-network.5a41a15b9b5fd6cb73ac05c952e2f1cb3ec62d0068cdcfab2e8151e937776bf1" Workload="ip--172--31--23--179-k8s-csi--node--driver--rwsn2-eth0" Mar 14 00:18:01.175219 containerd[1999]: 2026-03-14 00:18:01.125 [INFO][5620] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:18:01.175219 containerd[1999]: 2026-03-14 00:18:01.154 [INFO][5519] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5a41a15b9b5fd6cb73ac05c952e2f1cb3ec62d0068cdcfab2e8151e937776bf1" Mar 14 00:18:01.177052 containerd[1999]: time="2026-03-14T00:18:01.175764369Z" level=info msg="TearDown network for sandbox \"5a41a15b9b5fd6cb73ac05c952e2f1cb3ec62d0068cdcfab2e8151e937776bf1\" successfully" Mar 14 00:18:01.177052 containerd[1999]: time="2026-03-14T00:18:01.175798714Z" level=info msg="StopPodSandbox for \"5a41a15b9b5fd6cb73ac05c952e2f1cb3ec62d0068cdcfab2e8151e937776bf1\" returns successfully" Mar 14 00:18:01.181573 containerd[1999]: time="2026-03-14T00:18:01.180296000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rwsn2,Uid:69ad5735-6879-4268-98e4-4112168b29a6,Namespace:calico-system,Attempt:1,}" Mar 14 00:18:01.206817 containerd[1999]: time="2026-03-14T00:18:01.206744157Z" level=info msg="CreateContainer within sandbox \"960cfc1dddc0556d6875271400dc6700b76077f9cf4ae817caa46d0e605d39d7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3384af840e57d15b5691fe42c22ef89c8a70a7044e03f48aaa7c8395a51c046b\"" Mar 14 00:18:01.211071 containerd[1999]: 
time="2026-03-14T00:18:01.210721747Z" level=info msg="StartContainer for \"3384af840e57d15b5691fe42c22ef89c8a70a7044e03f48aaa7c8395a51c046b\"" Mar 14 00:18:01.235272 containerd[1999]: time="2026-03-14T00:18:01.235132097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b7865fd67-6tjs6,Uid:83c05f9a-996d-4297-b2d6-b3ceffdc570f,Namespace:calico-system,Attempt:1,} returns sandbox id \"50ba5385efe210fd2b31c48d2d8fd522a7435c82058087ed3b3160c1997b873f\"" Mar 14 00:18:01.358600 systemd[1]: Started cri-containerd-3384af840e57d15b5691fe42c22ef89c8a70a7044e03f48aaa7c8395a51c046b.scope - libcontainer container 3384af840e57d15b5691fe42c22ef89c8a70a7044e03f48aaa7c8395a51c046b. Mar 14 00:18:01.441288 systemd[1]: run-netns-cni\x2d0f80b210\x2db168\x2d2704\x2dc386\x2d6fa94857ba10.mount: Deactivated successfully. Mar 14 00:18:01.442078 systemd[1]: run-netns-cni\x2d8f699448\x2d5b23\x2d4328\x2dc1c9\x2dde174303937b.mount: Deactivated successfully. Mar 14 00:18:01.487829 containerd[1999]: time="2026-03-14T00:18:01.487622236Z" level=info msg="StartContainer for \"3384af840e57d15b5691fe42c22ef89c8a70a7044e03f48aaa7c8395a51c046b\" returns successfully" Mar 14 00:18:01.520223 systemd-networkd[1807]: calib62384b8f23: Gained IPv6LL Mar 14 00:18:01.668289 kubelet[3206]: I0314 00:18:01.668220 3206 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-b5j2r" podStartSLOduration=57.668202506 podStartE2EDuration="57.668202506s" podCreationTimestamp="2026-03-14 00:17:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:18:01.666603174 +0000 UTC m=+62.734026396" watchObservedRunningTime="2026-03-14 00:18:01.668202506 +0000 UTC m=+62.735625728" Mar 14 00:18:01.694135 systemd-networkd[1807]: caliebd05a362e7: Link UP Mar 14 00:18:01.710894 systemd-networkd[1807]: caliebd05a362e7: Gained carrier Mar 14 00:18:01.760620 
containerd[1999]: 2026-03-14 00:18:01.392 [INFO][5688] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--179-k8s-coredns--7d764666f9--gmvx8-eth0 coredns-7d764666f9- kube-system d60e0a0f-100e-4b16-8d9e-155aae6fac41 1078 0 2026-03-14 00:17:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-179 coredns-7d764666f9-gmvx8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliebd05a362e7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="07c3fc53aed6bb08e44e7ac0024d50ed81fbbc34a1e8ef743d263a76dac5f610" Namespace="kube-system" Pod="coredns-7d764666f9-gmvx8" WorkloadEndpoint="ip--172--31--23--179-k8s-coredns--7d764666f9--gmvx8-" Mar 14 00:18:01.760620 containerd[1999]: 2026-03-14 00:18:01.392 [INFO][5688] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="07c3fc53aed6bb08e44e7ac0024d50ed81fbbc34a1e8ef743d263a76dac5f610" Namespace="kube-system" Pod="coredns-7d764666f9-gmvx8" WorkloadEndpoint="ip--172--31--23--179-k8s-coredns--7d764666f9--gmvx8-eth0" Mar 14 00:18:01.760620 containerd[1999]: 2026-03-14 00:18:01.503 [INFO][5749] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="07c3fc53aed6bb08e44e7ac0024d50ed81fbbc34a1e8ef743d263a76dac5f610" HandleID="k8s-pod-network.07c3fc53aed6bb08e44e7ac0024d50ed81fbbc34a1e8ef743d263a76dac5f610" Workload="ip--172--31--23--179-k8s-coredns--7d764666f9--gmvx8-eth0" Mar 14 00:18:01.760620 containerd[1999]: 2026-03-14 00:18:01.521 [INFO][5749] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="07c3fc53aed6bb08e44e7ac0024d50ed81fbbc34a1e8ef743d263a76dac5f610" HandleID="k8s-pod-network.07c3fc53aed6bb08e44e7ac0024d50ed81fbbc34a1e8ef743d263a76dac5f610" 
Workload="ip--172--31--23--179-k8s-coredns--7d764666f9--gmvx8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde80), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-179", "pod":"coredns-7d764666f9-gmvx8", "timestamp":"2026-03-14 00:18:01.503258388 +0000 UTC"}, Hostname:"ip-172-31-23-179", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002eac60)} Mar 14 00:18:01.760620 containerd[1999]: 2026-03-14 00:18:01.522 [INFO][5749] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:18:01.760620 containerd[1999]: 2026-03-14 00:18:01.522 [INFO][5749] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:18:01.760620 containerd[1999]: 2026-03-14 00:18:01.522 [INFO][5749] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-179' Mar 14 00:18:01.760620 containerd[1999]: 2026-03-14 00:18:01.536 [INFO][5749] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.07c3fc53aed6bb08e44e7ac0024d50ed81fbbc34a1e8ef743d263a76dac5f610" host="ip-172-31-23-179" Mar 14 00:18:01.760620 containerd[1999]: 2026-03-14 00:18:01.552 [INFO][5749] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-23-179" Mar 14 00:18:01.760620 containerd[1999]: 2026-03-14 00:18:01.567 [INFO][5749] ipam/ipam.go 526: Trying affinity for 192.168.20.64/26 host="ip-172-31-23-179" Mar 14 00:18:01.760620 containerd[1999]: 2026-03-14 00:18:01.574 [INFO][5749] ipam/ipam.go 160: Attempting to load block cidr=192.168.20.64/26 host="ip-172-31-23-179" Mar 14 00:18:01.760620 containerd[1999]: 2026-03-14 00:18:01.584 [INFO][5749] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.20.64/26 host="ip-172-31-23-179" Mar 14 00:18:01.760620 containerd[1999]: 2026-03-14 
00:18:01.584 [INFO][5749] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.20.64/26 handle="k8s-pod-network.07c3fc53aed6bb08e44e7ac0024d50ed81fbbc34a1e8ef743d263a76dac5f610" host="ip-172-31-23-179" Mar 14 00:18:01.760620 containerd[1999]: 2026-03-14 00:18:01.588 [INFO][5749] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.07c3fc53aed6bb08e44e7ac0024d50ed81fbbc34a1e8ef743d263a76dac5f610 Mar 14 00:18:01.760620 containerd[1999]: 2026-03-14 00:18:01.616 [INFO][5749] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.20.64/26 handle="k8s-pod-network.07c3fc53aed6bb08e44e7ac0024d50ed81fbbc34a1e8ef743d263a76dac5f610" host="ip-172-31-23-179" Mar 14 00:18:01.760620 containerd[1999]: 2026-03-14 00:18:01.659 [INFO][5749] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.20.69/26] block=192.168.20.64/26 handle="k8s-pod-network.07c3fc53aed6bb08e44e7ac0024d50ed81fbbc34a1e8ef743d263a76dac5f610" host="ip-172-31-23-179" Mar 14 00:18:01.760620 containerd[1999]: 2026-03-14 00:18:01.659 [INFO][5749] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.20.69/26] handle="k8s-pod-network.07c3fc53aed6bb08e44e7ac0024d50ed81fbbc34a1e8ef743d263a76dac5f610" host="ip-172-31-23-179" Mar 14 00:18:01.760620 containerd[1999]: 2026-03-14 00:18:01.659 [INFO][5749] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 14 00:18:01.760620 containerd[1999]: 2026-03-14 00:18:01.659 [INFO][5749] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.20.69/26] IPv6=[] ContainerID="07c3fc53aed6bb08e44e7ac0024d50ed81fbbc34a1e8ef743d263a76dac5f610" HandleID="k8s-pod-network.07c3fc53aed6bb08e44e7ac0024d50ed81fbbc34a1e8ef743d263a76dac5f610" Workload="ip--172--31--23--179-k8s-coredns--7d764666f9--gmvx8-eth0" Mar 14 00:18:01.761822 containerd[1999]: 2026-03-14 00:18:01.676 [INFO][5688] cni-plugin/k8s.go 418: Populated endpoint ContainerID="07c3fc53aed6bb08e44e7ac0024d50ed81fbbc34a1e8ef743d263a76dac5f610" Namespace="kube-system" Pod="coredns-7d764666f9-gmvx8" WorkloadEndpoint="ip--172--31--23--179-k8s-coredns--7d764666f9--gmvx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--179-k8s-coredns--7d764666f9--gmvx8-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"d60e0a0f-100e-4b16-8d9e-155aae6fac41", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 17, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-179", ContainerID:"", Pod:"coredns-7d764666f9-gmvx8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliebd05a362e7", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:18:01.761822 containerd[1999]: 2026-03-14 00:18:01.677 [INFO][5688] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.69/32] ContainerID="07c3fc53aed6bb08e44e7ac0024d50ed81fbbc34a1e8ef743d263a76dac5f610" Namespace="kube-system" Pod="coredns-7d764666f9-gmvx8" WorkloadEndpoint="ip--172--31--23--179-k8s-coredns--7d764666f9--gmvx8-eth0" Mar 14 00:18:01.761822 containerd[1999]: 2026-03-14 00:18:01.677 [INFO][5688] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliebd05a362e7 ContainerID="07c3fc53aed6bb08e44e7ac0024d50ed81fbbc34a1e8ef743d263a76dac5f610" Namespace="kube-system" Pod="coredns-7d764666f9-gmvx8" WorkloadEndpoint="ip--172--31--23--179-k8s-coredns--7d764666f9--gmvx8-eth0" Mar 14 00:18:01.761822 containerd[1999]: 2026-03-14 00:18:01.722 [INFO][5688] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="07c3fc53aed6bb08e44e7ac0024d50ed81fbbc34a1e8ef743d263a76dac5f610" Namespace="kube-system" Pod="coredns-7d764666f9-gmvx8" WorkloadEndpoint="ip--172--31--23--179-k8s-coredns--7d764666f9--gmvx8-eth0" Mar 14 00:18:01.761822 containerd[1999]: 2026-03-14 00:18:01.723 [INFO][5688] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="07c3fc53aed6bb08e44e7ac0024d50ed81fbbc34a1e8ef743d263a76dac5f610" Namespace="kube-system" Pod="coredns-7d764666f9-gmvx8" WorkloadEndpoint="ip--172--31--23--179-k8s-coredns--7d764666f9--gmvx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--179-k8s-coredns--7d764666f9--gmvx8-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"d60e0a0f-100e-4b16-8d9e-155aae6fac41", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 17, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-179", ContainerID:"07c3fc53aed6bb08e44e7ac0024d50ed81fbbc34a1e8ef743d263a76dac5f610", Pod:"coredns-7d764666f9-gmvx8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliebd05a362e7", MAC:"f6:6a:fd:7c:74:86", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:18:01.761822 containerd[1999]: 2026-03-14 00:18:01.751 [INFO][5688] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="07c3fc53aed6bb08e44e7ac0024d50ed81fbbc34a1e8ef743d263a76dac5f610" Namespace="kube-system" Pod="coredns-7d764666f9-gmvx8" WorkloadEndpoint="ip--172--31--23--179-k8s-coredns--7d764666f9--gmvx8-eth0" Mar 14 00:18:02.148644 containerd[1999]: time="2026-03-14T00:18:02.148360673Z" level=info msg="StopPodSandbox for \"fd0dd3a729dbe1c9ed8dc6b92cae51bed81b06adf6dce9b0b6ae38fc6b80d0e8\"" Mar 14 00:18:02.184962 systemd-networkd[1807]: cali19bfee13a71: Link UP Mar 14 00:18:02.253681 systemd-networkd[1807]: cali19bfee13a71: Gained carrier Mar 14 00:18:02.391259 containerd[1999]: time="2026-03-14T00:18:02.380884333Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:18:02.391259 containerd[1999]: time="2026-03-14T00:18:02.380964504Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:18:02.391259 containerd[1999]: time="2026-03-14T00:18:02.380986344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:18:02.391259 containerd[1999]: time="2026-03-14T00:18:02.381101990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:18:02.446390 containerd[1999]: 2026-03-14 00:18:01.267 [INFO][5675] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--179-k8s-calico--apiserver--6b7865fd67--f9cpm-eth0 calico-apiserver-6b7865fd67- calico-system 254c54ac-8310-4004-8ce4-5125625b2db5 1075 0 2026-03-14 00:17:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b7865fd67 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-179 calico-apiserver-6b7865fd67-f9cpm eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali19bfee13a71 [] [] }} ContainerID="16453f3dca06007030e018f02e19ec51262eb6aa15848bbf179e39a8bcf8c3e6" Namespace="calico-system" Pod="calico-apiserver-6b7865fd67-f9cpm" WorkloadEndpoint="ip--172--31--23--179-k8s-calico--apiserver--6b7865fd67--f9cpm-" Mar 14 00:18:02.446390 containerd[1999]: 2026-03-14 00:18:01.267 [INFO][5675] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="16453f3dca06007030e018f02e19ec51262eb6aa15848bbf179e39a8bcf8c3e6" Namespace="calico-system" Pod="calico-apiserver-6b7865fd67-f9cpm" WorkloadEndpoint="ip--172--31--23--179-k8s-calico--apiserver--6b7865fd67--f9cpm-eth0" Mar 14 00:18:02.446390 containerd[1999]: 2026-03-14 00:18:01.518 [INFO][5716] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="16453f3dca06007030e018f02e19ec51262eb6aa15848bbf179e39a8bcf8c3e6" HandleID="k8s-pod-network.16453f3dca06007030e018f02e19ec51262eb6aa15848bbf179e39a8bcf8c3e6" Workload="ip--172--31--23--179-k8s-calico--apiserver--6b7865fd67--f9cpm-eth0" Mar 14 00:18:02.446390 containerd[1999]: 2026-03-14 00:18:01.547 [INFO][5716] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="16453f3dca06007030e018f02e19ec51262eb6aa15848bbf179e39a8bcf8c3e6" HandleID="k8s-pod-network.16453f3dca06007030e018f02e19ec51262eb6aa15848bbf179e39a8bcf8c3e6" Workload="ip--172--31--23--179-k8s-calico--apiserver--6b7865fd67--f9cpm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00055c120), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-179", "pod":"calico-apiserver-6b7865fd67-f9cpm", "timestamp":"2026-03-14 00:18:01.51810918 +0000 UTC"}, Hostname:"ip-172-31-23-179", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000456000)} Mar 14 00:18:02.446390 containerd[1999]: 2026-03-14 00:18:01.547 [INFO][5716] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:18:02.446390 containerd[1999]: 2026-03-14 00:18:01.661 [INFO][5716] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:18:02.446390 containerd[1999]: 2026-03-14 00:18:01.662 [INFO][5716] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-179' Mar 14 00:18:02.446390 containerd[1999]: 2026-03-14 00:18:01.679 [INFO][5716] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.16453f3dca06007030e018f02e19ec51262eb6aa15848bbf179e39a8bcf8c3e6" host="ip-172-31-23-179" Mar 14 00:18:02.446390 containerd[1999]: 2026-03-14 00:18:01.733 [INFO][5716] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-23-179" Mar 14 00:18:02.446390 containerd[1999]: 2026-03-14 00:18:01.793 [INFO][5716] ipam/ipam.go 526: Trying affinity for 192.168.20.64/26 host="ip-172-31-23-179" Mar 14 00:18:02.446390 containerd[1999]: 2026-03-14 00:18:01.832 [INFO][5716] ipam/ipam.go 160: Attempting to load block cidr=192.168.20.64/26 host="ip-172-31-23-179" Mar 14 00:18:02.446390 containerd[1999]: 2026-03-14 00:18:01.890 [INFO][5716] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.20.64/26 host="ip-172-31-23-179" Mar 14 00:18:02.446390 containerd[1999]: 2026-03-14 00:18:01.891 [INFO][5716] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.20.64/26 handle="k8s-pod-network.16453f3dca06007030e018f02e19ec51262eb6aa15848bbf179e39a8bcf8c3e6" host="ip-172-31-23-179" Mar 14 00:18:02.446390 containerd[1999]: 2026-03-14 00:18:01.942 [INFO][5716] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.16453f3dca06007030e018f02e19ec51262eb6aa15848bbf179e39a8bcf8c3e6 Mar 14 00:18:02.446390 containerd[1999]: 2026-03-14 00:18:01.978 [INFO][5716] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.20.64/26 handle="k8s-pod-network.16453f3dca06007030e018f02e19ec51262eb6aa15848bbf179e39a8bcf8c3e6" host="ip-172-31-23-179" Mar 14 00:18:02.446390 containerd[1999]: 2026-03-14 00:18:02.059 [INFO][5716] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.20.70/26] block=192.168.20.64/26 
handle="k8s-pod-network.16453f3dca06007030e018f02e19ec51262eb6aa15848bbf179e39a8bcf8c3e6" host="ip-172-31-23-179" Mar 14 00:18:02.446390 containerd[1999]: 2026-03-14 00:18:02.059 [INFO][5716] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.20.70/26] handle="k8s-pod-network.16453f3dca06007030e018f02e19ec51262eb6aa15848bbf179e39a8bcf8c3e6" host="ip-172-31-23-179" Mar 14 00:18:02.446390 containerd[1999]: 2026-03-14 00:18:02.059 [INFO][5716] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:18:02.446390 containerd[1999]: 2026-03-14 00:18:02.059 [INFO][5716] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.20.70/26] IPv6=[] ContainerID="16453f3dca06007030e018f02e19ec51262eb6aa15848bbf179e39a8bcf8c3e6" HandleID="k8s-pod-network.16453f3dca06007030e018f02e19ec51262eb6aa15848bbf179e39a8bcf8c3e6" Workload="ip--172--31--23--179-k8s-calico--apiserver--6b7865fd67--f9cpm-eth0" Mar 14 00:18:02.448725 containerd[1999]: 2026-03-14 00:18:02.100 [INFO][5675] cni-plugin/k8s.go 418: Populated endpoint ContainerID="16453f3dca06007030e018f02e19ec51262eb6aa15848bbf179e39a8bcf8c3e6" Namespace="calico-system" Pod="calico-apiserver-6b7865fd67-f9cpm" WorkloadEndpoint="ip--172--31--23--179-k8s-calico--apiserver--6b7865fd67--f9cpm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--179-k8s-calico--apiserver--6b7865fd67--f9cpm-eth0", GenerateName:"calico-apiserver-6b7865fd67-", Namespace:"calico-system", SelfLink:"", UID:"254c54ac-8310-4004-8ce4-5125625b2db5", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 17, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b7865fd67", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-179", ContainerID:"", Pod:"calico-apiserver-6b7865fd67-f9cpm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali19bfee13a71", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:18:02.448725 containerd[1999]: 2026-03-14 00:18:02.105 [INFO][5675] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.70/32] ContainerID="16453f3dca06007030e018f02e19ec51262eb6aa15848bbf179e39a8bcf8c3e6" Namespace="calico-system" Pod="calico-apiserver-6b7865fd67-f9cpm" WorkloadEndpoint="ip--172--31--23--179-k8s-calico--apiserver--6b7865fd67--f9cpm-eth0" Mar 14 00:18:02.448725 containerd[1999]: 2026-03-14 00:18:02.127 [INFO][5675] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali19bfee13a71 ContainerID="16453f3dca06007030e018f02e19ec51262eb6aa15848bbf179e39a8bcf8c3e6" Namespace="calico-system" Pod="calico-apiserver-6b7865fd67-f9cpm" WorkloadEndpoint="ip--172--31--23--179-k8s-calico--apiserver--6b7865fd67--f9cpm-eth0" Mar 14 00:18:02.448725 containerd[1999]: 2026-03-14 00:18:02.238 [INFO][5675] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="16453f3dca06007030e018f02e19ec51262eb6aa15848bbf179e39a8bcf8c3e6" Namespace="calico-system" Pod="calico-apiserver-6b7865fd67-f9cpm" WorkloadEndpoint="ip--172--31--23--179-k8s-calico--apiserver--6b7865fd67--f9cpm-eth0" Mar 14 00:18:02.448725 containerd[1999]: 2026-03-14 00:18:02.252 [INFO][5675] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="16453f3dca06007030e018f02e19ec51262eb6aa15848bbf179e39a8bcf8c3e6" Namespace="calico-system" Pod="calico-apiserver-6b7865fd67-f9cpm" WorkloadEndpoint="ip--172--31--23--179-k8s-calico--apiserver--6b7865fd67--f9cpm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--179-k8s-calico--apiserver--6b7865fd67--f9cpm-eth0", GenerateName:"calico-apiserver-6b7865fd67-", Namespace:"calico-system", SelfLink:"", UID:"254c54ac-8310-4004-8ce4-5125625b2db5", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 17, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b7865fd67", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-179", ContainerID:"16453f3dca06007030e018f02e19ec51262eb6aa15848bbf179e39a8bcf8c3e6", Pod:"calico-apiserver-6b7865fd67-f9cpm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali19bfee13a71", MAC:"ea:82:9b:dc:18:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:18:02.448725 containerd[1999]: 2026-03-14 00:18:02.388 [INFO][5675] cni-plugin/k8s.go 532: 
Wrote updated endpoint to datastore ContainerID="16453f3dca06007030e018f02e19ec51262eb6aa15848bbf179e39a8bcf8c3e6" Namespace="calico-system" Pod="calico-apiserver-6b7865fd67-f9cpm" WorkloadEndpoint="ip--172--31--23--179-k8s-calico--apiserver--6b7865fd67--f9cpm-eth0" Mar 14 00:18:02.517475 systemd-networkd[1807]: cali397961f03fd: Link UP Mar 14 00:18:02.521720 systemd-networkd[1807]: cali397961f03fd: Gained carrier Mar 14 00:18:02.534635 systemd[1]: Started cri-containerd-07c3fc53aed6bb08e44e7ac0024d50ed81fbbc34a1e8ef743d263a76dac5f610.scope - libcontainer container 07c3fc53aed6bb08e44e7ac0024d50ed81fbbc34a1e8ef743d263a76dac5f610. Mar 14 00:18:02.629208 containerd[1999]: 2026-03-14 00:18:01.568 [INFO][5718] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--179-k8s-csi--node--driver--rwsn2-eth0 csi-node-driver- calico-system 69ad5735-6879-4268-98e4-4112168b29a6 1076 0 2026-03-14 00:17:20 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-23-179 csi-node-driver-rwsn2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali397961f03fd [] [] }} ContainerID="1bf22cc7e7d30e7f08ba483f704c43ac68527e1913e2fd65512b01dabe8189fa" Namespace="calico-system" Pod="csi-node-driver-rwsn2" WorkloadEndpoint="ip--172--31--23--179-k8s-csi--node--driver--rwsn2-" Mar 14 00:18:02.629208 containerd[1999]: 2026-03-14 00:18:01.568 [INFO][5718] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1bf22cc7e7d30e7f08ba483f704c43ac68527e1913e2fd65512b01dabe8189fa" Namespace="calico-system" Pod="csi-node-driver-rwsn2" WorkloadEndpoint="ip--172--31--23--179-k8s-csi--node--driver--rwsn2-eth0" Mar 14 00:18:02.629208 
containerd[1999]: 2026-03-14 00:18:01.808 [INFO][5770] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1bf22cc7e7d30e7f08ba483f704c43ac68527e1913e2fd65512b01dabe8189fa" HandleID="k8s-pod-network.1bf22cc7e7d30e7f08ba483f704c43ac68527e1913e2fd65512b01dabe8189fa" Workload="ip--172--31--23--179-k8s-csi--node--driver--rwsn2-eth0" Mar 14 00:18:02.629208 containerd[1999]: 2026-03-14 00:18:02.014 [INFO][5770] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1bf22cc7e7d30e7f08ba483f704c43ac68527e1913e2fd65512b01dabe8189fa" HandleID="k8s-pod-network.1bf22cc7e7d30e7f08ba483f704c43ac68527e1913e2fd65512b01dabe8189fa" Workload="ip--172--31--23--179-k8s-csi--node--driver--rwsn2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0006199a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-179", "pod":"csi-node-driver-rwsn2", "timestamp":"2026-03-14 00:18:01.808070311 +0000 UTC"}, Hostname:"ip-172-31-23-179", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000642420)} Mar 14 00:18:02.629208 containerd[1999]: 2026-03-14 00:18:02.014 [INFO][5770] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:18:02.629208 containerd[1999]: 2026-03-14 00:18:02.072 [INFO][5770] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:18:02.629208 containerd[1999]: 2026-03-14 00:18:02.072 [INFO][5770] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-179' Mar 14 00:18:02.629208 containerd[1999]: 2026-03-14 00:18:02.113 [INFO][5770] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1bf22cc7e7d30e7f08ba483f704c43ac68527e1913e2fd65512b01dabe8189fa" host="ip-172-31-23-179" Mar 14 00:18:02.629208 containerd[1999]: 2026-03-14 00:18:02.176 [INFO][5770] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-23-179" Mar 14 00:18:02.629208 containerd[1999]: 2026-03-14 00:18:02.237 [INFO][5770] ipam/ipam.go 526: Trying affinity for 192.168.20.64/26 host="ip-172-31-23-179" Mar 14 00:18:02.629208 containerd[1999]: 2026-03-14 00:18:02.290 [INFO][5770] ipam/ipam.go 160: Attempting to load block cidr=192.168.20.64/26 host="ip-172-31-23-179" Mar 14 00:18:02.629208 containerd[1999]: 2026-03-14 00:18:02.361 [INFO][5770] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.20.64/26 host="ip-172-31-23-179" Mar 14 00:18:02.629208 containerd[1999]: 2026-03-14 00:18:02.361 [INFO][5770] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.20.64/26 handle="k8s-pod-network.1bf22cc7e7d30e7f08ba483f704c43ac68527e1913e2fd65512b01dabe8189fa" host="ip-172-31-23-179" Mar 14 00:18:02.629208 containerd[1999]: 2026-03-14 00:18:02.398 [INFO][5770] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1bf22cc7e7d30e7f08ba483f704c43ac68527e1913e2fd65512b01dabe8189fa Mar 14 00:18:02.629208 containerd[1999]: 2026-03-14 00:18:02.453 [INFO][5770] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.20.64/26 handle="k8s-pod-network.1bf22cc7e7d30e7f08ba483f704c43ac68527e1913e2fd65512b01dabe8189fa" host="ip-172-31-23-179" Mar 14 00:18:02.629208 containerd[1999]: 2026-03-14 00:18:02.486 [INFO][5770] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.20.71/26] block=192.168.20.64/26 
handle="k8s-pod-network.1bf22cc7e7d30e7f08ba483f704c43ac68527e1913e2fd65512b01dabe8189fa" host="ip-172-31-23-179" Mar 14 00:18:02.629208 containerd[1999]: 2026-03-14 00:18:02.486 [INFO][5770] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.20.71/26] handle="k8s-pod-network.1bf22cc7e7d30e7f08ba483f704c43ac68527e1913e2fd65512b01dabe8189fa" host="ip-172-31-23-179" Mar 14 00:18:02.629208 containerd[1999]: 2026-03-14 00:18:02.486 [INFO][5770] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:18:02.629208 containerd[1999]: 2026-03-14 00:18:02.486 [INFO][5770] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.20.71/26] IPv6=[] ContainerID="1bf22cc7e7d30e7f08ba483f704c43ac68527e1913e2fd65512b01dabe8189fa" HandleID="k8s-pod-network.1bf22cc7e7d30e7f08ba483f704c43ac68527e1913e2fd65512b01dabe8189fa" Workload="ip--172--31--23--179-k8s-csi--node--driver--rwsn2-eth0" Mar 14 00:18:02.630928 containerd[1999]: 2026-03-14 00:18:02.509 [INFO][5718] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1bf22cc7e7d30e7f08ba483f704c43ac68527e1913e2fd65512b01dabe8189fa" Namespace="calico-system" Pod="csi-node-driver-rwsn2" WorkloadEndpoint="ip--172--31--23--179-k8s-csi--node--driver--rwsn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--179-k8s-csi--node--driver--rwsn2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"69ad5735-6879-4268-98e4-4112168b29a6", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 17, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-179", ContainerID:"", Pod:"csi-node-driver-rwsn2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.20.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali397961f03fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:18:02.630928 containerd[1999]: 2026-03-14 00:18:02.511 [INFO][5718] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.71/32] ContainerID="1bf22cc7e7d30e7f08ba483f704c43ac68527e1913e2fd65512b01dabe8189fa" Namespace="calico-system" Pod="csi-node-driver-rwsn2" WorkloadEndpoint="ip--172--31--23--179-k8s-csi--node--driver--rwsn2-eth0" Mar 14 00:18:02.630928 containerd[1999]: 2026-03-14 00:18:02.511 [INFO][5718] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali397961f03fd ContainerID="1bf22cc7e7d30e7f08ba483f704c43ac68527e1913e2fd65512b01dabe8189fa" Namespace="calico-system" Pod="csi-node-driver-rwsn2" WorkloadEndpoint="ip--172--31--23--179-k8s-csi--node--driver--rwsn2-eth0" Mar 14 00:18:02.630928 containerd[1999]: 2026-03-14 00:18:02.520 [INFO][5718] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1bf22cc7e7d30e7f08ba483f704c43ac68527e1913e2fd65512b01dabe8189fa" Namespace="calico-system" Pod="csi-node-driver-rwsn2" WorkloadEndpoint="ip--172--31--23--179-k8s-csi--node--driver--rwsn2-eth0" Mar 14 00:18:02.630928 containerd[1999]: 2026-03-14 00:18:02.527 [INFO][5718] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1bf22cc7e7d30e7f08ba483f704c43ac68527e1913e2fd65512b01dabe8189fa" Namespace="calico-system" Pod="csi-node-driver-rwsn2" WorkloadEndpoint="ip--172--31--23--179-k8s-csi--node--driver--rwsn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--179-k8s-csi--node--driver--rwsn2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"69ad5735-6879-4268-98e4-4112168b29a6", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 17, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-179", ContainerID:"1bf22cc7e7d30e7f08ba483f704c43ac68527e1913e2fd65512b01dabe8189fa", Pod:"csi-node-driver-rwsn2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.20.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali397961f03fd", MAC:"a6:9f:eb:94:96:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:18:02.630928 containerd[1999]: 2026-03-14 00:18:02.603 [INFO][5718] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="1bf22cc7e7d30e7f08ba483f704c43ac68527e1913e2fd65512b01dabe8189fa" Namespace="calico-system" Pod="csi-node-driver-rwsn2" WorkloadEndpoint="ip--172--31--23--179-k8s-csi--node--driver--rwsn2-eth0" Mar 14 00:18:02.657378 containerd[1999]: time="2026-03-14T00:18:02.649217339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:18:02.657378 containerd[1999]: time="2026-03-14T00:18:02.655072211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:18:02.657378 containerd[1999]: time="2026-03-14T00:18:02.655137991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:18:02.657378 containerd[1999]: time="2026-03-14T00:18:02.655271008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:18:02.729982 systemd[1]: Started cri-containerd-16453f3dca06007030e018f02e19ec51262eb6aa15848bbf179e39a8bcf8c3e6.scope - libcontainer container 16453f3dca06007030e018f02e19ec51262eb6aa15848bbf179e39a8bcf8c3e6. Mar 14 00:18:02.763760 systemd[1]: run-containerd-runc-k8s.io-16453f3dca06007030e018f02e19ec51262eb6aa15848bbf179e39a8bcf8c3e6-runc.00wnAS.mount: Deactivated successfully. Mar 14 00:18:02.794373 containerd[1999]: time="2026-03-14T00:18:02.794315361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-gmvx8,Uid:d60e0a0f-100e-4b16-8d9e-155aae6fac41,Namespace:kube-system,Attempt:1,} returns sandbox id \"07c3fc53aed6bb08e44e7ac0024d50ed81fbbc34a1e8ef743d263a76dac5f610\"" Mar 14 00:18:02.801851 containerd[1999]: time="2026-03-14T00:18:02.800601456Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:18:02.804764 containerd[1999]: time="2026-03-14T00:18:02.800714348Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:18:02.804764 containerd[1999]: time="2026-03-14T00:18:02.802642021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:18:02.804764 containerd[1999]: time="2026-03-14T00:18:02.802973873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:18:02.818627 containerd[1999]: time="2026-03-14T00:18:02.817363110Z" level=info msg="CreateContainer within sandbox \"07c3fc53aed6bb08e44e7ac0024d50ed81fbbc34a1e8ef743d263a76dac5f610\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 14 00:18:02.867351 containerd[1999]: time="2026-03-14T00:18:02.867275176Z" level=info msg="CreateContainer within sandbox \"07c3fc53aed6bb08e44e7ac0024d50ed81fbbc34a1e8ef743d263a76dac5f610\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"89b15b0c3639f89e02308185e5e59933815c61a44f245450b78d8e210a81deae\"" Mar 14 00:18:02.869556 containerd[1999]: time="2026-03-14T00:18:02.868357752Z" level=info msg="StartContainer for \"89b15b0c3639f89e02308185e5e59933815c61a44f245450b78d8e210a81deae\"" Mar 14 00:18:02.900605 systemd[1]: Started cri-containerd-1bf22cc7e7d30e7f08ba483f704c43ac68527e1913e2fd65512b01dabe8189fa.scope - libcontainer container 1bf22cc7e7d30e7f08ba483f704c43ac68527e1913e2fd65512b01dabe8189fa. 
Mar 14 00:18:02.996414 containerd[1999]: 2026-03-14 00:18:02.701 [INFO][5810] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fd0dd3a729dbe1c9ed8dc6b92cae51bed81b06adf6dce9b0b6ae38fc6b80d0e8" Mar 14 00:18:02.996414 containerd[1999]: 2026-03-14 00:18:02.701 [INFO][5810] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fd0dd3a729dbe1c9ed8dc6b92cae51bed81b06adf6dce9b0b6ae38fc6b80d0e8" iface="eth0" netns="/var/run/netns/cni-a9a51c0a-7309-613d-4185-35bfcb4b8029" Mar 14 00:18:02.996414 containerd[1999]: 2026-03-14 00:18:02.702 [INFO][5810] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fd0dd3a729dbe1c9ed8dc6b92cae51bed81b06adf6dce9b0b6ae38fc6b80d0e8" iface="eth0" netns="/var/run/netns/cni-a9a51c0a-7309-613d-4185-35bfcb4b8029" Mar 14 00:18:02.996414 containerd[1999]: 2026-03-14 00:18:02.704 [INFO][5810] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fd0dd3a729dbe1c9ed8dc6b92cae51bed81b06adf6dce9b0b6ae38fc6b80d0e8" iface="eth0" netns="/var/run/netns/cni-a9a51c0a-7309-613d-4185-35bfcb4b8029" Mar 14 00:18:02.996414 containerd[1999]: 2026-03-14 00:18:02.704 [INFO][5810] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fd0dd3a729dbe1c9ed8dc6b92cae51bed81b06adf6dce9b0b6ae38fc6b80d0e8" Mar 14 00:18:02.996414 containerd[1999]: 2026-03-14 00:18:02.704 [INFO][5810] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fd0dd3a729dbe1c9ed8dc6b92cae51bed81b06adf6dce9b0b6ae38fc6b80d0e8" Mar 14 00:18:02.996414 containerd[1999]: 2026-03-14 00:18:02.932 [INFO][5895] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fd0dd3a729dbe1c9ed8dc6b92cae51bed81b06adf6dce9b0b6ae38fc6b80d0e8" HandleID="k8s-pod-network.fd0dd3a729dbe1c9ed8dc6b92cae51bed81b06adf6dce9b0b6ae38fc6b80d0e8" Workload="ip--172--31--23--179-k8s-goldmane--9f7667bb8--b6wbf-eth0" Mar 14 00:18:02.996414 containerd[1999]: 2026-03-14 00:18:02.937 [INFO][5895] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:18:02.996414 containerd[1999]: 2026-03-14 00:18:02.937 [INFO][5895] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:18:02.996414 containerd[1999]: 2026-03-14 00:18:02.971 [WARNING][5895] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="fd0dd3a729dbe1c9ed8dc6b92cae51bed81b06adf6dce9b0b6ae38fc6b80d0e8" HandleID="k8s-pod-network.fd0dd3a729dbe1c9ed8dc6b92cae51bed81b06adf6dce9b0b6ae38fc6b80d0e8" Workload="ip--172--31--23--179-k8s-goldmane--9f7667bb8--b6wbf-eth0" Mar 14 00:18:02.996414 containerd[1999]: 2026-03-14 00:18:02.971 [INFO][5895] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fd0dd3a729dbe1c9ed8dc6b92cae51bed81b06adf6dce9b0b6ae38fc6b80d0e8" HandleID="k8s-pod-network.fd0dd3a729dbe1c9ed8dc6b92cae51bed81b06adf6dce9b0b6ae38fc6b80d0e8" Workload="ip--172--31--23--179-k8s-goldmane--9f7667bb8--b6wbf-eth0" Mar 14 00:18:02.996414 containerd[1999]: 2026-03-14 00:18:02.979 [INFO][5895] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:18:02.996414 containerd[1999]: 2026-03-14 00:18:02.992 [INFO][5810] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="fd0dd3a729dbe1c9ed8dc6b92cae51bed81b06adf6dce9b0b6ae38fc6b80d0e8" Mar 14 00:18:02.998782 containerd[1999]: time="2026-03-14T00:18:02.998737397Z" level=info msg="TearDown network for sandbox \"fd0dd3a729dbe1c9ed8dc6b92cae51bed81b06adf6dce9b0b6ae38fc6b80d0e8\" successfully" Mar 14 00:18:02.998895 containerd[1999]: time="2026-03-14T00:18:02.998793832Z" level=info msg="StopPodSandbox for \"fd0dd3a729dbe1c9ed8dc6b92cae51bed81b06adf6dce9b0b6ae38fc6b80d0e8\" returns successfully" Mar 14 00:18:03.002483 containerd[1999]: time="2026-03-14T00:18:03.002379505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-b6wbf,Uid:77dd6a86-ca74-4e28-b372-e452edb98c28,Namespace:calico-system,Attempt:1,}" Mar 14 00:18:03.016578 systemd[1]: Started cri-containerd-89b15b0c3639f89e02308185e5e59933815c61a44f245450b78d8e210a81deae.scope - libcontainer container 89b15b0c3639f89e02308185e5e59933815c61a44f245450b78d8e210a81deae. Mar 14 00:18:03.042060 containerd[1999]: time="2026-03-14T00:18:03.042017959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rwsn2,Uid:69ad5735-6879-4268-98e4-4112168b29a6,Namespace:calico-system,Attempt:1,} returns sandbox id \"1bf22cc7e7d30e7f08ba483f704c43ac68527e1913e2fd65512b01dabe8189fa\"" Mar 14 00:18:03.054544 systemd-networkd[1807]: caliebd05a362e7: Gained IPv6LL Mar 14 00:18:03.126680 containerd[1999]: time="2026-03-14T00:18:03.125947791Z" level=info msg="StartContainer for \"89b15b0c3639f89e02308185e5e59933815c61a44f245450b78d8e210a81deae\" returns successfully" Mar 14 00:18:03.141848 containerd[1999]: time="2026-03-14T00:18:03.141794339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b7865fd67-f9cpm,Uid:254c54ac-8310-4004-8ce4-5125625b2db5,Namespace:calico-system,Attempt:1,} returns sandbox id \"16453f3dca06007030e018f02e19ec51262eb6aa15848bbf179e39a8bcf8c3e6\"" Mar 14 00:18:03.415137 systemd-networkd[1807]: cali1e2a9482a9c: Link UP Mar 14 00:18:03.416844 
systemd-networkd[1807]: cali1e2a9482a9c: Gained carrier Mar 14 00:18:03.438657 systemd-networkd[1807]: cali19bfee13a71: Gained IPv6LL Mar 14 00:18:03.463350 containerd[1999]: 2026-03-14 00:18:03.181 [INFO][5983] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--179-k8s-goldmane--9f7667bb8--b6wbf-eth0 goldmane-9f7667bb8- calico-system 77dd6a86-ca74-4e28-b372-e452edb98c28 1109 0 2026-03-14 00:17:19 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-23-179 goldmane-9f7667bb8-b6wbf eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1e2a9482a9c [] [] }} ContainerID="5c71ec5b09bf28edfecb155aed26422a902e1994eab093c3dd9eef45dd56ae51" Namespace="calico-system" Pod="goldmane-9f7667bb8-b6wbf" WorkloadEndpoint="ip--172--31--23--179-k8s-goldmane--9f7667bb8--b6wbf-" Mar 14 00:18:03.463350 containerd[1999]: 2026-03-14 00:18:03.181 [INFO][5983] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5c71ec5b09bf28edfecb155aed26422a902e1994eab093c3dd9eef45dd56ae51" Namespace="calico-system" Pod="goldmane-9f7667bb8-b6wbf" WorkloadEndpoint="ip--172--31--23--179-k8s-goldmane--9f7667bb8--b6wbf-eth0" Mar 14 00:18:03.463350 containerd[1999]: 2026-03-14 00:18:03.305 [INFO][6012] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5c71ec5b09bf28edfecb155aed26422a902e1994eab093c3dd9eef45dd56ae51" HandleID="k8s-pod-network.5c71ec5b09bf28edfecb155aed26422a902e1994eab093c3dd9eef45dd56ae51" Workload="ip--172--31--23--179-k8s-goldmane--9f7667bb8--b6wbf-eth0" Mar 14 00:18:03.463350 containerd[1999]: 2026-03-14 00:18:03.319 [INFO][6012] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5c71ec5b09bf28edfecb155aed26422a902e1994eab093c3dd9eef45dd56ae51" 
HandleID="k8s-pod-network.5c71ec5b09bf28edfecb155aed26422a902e1994eab093c3dd9eef45dd56ae51" Workload="ip--172--31--23--179-k8s-goldmane--9f7667bb8--b6wbf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00040d5f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-179", "pod":"goldmane-9f7667bb8-b6wbf", "timestamp":"2026-03-14 00:18:03.305052021 +0000 UTC"}, Hostname:"ip-172-31-23-179", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000384c60)} Mar 14 00:18:03.463350 containerd[1999]: 2026-03-14 00:18:03.319 [INFO][6012] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:18:03.463350 containerd[1999]: 2026-03-14 00:18:03.319 [INFO][6012] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:18:03.463350 containerd[1999]: 2026-03-14 00:18:03.319 [INFO][6012] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-179' Mar 14 00:18:03.463350 containerd[1999]: 2026-03-14 00:18:03.325 [INFO][6012] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5c71ec5b09bf28edfecb155aed26422a902e1994eab093c3dd9eef45dd56ae51" host="ip-172-31-23-179" Mar 14 00:18:03.463350 containerd[1999]: 2026-03-14 00:18:03.334 [INFO][6012] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-23-179" Mar 14 00:18:03.463350 containerd[1999]: 2026-03-14 00:18:03.343 [INFO][6012] ipam/ipam.go 526: Trying affinity for 192.168.20.64/26 host="ip-172-31-23-179" Mar 14 00:18:03.463350 containerd[1999]: 2026-03-14 00:18:03.348 [INFO][6012] ipam/ipam.go 160: Attempting to load block cidr=192.168.20.64/26 host="ip-172-31-23-179" Mar 14 00:18:03.463350 containerd[1999]: 2026-03-14 00:18:03.355 [INFO][6012] ipam/ipam.go 237: Affinity is confirmed and block has been loaded 
cidr=192.168.20.64/26 host="ip-172-31-23-179" Mar 14 00:18:03.463350 containerd[1999]: 2026-03-14 00:18:03.355 [INFO][6012] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.20.64/26 handle="k8s-pod-network.5c71ec5b09bf28edfecb155aed26422a902e1994eab093c3dd9eef45dd56ae51" host="ip-172-31-23-179" Mar 14 00:18:03.463350 containerd[1999]: 2026-03-14 00:18:03.360 [INFO][6012] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5c71ec5b09bf28edfecb155aed26422a902e1994eab093c3dd9eef45dd56ae51 Mar 14 00:18:03.463350 containerd[1999]: 2026-03-14 00:18:03.377 [INFO][6012] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.20.64/26 handle="k8s-pod-network.5c71ec5b09bf28edfecb155aed26422a902e1994eab093c3dd9eef45dd56ae51" host="ip-172-31-23-179" Mar 14 00:18:03.463350 containerd[1999]: 2026-03-14 00:18:03.402 [INFO][6012] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.20.72/26] block=192.168.20.64/26 handle="k8s-pod-network.5c71ec5b09bf28edfecb155aed26422a902e1994eab093c3dd9eef45dd56ae51" host="ip-172-31-23-179" Mar 14 00:18:03.463350 containerd[1999]: 2026-03-14 00:18:03.403 [INFO][6012] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.20.72/26] handle="k8s-pod-network.5c71ec5b09bf28edfecb155aed26422a902e1994eab093c3dd9eef45dd56ae51" host="ip-172-31-23-179" Mar 14 00:18:03.463350 containerd[1999]: 2026-03-14 00:18:03.403 [INFO][6012] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 14 00:18:03.463350 containerd[1999]: 2026-03-14 00:18:03.403 [INFO][6012] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.20.72/26] IPv6=[] ContainerID="5c71ec5b09bf28edfecb155aed26422a902e1994eab093c3dd9eef45dd56ae51" HandleID="k8s-pod-network.5c71ec5b09bf28edfecb155aed26422a902e1994eab093c3dd9eef45dd56ae51" Workload="ip--172--31--23--179-k8s-goldmane--9f7667bb8--b6wbf-eth0" Mar 14 00:18:03.464481 containerd[1999]: 2026-03-14 00:18:03.409 [INFO][5983] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5c71ec5b09bf28edfecb155aed26422a902e1994eab093c3dd9eef45dd56ae51" Namespace="calico-system" Pod="goldmane-9f7667bb8-b6wbf" WorkloadEndpoint="ip--172--31--23--179-k8s-goldmane--9f7667bb8--b6wbf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--179-k8s-goldmane--9f7667bb8--b6wbf-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"77dd6a86-ca74-4e28-b372-e452edb98c28", ResourceVersion:"1109", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 17, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-179", ContainerID:"", Pod:"goldmane-9f7667bb8-b6wbf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.20.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali1e2a9482a9c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:18:03.464481 containerd[1999]: 2026-03-14 00:18:03.409 [INFO][5983] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.72/32] ContainerID="5c71ec5b09bf28edfecb155aed26422a902e1994eab093c3dd9eef45dd56ae51" Namespace="calico-system" Pod="goldmane-9f7667bb8-b6wbf" WorkloadEndpoint="ip--172--31--23--179-k8s-goldmane--9f7667bb8--b6wbf-eth0" Mar 14 00:18:03.464481 containerd[1999]: 2026-03-14 00:18:03.409 [INFO][5983] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e2a9482a9c ContainerID="5c71ec5b09bf28edfecb155aed26422a902e1994eab093c3dd9eef45dd56ae51" Namespace="calico-system" Pod="goldmane-9f7667bb8-b6wbf" WorkloadEndpoint="ip--172--31--23--179-k8s-goldmane--9f7667bb8--b6wbf-eth0" Mar 14 00:18:03.464481 containerd[1999]: 2026-03-14 00:18:03.414 [INFO][5983] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5c71ec5b09bf28edfecb155aed26422a902e1994eab093c3dd9eef45dd56ae51" Namespace="calico-system" Pod="goldmane-9f7667bb8-b6wbf" WorkloadEndpoint="ip--172--31--23--179-k8s-goldmane--9f7667bb8--b6wbf-eth0" Mar 14 00:18:03.464481 containerd[1999]: 2026-03-14 00:18:03.419 [INFO][5983] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5c71ec5b09bf28edfecb155aed26422a902e1994eab093c3dd9eef45dd56ae51" Namespace="calico-system" Pod="goldmane-9f7667bb8-b6wbf" WorkloadEndpoint="ip--172--31--23--179-k8s-goldmane--9f7667bb8--b6wbf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--179-k8s-goldmane--9f7667bb8--b6wbf-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"77dd6a86-ca74-4e28-b372-e452edb98c28", ResourceVersion:"1109", Generation:0, 
CreationTimestamp:time.Date(2026, time.March, 14, 0, 17, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-179", ContainerID:"5c71ec5b09bf28edfecb155aed26422a902e1994eab093c3dd9eef45dd56ae51", Pod:"goldmane-9f7667bb8-b6wbf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.20.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1e2a9482a9c", MAC:"f6:a4:52:8b:82:be", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:18:03.464481 containerd[1999]: 2026-03-14 00:18:03.448 [INFO][5983] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5c71ec5b09bf28edfecb155aed26422a902e1994eab093c3dd9eef45dd56ae51" Namespace="calico-system" Pod="goldmane-9f7667bb8-b6wbf" WorkloadEndpoint="ip--172--31--23--179-k8s-goldmane--9f7667bb8--b6wbf-eth0" Mar 14 00:18:03.523841 containerd[1999]: time="2026-03-14T00:18:03.521967672Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:18:03.523841 containerd[1999]: time="2026-03-14T00:18:03.522047796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:18:03.523841 containerd[1999]: time="2026-03-14T00:18:03.522069361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:18:03.523841 containerd[1999]: time="2026-03-14T00:18:03.522186078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:18:03.555922 systemd[1]: run-netns-cni\x2da9a51c0a\x2d7309\x2d613d\x2d4185\x2d35bfcb4b8029.mount: Deactivated successfully. Mar 14 00:18:03.604274 systemd[1]: Started cri-containerd-5c71ec5b09bf28edfecb155aed26422a902e1994eab093c3dd9eef45dd56ae51.scope - libcontainer container 5c71ec5b09bf28edfecb155aed26422a902e1994eab093c3dd9eef45dd56ae51. Mar 14 00:18:03.727037 kubelet[3206]: I0314 00:18:03.723459 3206 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-gmvx8" podStartSLOduration=59.723436559 podStartE2EDuration="59.723436559s" podCreationTimestamp="2026-03-14 00:17:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:18:03.72285816 +0000 UTC m=+64.790281394" watchObservedRunningTime="2026-03-14 00:18:03.723436559 +0000 UTC m=+64.790859781" Mar 14 00:18:03.808228 containerd[1999]: time="2026-03-14T00:18:03.808183714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-b6wbf,Uid:77dd6a86-ca74-4e28-b372-e452edb98c28,Namespace:calico-system,Attempt:1,} returns sandbox id \"5c71ec5b09bf28edfecb155aed26422a902e1994eab093c3dd9eef45dd56ae51\"" Mar 14 00:18:04.271539 systemd-networkd[1807]: cali397961f03fd: Gained IPv6LL Mar 14 00:18:04.463087 systemd-networkd[1807]: cali1e2a9482a9c: Gained IPv6LL Mar 14 00:18:04.521750 containerd[1999]: time="2026-03-14T00:18:04.521698449Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:18:04.523843 containerd[1999]: time="2026-03-14T00:18:04.523457159Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 14 00:18:04.524822 containerd[1999]: time="2026-03-14T00:18:04.524764959Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:18:04.527827 containerd[1999]: time="2026-03-14T00:18:04.527782452Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:18:04.529346 containerd[1999]: time="2026-03-14T00:18:04.528733457Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 6.25716631s" Mar 14 00:18:04.529346 containerd[1999]: time="2026-03-14T00:18:04.528779346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 14 00:18:04.530941 containerd[1999]: time="2026-03-14T00:18:04.530727051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 14 00:18:04.646338 containerd[1999]: time="2026-03-14T00:18:04.646273141Z" level=info msg="CreateContainer within sandbox \"0aee146d67e1ecce9ea57fd0a7a80e3d79084f761e8ff6cd81baaa2696fca443\" for container 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 14 00:18:04.666382 containerd[1999]: time="2026-03-14T00:18:04.665513212Z" level=info msg="CreateContainer within sandbox \"0aee146d67e1ecce9ea57fd0a7a80e3d79084f761e8ff6cd81baaa2696fca443\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7a73635a16020acc9afebdf03ec6519c8a4d7be4d2157abd7ae60fa51779ada1\"" Mar 14 00:18:04.686023 containerd[1999]: time="2026-03-14T00:18:04.684481236Z" level=info msg="StartContainer for \"7a73635a16020acc9afebdf03ec6519c8a4d7be4d2157abd7ae60fa51779ada1\"" Mar 14 00:18:04.794248 systemd[1]: Started cri-containerd-7a73635a16020acc9afebdf03ec6519c8a4d7be4d2157abd7ae60fa51779ada1.scope - libcontainer container 7a73635a16020acc9afebdf03ec6519c8a4d7be4d2157abd7ae60fa51779ada1. Mar 14 00:18:04.885192 containerd[1999]: time="2026-03-14T00:18:04.885145780Z" level=info msg="StartContainer for \"7a73635a16020acc9afebdf03ec6519c8a4d7be4d2157abd7ae60fa51779ada1\" returns successfully" Mar 14 00:18:05.849398 kubelet[3206]: I0314 00:18:05.849302 3206 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7b6c7c5dc8-27jg8" podStartSLOduration=39.588422097 podStartE2EDuration="45.849278851s" podCreationTimestamp="2026-03-14 00:17:20 +0000 UTC" firstStartedPulling="2026-03-14 00:17:58.269680695 +0000 UTC m=+59.337103898" lastFinishedPulling="2026-03-14 00:18:04.530537438 +0000 UTC m=+65.597960652" observedRunningTime="2026-03-14 00:18:05.845059652 +0000 UTC m=+66.912482900" watchObservedRunningTime="2026-03-14 00:18:05.849278851 +0000 UTC m=+66.916702073" Mar 14 00:18:06.037542 systemd[1]: Started sshd@11-172.31.23.179:22-68.220.241.50:39278.service - OpenSSH per-connection server daemon (68.220.241.50:39278). 
Mar 14 00:18:06.604759 ntpd[1955]: Listen normally on 8 vxlan.calico 192.168.20.64:123 Mar 14 00:18:06.604857 ntpd[1955]: Listen normally on 9 cali137a65b8352 [fe80::ecee:eeff:feee:eeee%4]:123 Mar 14 00:18:06.625470 ntpd[1955]: 14 Mar 00:18:06 ntpd[1955]: Listen normally on 8 vxlan.calico 192.168.20.64:123 Mar 14 00:18:06.625470 ntpd[1955]: 14 Mar 00:18:06 ntpd[1955]: Listen normally on 9 cali137a65b8352 [fe80::ecee:eeff:feee:eeee%4]:123 Mar 14 00:18:06.625470 ntpd[1955]: 14 Mar 00:18:06 ntpd[1955]: Listen normally on 10 vxlan.calico [fe80::6417:ceff:fe5e:b2fc%5]:123 Mar 14 00:18:06.625470 ntpd[1955]: 14 Mar 00:18:06 ntpd[1955]: Listen normally on 11 cali78c80336f12 [fe80::ecee:eeff:feee:eeee%8]:123 Mar 14 00:18:06.625470 ntpd[1955]: 14 Mar 00:18:06 ntpd[1955]: Listen normally on 12 cali535ce32ec59 [fe80::ecee:eeff:feee:eeee%9]:123 Mar 14 00:18:06.625470 ntpd[1955]: 14 Mar 00:18:06 ntpd[1955]: Listen normally on 13 calib62384b8f23 [fe80::ecee:eeff:feee:eeee%10]:123 Mar 14 00:18:06.625470 ntpd[1955]: 14 Mar 00:18:06 ntpd[1955]: Listen normally on 14 caliebd05a362e7 [fe80::ecee:eeff:feee:eeee%11]:123 Mar 14 00:18:06.625470 ntpd[1955]: 14 Mar 00:18:06 ntpd[1955]: Listen normally on 15 cali19bfee13a71 [fe80::ecee:eeff:feee:eeee%12]:123 Mar 14 00:18:06.625470 ntpd[1955]: 14 Mar 00:18:06 ntpd[1955]: Listen normally on 16 cali397961f03fd [fe80::ecee:eeff:feee:eeee%13]:123 Mar 14 00:18:06.625470 ntpd[1955]: 14 Mar 00:18:06 ntpd[1955]: Listen normally on 17 cali1e2a9482a9c [fe80::ecee:eeff:feee:eeee%14]:123 Mar 14 00:18:06.604918 ntpd[1955]: Listen normally on 10 vxlan.calico [fe80::6417:ceff:fe5e:b2fc%5]:123 Mar 14 00:18:06.604964 ntpd[1955]: Listen normally on 11 cali78c80336f12 [fe80::ecee:eeff:feee:eeee%8]:123 Mar 14 00:18:06.605007 ntpd[1955]: Listen normally on 12 cali535ce32ec59 [fe80::ecee:eeff:feee:eeee%9]:123 Mar 14 00:18:06.605052 ntpd[1955]: Listen normally on 13 calib62384b8f23 [fe80::ecee:eeff:feee:eeee%10]:123 Mar 14 00:18:06.605101 ntpd[1955]: Listen 
normally on 14 caliebd05a362e7 [fe80::ecee:eeff:feee:eeee%11]:123 Mar 14 00:18:06.605139 ntpd[1955]: Listen normally on 15 cali19bfee13a71 [fe80::ecee:eeff:feee:eeee%12]:123 Mar 14 00:18:06.605185 ntpd[1955]: Listen normally on 16 cali397961f03fd [fe80::ecee:eeff:feee:eeee%13]:123 Mar 14 00:18:06.605222 ntpd[1955]: Listen normally on 17 cali1e2a9482a9c [fe80::ecee:eeff:feee:eeee%14]:123 Mar 14 00:18:06.698466 sshd[6182]: Accepted publickey for core from 68.220.241.50 port 39278 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:18:06.730391 sshd[6182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:18:06.758433 systemd-logind[1962]: New session 12 of user core. Mar 14 00:18:06.762723 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 14 00:18:08.407688 sshd[6182]: pam_unix(sshd:session): session closed for user core Mar 14 00:18:08.414847 systemd[1]: sshd@11-172.31.23.179:22-68.220.241.50:39278.service: Deactivated successfully. Mar 14 00:18:08.418487 systemd[1]: session-12.scope: Deactivated successfully. Mar 14 00:18:08.431640 systemd-logind[1962]: Session 12 logged out. Waiting for processes to exit. Mar 14 00:18:08.436244 systemd-logind[1962]: Removed session 12. Mar 14 00:18:08.515059 systemd[1]: Started sshd@12-172.31.23.179:22-68.220.241.50:39280.service - OpenSSH per-connection server daemon (68.220.241.50:39280). Mar 14 00:18:09.129405 sshd[6220]: Accepted publickey for core from 68.220.241.50 port 39280 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:18:09.149037 sshd[6220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:18:09.181622 systemd-logind[1962]: New session 13 of user core. Mar 14 00:18:09.189559 systemd[1]: Started session-13.scope - Session 13 of User core. 
Mar 14 00:18:09.771422 containerd[1999]: time="2026-03-14T00:18:09.770987863Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:18:09.777939 containerd[1999]: time="2026-03-14T00:18:09.775824670Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 14 00:18:09.900083 containerd[1999]: time="2026-03-14T00:18:09.899898412Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:18:09.904215 containerd[1999]: time="2026-03-14T00:18:09.904165658Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:18:09.905152 containerd[1999]: time="2026-03-14T00:18:09.905108165Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 5.374340751s" Mar 14 00:18:09.905249 containerd[1999]: time="2026-03-14T00:18:09.905158030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 14 00:18:09.925423 containerd[1999]: time="2026-03-14T00:18:09.925374428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 14 00:18:09.968928 sshd[6220]: pam_unix(sshd:session): session closed for user core Mar 14 00:18:09.986399 systemd[1]: sshd@12-172.31.23.179:22-68.220.241.50:39280.service: 
Deactivated successfully. Mar 14 00:18:09.991803 systemd[1]: session-13.scope: Deactivated successfully. Mar 14 00:18:09.994434 systemd-logind[1962]: Session 13 logged out. Waiting for processes to exit. Mar 14 00:18:09.998974 systemd-logind[1962]: Removed session 13. Mar 14 00:18:10.060984 systemd[1]: Started sshd@13-172.31.23.179:22-68.220.241.50:39288.service - OpenSSH per-connection server daemon (68.220.241.50:39288). Mar 14 00:18:10.130063 containerd[1999]: time="2026-03-14T00:18:10.129911806Z" level=info msg="CreateContainer within sandbox \"50ba5385efe210fd2b31c48d2d8fd522a7435c82058087ed3b3160c1997b873f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 14 00:18:10.148239 containerd[1999]: time="2026-03-14T00:18:10.148129784Z" level=info msg="CreateContainer within sandbox \"50ba5385efe210fd2b31c48d2d8fd522a7435c82058087ed3b3160c1997b873f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"dcacbc1cfa5b77fcd29d4d03d27c9d98a3ebe8d5e1572f35f73c43455e226422\"" Mar 14 00:18:10.155000 containerd[1999]: time="2026-03-14T00:18:10.154612802Z" level=info msg="StartContainer for \"dcacbc1cfa5b77fcd29d4d03d27c9d98a3ebe8d5e1572f35f73c43455e226422\"" Mar 14 00:18:10.248538 systemd[1]: Started cri-containerd-dcacbc1cfa5b77fcd29d4d03d27c9d98a3ebe8d5e1572f35f73c43455e226422.scope - libcontainer container dcacbc1cfa5b77fcd29d4d03d27c9d98a3ebe8d5e1572f35f73c43455e226422. Mar 14 00:18:10.345806 containerd[1999]: time="2026-03-14T00:18:10.345656727Z" level=info msg="StartContainer for \"dcacbc1cfa5b77fcd29d4d03d27c9d98a3ebe8d5e1572f35f73c43455e226422\" returns successfully" Mar 14 00:18:10.601009 sshd[6244]: Accepted publickey for core from 68.220.241.50 port 39288 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:18:10.603388 sshd[6244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:18:10.609772 systemd-logind[1962]: New session 14 of user core. 
Mar 14 00:18:10.617563 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 14 00:18:11.183190 sshd[6244]: pam_unix(sshd:session): session closed for user core Mar 14 00:18:11.195231 systemd[1]: sshd@13-172.31.23.179:22-68.220.241.50:39288.service: Deactivated successfully. Mar 14 00:18:11.200178 systemd[1]: session-14.scope: Deactivated successfully. Mar 14 00:18:11.204300 systemd-logind[1962]: Session 14 logged out. Waiting for processes to exit. Mar 14 00:18:11.206110 systemd-logind[1962]: Removed session 14. Mar 14 00:18:11.354034 kubelet[3206]: I0314 00:18:11.315523 3206 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-6b7865fd67-6tjs6" podStartSLOduration=43.635653531 podStartE2EDuration="52.306829787s" podCreationTimestamp="2026-03-14 00:17:19 +0000 UTC" firstStartedPulling="2026-03-14 00:18:01.249296548 +0000 UTC m=+62.316719751" lastFinishedPulling="2026-03-14 00:18:09.920472788 +0000 UTC m=+70.987896007" observedRunningTime="2026-03-14 00:18:11.279101871 +0000 UTC m=+72.346525093" watchObservedRunningTime="2026-03-14 00:18:11.306829787 +0000 UTC m=+72.374253011" Mar 14 00:18:11.653639 containerd[1999]: time="2026-03-14T00:18:11.653590708Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:18:11.656518 containerd[1999]: time="2026-03-14T00:18:11.656460289Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 14 00:18:11.657522 containerd[1999]: time="2026-03-14T00:18:11.657402816Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:18:11.662684 containerd[1999]: time="2026-03-14T00:18:11.662515174Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:18:11.663398 containerd[1999]: time="2026-03-14T00:18:11.663272948Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.737853077s" Mar 14 00:18:11.663505 containerd[1999]: time="2026-03-14T00:18:11.663410493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 14 00:18:11.666199 containerd[1999]: time="2026-03-14T00:18:11.665712531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 14 00:18:11.732597 containerd[1999]: time="2026-03-14T00:18:11.732508159Z" level=info msg="CreateContainer within sandbox \"1bf22cc7e7d30e7f08ba483f704c43ac68527e1913e2fd65512b01dabe8189fa\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 14 00:18:11.782511 containerd[1999]: time="2026-03-14T00:18:11.782402505Z" level=info msg="CreateContainer within sandbox \"1bf22cc7e7d30e7f08ba483f704c43ac68527e1913e2fd65512b01dabe8189fa\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4454c4b66199df80d975f59efad59681cd74f28e74c6755b51b2ac0075ee5e2b\"" Mar 14 00:18:11.793288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount264403052.mount: Deactivated successfully. 
Mar 14 00:18:11.794193 containerd[1999]: time="2026-03-14T00:18:11.793610663Z" level=info msg="StartContainer for \"4454c4b66199df80d975f59efad59681cd74f28e74c6755b51b2ac0075ee5e2b\"" Mar 14 00:18:11.863622 systemd[1]: Started cri-containerd-4454c4b66199df80d975f59efad59681cd74f28e74c6755b51b2ac0075ee5e2b.scope - libcontainer container 4454c4b66199df80d975f59efad59681cd74f28e74c6755b51b2ac0075ee5e2b. Mar 14 00:18:11.901694 containerd[1999]: time="2026-03-14T00:18:11.901512376Z" level=info msg="StartContainer for \"4454c4b66199df80d975f59efad59681cd74f28e74c6755b51b2ac0075ee5e2b\" returns successfully" Mar 14 00:18:12.042034 containerd[1999]: time="2026-03-14T00:18:12.041905532Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:18:12.044341 containerd[1999]: time="2026-03-14T00:18:12.044261895Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 14 00:18:12.047301 containerd[1999]: time="2026-03-14T00:18:12.047238469Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 381.473328ms" Mar 14 00:18:12.047301 containerd[1999]: time="2026-03-14T00:18:12.047303726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 14 00:18:12.048786 containerd[1999]: time="2026-03-14T00:18:12.048750773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 14 00:18:12.056431 containerd[1999]: time="2026-03-14T00:18:12.056385334Z" level=info msg="CreateContainer within 
sandbox \"16453f3dca06007030e018f02e19ec51262eb6aa15848bbf179e39a8bcf8c3e6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 14 00:18:12.085951 containerd[1999]: time="2026-03-14T00:18:12.085904153Z" level=info msg="CreateContainer within sandbox \"16453f3dca06007030e018f02e19ec51262eb6aa15848bbf179e39a8bcf8c3e6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3c37f563ba9aba79dc4485e7e89b879e555ab1ee250cc23b27f781f777b679b6\"" Mar 14 00:18:12.087540 containerd[1999]: time="2026-03-14T00:18:12.086913885Z" level=info msg="StartContainer for \"3c37f563ba9aba79dc4485e7e89b879e555ab1ee250cc23b27f781f777b679b6\"" Mar 14 00:18:12.137624 systemd[1]: Started cri-containerd-3c37f563ba9aba79dc4485e7e89b879e555ab1ee250cc23b27f781f777b679b6.scope - libcontainer container 3c37f563ba9aba79dc4485e7e89b879e555ab1ee250cc23b27f781f777b679b6. Mar 14 00:18:12.206791 containerd[1999]: time="2026-03-14T00:18:12.206747679Z" level=info msg="StartContainer for \"3c37f563ba9aba79dc4485e7e89b879e555ab1ee250cc23b27f781f777b679b6\" returns successfully" Mar 14 00:18:13.770367 kubelet[3206]: I0314 00:18:13.769941 3206 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-6b7865fd67-f9cpm" podStartSLOduration=45.866164928 podStartE2EDuration="54.769738421s" podCreationTimestamp="2026-03-14 00:17:19 +0000 UTC" firstStartedPulling="2026-03-14 00:18:03.144927801 +0000 UTC m=+64.212351017" lastFinishedPulling="2026-03-14 00:18:12.048501297 +0000 UTC m=+73.115924510" observedRunningTime="2026-03-14 00:18:13.149293389 +0000 UTC m=+74.216716612" watchObservedRunningTime="2026-03-14 00:18:13.769738421 +0000 UTC m=+74.837161642" Mar 14 00:18:16.176218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3488025147.mount: Deactivated successfully. 
Mar 14 00:18:16.313748 systemd[1]: Started sshd@14-172.31.23.179:22-68.220.241.50:42288.service - OpenSSH per-connection server daemon (68.220.241.50:42288). Mar 14 00:18:17.058725 containerd[1999]: time="2026-03-14T00:18:17.058668045Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:18:17.129182 containerd[1999]: time="2026-03-14T00:18:17.080922242Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 14 00:18:17.155488 sshd[6411]: Accepted publickey for core from 68.220.241.50 port 42288 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:18:17.160249 sshd[6411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:18:17.175393 systemd-logind[1962]: New session 15 of user core. Mar 14 00:18:17.181531 systemd[1]: Started session-15.scope - Session 15 of User core. 
Mar 14 00:18:17.186021 containerd[1999]: time="2026-03-14T00:18:17.185982539Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:18:17.192536 containerd[1999]: time="2026-03-14T00:18:17.192156741Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:18:17.236280 containerd[1999]: time="2026-03-14T00:18:17.235868897Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 5.187056535s" Mar 14 00:18:17.236280 containerd[1999]: time="2026-03-14T00:18:17.235952324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 14 00:18:17.363151 containerd[1999]: time="2026-03-14T00:18:17.363109298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 14 00:18:17.517784 containerd[1999]: time="2026-03-14T00:18:17.517061889Z" level=info msg="CreateContainer within sandbox \"5c71ec5b09bf28edfecb155aed26422a902e1994eab093c3dd9eef45dd56ae51\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 14 00:18:17.849536 containerd[1999]: time="2026-03-14T00:18:17.849388492Z" level=info msg="CreateContainer within sandbox \"5c71ec5b09bf28edfecb155aed26422a902e1994eab093c3dd9eef45dd56ae51\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"efa15e51fce983cf7db35f4491a0d243eb7a6a769a15cf3a6c79c69dffd23a00\"" Mar 
14 00:18:17.886701 containerd[1999]: time="2026-03-14T00:18:17.886614578Z" level=info msg="StartContainer for \"efa15e51fce983cf7db35f4491a0d243eb7a6a769a15cf3a6c79c69dffd23a00\"" Mar 14 00:18:18.347154 systemd[1]: run-containerd-runc-k8s.io-efa15e51fce983cf7db35f4491a0d243eb7a6a769a15cf3a6c79c69dffd23a00-runc.qLohn2.mount: Deactivated successfully. Mar 14 00:18:18.364621 systemd[1]: Started cri-containerd-efa15e51fce983cf7db35f4491a0d243eb7a6a769a15cf3a6c79c69dffd23a00.scope - libcontainer container efa15e51fce983cf7db35f4491a0d243eb7a6a769a15cf3a6c79c69dffd23a00. Mar 14 00:18:18.522077 containerd[1999]: time="2026-03-14T00:18:18.522029592Z" level=info msg="StartContainer for \"efa15e51fce983cf7db35f4491a0d243eb7a6a769a15cf3a6c79c69dffd23a00\" returns successfully" Mar 14 00:18:19.248769 sshd[6411]: pam_unix(sshd:session): session closed for user core Mar 14 00:18:19.267546 systemd[1]: sshd@14-172.31.23.179:22-68.220.241.50:42288.service: Deactivated successfully. Mar 14 00:18:19.274782 systemd[1]: session-15.scope: Deactivated successfully. Mar 14 00:18:19.285129 systemd-logind[1962]: Session 15 logged out. Waiting for processes to exit. Mar 14 00:18:19.292151 systemd-logind[1962]: Removed session 15. Mar 14 00:18:19.345698 systemd[1]: Started sshd@15-172.31.23.179:22-68.220.241.50:42296.service - OpenSSH per-connection server daemon (68.220.241.50:42296). 
Mar 14 00:18:19.907359 containerd[1999]: time="2026-03-14T00:18:19.906199242Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:18:19.911100 containerd[1999]: time="2026-03-14T00:18:19.911022722Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 14 00:18:19.945723 containerd[1999]: time="2026-03-14T00:18:19.945686380Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:18:19.976385 containerd[1999]: time="2026-03-14T00:18:19.976340243Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:18:19.991353 containerd[1999]: time="2026-03-14T00:18:19.991016499Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 2.627518645s" Mar 14 00:18:19.991353 containerd[1999]: time="2026-03-14T00:18:19.991074442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 14 00:18:19.997987 sshd[6472]: Accepted publickey for core from 68.220.241.50 port 42296 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:18:20.016008 sshd[6472]: pam_unix(sshd:session): session opened for user 
core(uid=500) by core(uid=0) Mar 14 00:18:20.030908 systemd[1]: run-containerd-runc-k8s.io-efa15e51fce983cf7db35f4491a0d243eb7a6a769a15cf3a6c79c69dffd23a00-runc.VBTSaI.mount: Deactivated successfully. Mar 14 00:18:20.051435 containerd[1999]: time="2026-03-14T00:18:20.050800551Z" level=info msg="CreateContainer within sandbox \"1bf22cc7e7d30e7f08ba483f704c43ac68527e1913e2fd65512b01dabe8189fa\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 14 00:18:20.065674 systemd-logind[1962]: New session 16 of user core. Mar 14 00:18:20.070600 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 14 00:18:20.138981 containerd[1999]: time="2026-03-14T00:18:20.137575631Z" level=info msg="CreateContainer within sandbox \"1bf22cc7e7d30e7f08ba483f704c43ac68527e1913e2fd65512b01dabe8189fa\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"efe6f3c96696bf77a5e22ad251093db5d9d85632591cd43faa0dc66c3300138e\"" Mar 14 00:18:20.143698 containerd[1999]: time="2026-03-14T00:18:20.143655949Z" level=info msg="StartContainer for \"efe6f3c96696bf77a5e22ad251093db5d9d85632591cd43faa0dc66c3300138e\"" Mar 14 00:18:20.151369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount610097593.mount: Deactivated successfully. Mar 14 00:18:20.290714 systemd[1]: Started cri-containerd-efe6f3c96696bf77a5e22ad251093db5d9d85632591cd43faa0dc66c3300138e.scope - libcontainer container efe6f3c96696bf77a5e22ad251093db5d9d85632591cd43faa0dc66c3300138e. 
Mar 14 00:18:20.388094 containerd[1999]: time="2026-03-14T00:18:20.388048266Z" level=info msg="StartContainer for \"efe6f3c96696bf77a5e22ad251093db5d9d85632591cd43faa0dc66c3300138e\" returns successfully" Mar 14 00:18:21.169353 kubelet[3206]: I0314 00:18:21.051479 3206 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-rwsn2" podStartSLOduration=44.058634618 podStartE2EDuration="1m0.976827916s" podCreationTimestamp="2026-03-14 00:17:20 +0000 UTC" firstStartedPulling="2026-03-14 00:18:03.077571044 +0000 UTC m=+64.144994257" lastFinishedPulling="2026-03-14 00:18:19.995764343 +0000 UTC m=+81.063187555" observedRunningTime="2026-03-14 00:18:20.92749941 +0000 UTC m=+81.994922643" watchObservedRunningTime="2026-03-14 00:18:20.976827916 +0000 UTC m=+82.044251137" Mar 14 00:18:21.169353 kubelet[3206]: I0314 00:18:21.168807 3206 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-b6wbf" podStartSLOduration=48.641367337 podStartE2EDuration="1m2.168787726s" podCreationTimestamp="2026-03-14 00:17:19 +0000 UTC" firstStartedPulling="2026-03-14 00:18:03.814092429 +0000 UTC m=+64.881515646" lastFinishedPulling="2026-03-14 00:18:17.341512813 +0000 UTC m=+78.408936035" observedRunningTime="2026-03-14 00:18:20.022613635 +0000 UTC m=+81.090036855" watchObservedRunningTime="2026-03-14 00:18:21.168787726 +0000 UTC m=+82.236210950" Mar 14 00:18:21.637689 kubelet[3206]: I0314 00:18:21.635438 3206 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 14 00:18:21.643103 kubelet[3206]: I0314 00:18:21.642526 3206 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 14 00:18:24.111558 sshd[6472]: pam_unix(sshd:session): session closed for user core Mar 14 00:18:24.118043 
systemd[1]: sshd@15-172.31.23.179:22-68.220.241.50:42296.service: Deactivated successfully. Mar 14 00:18:24.120681 systemd[1]: session-16.scope: Deactivated successfully. Mar 14 00:18:24.121824 systemd-logind[1962]: Session 16 logged out. Waiting for processes to exit. Mar 14 00:18:24.123707 systemd-logind[1962]: Removed session 16. Mar 14 00:18:24.199829 systemd[1]: Started sshd@16-172.31.23.179:22-68.220.241.50:56826.service - OpenSSH per-connection server daemon (68.220.241.50:56826). Mar 14 00:18:24.761793 sshd[6624]: Accepted publickey for core from 68.220.241.50 port 56826 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:18:24.764105 sshd[6624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:18:24.770757 systemd-logind[1962]: New session 17 of user core. Mar 14 00:18:24.776603 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 14 00:18:26.392094 sshd[6624]: pam_unix(sshd:session): session closed for user core Mar 14 00:18:26.406134 systemd[1]: sshd@16-172.31.23.179:22-68.220.241.50:56826.service: Deactivated successfully. Mar 14 00:18:26.411534 systemd[1]: session-17.scope: Deactivated successfully. Mar 14 00:18:26.413888 systemd-logind[1962]: Session 17 logged out. Waiting for processes to exit. Mar 14 00:18:26.416286 systemd-logind[1962]: Removed session 17. Mar 14 00:18:26.471753 systemd[1]: Started sshd@17-172.31.23.179:22-68.220.241.50:56840.service - OpenSSH per-connection server daemon (68.220.241.50:56840). Mar 14 00:18:27.017224 sshd[6650]: Accepted publickey for core from 68.220.241.50 port 56840 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:18:27.019772 sshd[6650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:18:27.028487 systemd-logind[1962]: New session 18 of user core. Mar 14 00:18:27.032614 systemd[1]: Started session-18.scope - Session 18 of User core. 
Mar 14 00:18:28.523763 sshd[6650]: pam_unix(sshd:session): session closed for user core Mar 14 00:18:28.528167 systemd[1]: sshd@17-172.31.23.179:22-68.220.241.50:56840.service: Deactivated successfully. Mar 14 00:18:28.531552 systemd[1]: session-18.scope: Deactivated successfully. Mar 14 00:18:28.533439 systemd-logind[1962]: Session 18 logged out. Waiting for processes to exit. Mar 14 00:18:28.535063 systemd-logind[1962]: Removed session 18. Mar 14 00:18:28.624986 systemd[1]: Started sshd@18-172.31.23.179:22-68.220.241.50:56844.service - OpenSSH per-connection server daemon (68.220.241.50:56844). Mar 14 00:18:29.181766 sshd[6663]: Accepted publickey for core from 68.220.241.50 port 56844 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:18:29.182491 sshd[6663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:18:29.188478 systemd-logind[1962]: New session 19 of user core. Mar 14 00:18:29.190577 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 14 00:18:29.782590 sshd[6663]: pam_unix(sshd:session): session closed for user core Mar 14 00:18:29.786744 systemd[1]: sshd@18-172.31.23.179:22-68.220.241.50:56844.service: Deactivated successfully. Mar 14 00:18:29.789219 systemd[1]: session-19.scope: Deactivated successfully. Mar 14 00:18:29.795676 systemd-logind[1962]: Session 19 logged out. Waiting for processes to exit. Mar 14 00:18:29.797397 systemd-logind[1962]: Removed session 19. Mar 14 00:18:34.872702 systemd[1]: Started sshd@19-172.31.23.179:22-68.220.241.50:57330.service - OpenSSH per-connection server daemon (68.220.241.50:57330). Mar 14 00:18:35.411894 sshd[6682]: Accepted publickey for core from 68.220.241.50 port 57330 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:18:35.413575 sshd[6682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:18:35.421093 systemd-logind[1962]: New session 20 of user core. 
Mar 14 00:18:35.426550 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 14 00:18:35.911784 systemd[1]: run-containerd-runc-k8s.io-7a73635a16020acc9afebdf03ec6519c8a4d7be4d2157abd7ae60fa51779ada1-runc.YP0kGx.mount: Deactivated successfully. Mar 14 00:18:36.469717 sshd[6682]: pam_unix(sshd:session): session closed for user core Mar 14 00:18:36.475997 systemd[1]: sshd@19-172.31.23.179:22-68.220.241.50:57330.service: Deactivated successfully. Mar 14 00:18:36.478541 systemd[1]: session-20.scope: Deactivated successfully. Mar 14 00:18:36.480458 systemd-logind[1962]: Session 20 logged out. Waiting for processes to exit. Mar 14 00:18:36.482156 systemd-logind[1962]: Removed session 20. Mar 14 00:18:38.613351 systemd[1]: run-containerd-runc-k8s.io-efa15e51fce983cf7db35f4491a0d243eb7a6a769a15cf3a6c79c69dffd23a00-runc.Fw4WYb.mount: Deactivated successfully. Mar 14 00:18:41.562216 systemd[1]: Started sshd@20-172.31.23.179:22-68.220.241.50:57336.service - OpenSSH per-connection server daemon (68.220.241.50:57336). Mar 14 00:18:42.167680 sshd[6750]: Accepted publickey for core from 68.220.241.50 port 57336 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:18:42.175191 sshd[6750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:18:42.184918 systemd-logind[1962]: New session 21 of user core. Mar 14 00:18:42.191638 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 14 00:18:43.460628 sshd[6750]: pam_unix(sshd:session): session closed for user core Mar 14 00:18:43.467003 systemd-logind[1962]: Session 21 logged out. Waiting for processes to exit. Mar 14 00:18:43.468885 systemd[1]: sshd@20-172.31.23.179:22-68.220.241.50:57336.service: Deactivated successfully. Mar 14 00:18:43.473023 systemd[1]: session-21.scope: Deactivated successfully. Mar 14 00:18:43.478262 systemd-logind[1962]: Removed session 21. 
Mar 14 00:18:48.550669 systemd[1]: Started sshd@21-172.31.23.179:22-68.220.241.50:56720.service - OpenSSH per-connection server daemon (68.220.241.50:56720). Mar 14 00:18:49.049621 sshd[6768]: Accepted publickey for core from 68.220.241.50 port 56720 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:18:49.051269 sshd[6768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:18:49.057030 systemd-logind[1962]: New session 22 of user core. Mar 14 00:18:49.063619 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 14 00:18:49.493496 sshd[6768]: pam_unix(sshd:session): session closed for user core Mar 14 00:18:49.498195 systemd[1]: sshd@21-172.31.23.179:22-68.220.241.50:56720.service: Deactivated successfully. Mar 14 00:18:49.500840 systemd[1]: session-22.scope: Deactivated successfully. Mar 14 00:18:49.501684 systemd-logind[1962]: Session 22 logged out. Waiting for processes to exit. Mar 14 00:18:49.503181 systemd-logind[1962]: Removed session 22. Mar 14 00:18:54.581717 systemd[1]: Started sshd@22-172.31.23.179:22-68.220.241.50:38520.service - OpenSSH per-connection server daemon (68.220.241.50:38520). Mar 14 00:18:55.135443 sshd[6825]: Accepted publickey for core from 68.220.241.50 port 38520 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:18:55.137819 sshd[6825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:18:55.143392 systemd-logind[1962]: New session 23 of user core. Mar 14 00:18:55.151635 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 14 00:18:55.832782 sshd[6825]: pam_unix(sshd:session): session closed for user core Mar 14 00:18:55.837489 systemd-logind[1962]: Session 23 logged out. Waiting for processes to exit. Mar 14 00:18:55.838631 systemd[1]: sshd@22-172.31.23.179:22-68.220.241.50:38520.service: Deactivated successfully. Mar 14 00:18:55.841135 systemd[1]: session-23.scope: Deactivated successfully. 
Mar 14 00:18:55.842691 systemd-logind[1962]: Removed session 23. Mar 14 00:19:01.213218 containerd[1999]: time="2026-03-14T00:19:01.197205337Z" level=info msg="StopPodSandbox for \"165bfd711e9d089ce82d9f56eb2fe5a1a346629ea9cd475936992f19007a8e66\"" Mar 14 00:19:10.972743 systemd[1]: cri-containerd-b9a020455b29e4c7ab8d4af4306bb31bc47279207d524e8c3d21a2b8ef6f38ad.scope: Deactivated successfully. Mar 14 00:19:10.973044 systemd[1]: cri-containerd-b9a020455b29e4c7ab8d4af4306bb31bc47279207d524e8c3d21a2b8ef6f38ad.scope: Consumed 9.569s CPU time. Mar 14 00:19:11.258588 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9a020455b29e4c7ab8d4af4306bb31bc47279207d524e8c3d21a2b8ef6f38ad-rootfs.mount: Deactivated successfully. Mar 14 00:19:11.310212 systemd[1]: cri-containerd-57f4cfac775e4ec40dc7a531242be5cdfe2d2480b0bb9bca857a4ea9d685e459.scope: Deactivated successfully. Mar 14 00:19:11.310994 systemd[1]: cri-containerd-57f4cfac775e4ec40dc7a531242be5cdfe2d2480b0bb9bca857a4ea9d685e459.scope: Consumed 3.254s CPU time, 16.2M memory peak, 0B memory swap peak. Mar 14 00:19:11.368467 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57f4cfac775e4ec40dc7a531242be5cdfe2d2480b0bb9bca857a4ea9d685e459-rootfs.mount: Deactivated successfully. 
Mar 14 00:19:11.387531 containerd[1999]: time="2026-03-14T00:19:11.317247528Z" level=info msg="shim disconnected" id=b9a020455b29e4c7ab8d4af4306bb31bc47279207d524e8c3d21a2b8ef6f38ad namespace=k8s.io Mar 14 00:19:11.388124 containerd[1999]: time="2026-03-14T00:19:11.387546287Z" level=warning msg="cleaning up after shim disconnected" id=b9a020455b29e4c7ab8d4af4306bb31bc47279207d524e8c3d21a2b8ef6f38ad namespace=k8s.io Mar 14 00:19:11.388124 containerd[1999]: time="2026-03-14T00:19:11.362767848Z" level=info msg="shim disconnected" id=57f4cfac775e4ec40dc7a531242be5cdfe2d2480b0bb9bca857a4ea9d685e459 namespace=k8s.io Mar 14 00:19:11.388124 containerd[1999]: time="2026-03-14T00:19:11.387832784Z" level=warning msg="cleaning up after shim disconnected" id=57f4cfac775e4ec40dc7a531242be5cdfe2d2480b0bb9bca857a4ea9d685e459 namespace=k8s.io Mar 14 00:19:11.393720 containerd[1999]: time="2026-03-14T00:19:11.392586434Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:19:11.393877 containerd[1999]: time="2026-03-14T00:19:11.393822791Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:19:11.508181 containerd[1999]: time="2026-03-14T00:19:11.508085131Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:19:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 14 00:19:12.013889 kubelet[3206]: E0314 00:19:12.005932 3206 controller.go:251] "Failed to update lease" err="Put \"https://172.31.23.179:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-179?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 14 00:19:12.201042 kubelet[3206]: I0314 00:19:12.200741 3206 scope.go:122] "RemoveContainer" containerID="57f4cfac775e4ec40dc7a531242be5cdfe2d2480b0bb9bca857a4ea9d685e459" Mar 14 00:19:12.201042 kubelet[3206]: I0314 00:19:12.200956 3206 scope.go:122] 
"RemoveContainer" containerID="b9a020455b29e4c7ab8d4af4306bb31bc47279207d524e8c3d21a2b8ef6f38ad" Mar 14 00:19:12.349243 containerd[1999]: time="2026-03-14T00:19:12.349182949Z" level=info msg="CreateContainer within sandbox \"d642913e94cbd57bafb6e58c05439c8a821654426709bccd69f3ca437aee1bd7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Mar 14 00:19:12.389292 containerd[1999]: time="2026-03-14T00:19:12.389239430Z" level=info msg="CreateContainer within sandbox \"590d29b6048b1ec1f8648d5bdbbbb6263c67aa3adf761e676ea7ae5a729eaca5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Mar 14 00:19:12.520198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2964110203.mount: Deactivated successfully. Mar 14 00:19:12.533447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2714545674.mount: Deactivated successfully. Mar 14 00:19:12.534120 containerd[1999]: time="2026-03-14T00:19:12.533845432Z" level=info msg="CreateContainer within sandbox \"d642913e94cbd57bafb6e58c05439c8a821654426709bccd69f3ca437aee1bd7\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"a7835fda452557687ded5fae6e7780b8721fc1292903056da53a5fe61b8d8be8\"" Mar 14 00:19:12.538784 containerd[1999]: time="2026-03-14T00:19:12.538750004Z" level=info msg="StartContainer for \"a7835fda452557687ded5fae6e7780b8721fc1292903056da53a5fe61b8d8be8\"" Mar 14 00:19:12.544512 containerd[1999]: time="2026-03-14T00:19:12.544235217Z" level=info msg="CreateContainer within sandbox \"590d29b6048b1ec1f8648d5bdbbbb6263c67aa3adf761e676ea7ae5a729eaca5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"b96b82eaa58f685e2c11a906733f256da820e16317e529501b2dda5ecc4ce2a3\"" Mar 14 00:19:12.545566 containerd[1999]: time="2026-03-14T00:19:12.545534338Z" level=info msg="StartContainer for \"b96b82eaa58f685e2c11a906733f256da820e16317e529501b2dda5ecc4ce2a3\"" Mar 14 00:19:12.603875 systemd[1]: Started 
cri-containerd-b96b82eaa58f685e2c11a906733f256da820e16317e529501b2dda5ecc4ce2a3.scope - libcontainer container b96b82eaa58f685e2c11a906733f256da820e16317e529501b2dda5ecc4ce2a3. Mar 14 00:19:12.627799 systemd[1]: Started cri-containerd-a7835fda452557687ded5fae6e7780b8721fc1292903056da53a5fe61b8d8be8.scope - libcontainer container a7835fda452557687ded5fae6e7780b8721fc1292903056da53a5fe61b8d8be8. Mar 14 00:19:12.717776 containerd[1999]: time="2026-03-14T00:19:12.717735297Z" level=info msg="StartContainer for \"b96b82eaa58f685e2c11a906733f256da820e16317e529501b2dda5ecc4ce2a3\" returns successfully" Mar 14 00:19:12.718628 containerd[1999]: time="2026-03-14T00:19:12.718519728Z" level=info msg="StartContainer for \"a7835fda452557687ded5fae6e7780b8721fc1292903056da53a5fe61b8d8be8\" returns successfully" Mar 14 00:19:15.636646 systemd[1]: cri-containerd-1db2410a2fb3e53745a11a0aa05598b5506389ed512d2e733d4e15bfcdfb7d16.scope: Deactivated successfully. Mar 14 00:19:15.638646 systemd[1]: cri-containerd-1db2410a2fb3e53745a11a0aa05598b5506389ed512d2e733d4e15bfcdfb7d16.scope: Consumed 1.448s CPU time, 15.8M memory peak, 0B memory swap peak. Mar 14 00:19:15.685296 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1db2410a2fb3e53745a11a0aa05598b5506389ed512d2e733d4e15bfcdfb7d16-rootfs.mount: Deactivated successfully. 
Mar 14 00:19:15.691041 containerd[1999]: time="2026-03-14T00:19:15.681627456Z" level=info msg="shim disconnected" id=1db2410a2fb3e53745a11a0aa05598b5506389ed512d2e733d4e15bfcdfb7d16 namespace=k8s.io Mar 14 00:19:15.691041 containerd[1999]: time="2026-03-14T00:19:15.681711363Z" level=warning msg="cleaning up after shim disconnected" id=1db2410a2fb3e53745a11a0aa05598b5506389ed512d2e733d4e15bfcdfb7d16 namespace=k8s.io Mar 14 00:19:15.691041 containerd[1999]: time="2026-03-14T00:19:15.681725408Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:19:16.205839 kubelet[3206]: I0314 00:19:16.205800 3206 scope.go:122] "RemoveContainer" containerID="1db2410a2fb3e53745a11a0aa05598b5506389ed512d2e733d4e15bfcdfb7d16" Mar 14 00:19:16.208441 containerd[1999]: time="2026-03-14T00:19:16.208401390Z" level=info msg="CreateContainer within sandbox \"76daf3a6b6f3ece3cce01351643d4f9cca6d241836127dfaf216f449bcc9a149\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Mar 14 00:19:16.237656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1046639917.mount: Deactivated successfully. Mar 14 00:19:16.243237 containerd[1999]: time="2026-03-14T00:19:16.243184488Z" level=info msg="CreateContainer within sandbox \"76daf3a6b6f3ece3cce01351643d4f9cca6d241836127dfaf216f449bcc9a149\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"0826a5be98f224e2ebca76257568c6f9617e1e606d27e5d849ada86d5d3441a8\"" Mar 14 00:19:16.244091 containerd[1999]: time="2026-03-14T00:19:16.244057687Z" level=info msg="StartContainer for \"0826a5be98f224e2ebca76257568c6f9617e1e606d27e5d849ada86d5d3441a8\"" Mar 14 00:19:16.294530 systemd[1]: Started cri-containerd-0826a5be98f224e2ebca76257568c6f9617e1e606d27e5d849ada86d5d3441a8.scope - libcontainer container 0826a5be98f224e2ebca76257568c6f9617e1e606d27e5d849ada86d5d3441a8. 
Mar 14 00:19:16.354872 containerd[1999]: time="2026-03-14T00:19:16.354663106Z" level=info msg="StartContainer for \"0826a5be98f224e2ebca76257568c6f9617e1e606d27e5d849ada86d5d3441a8\" returns successfully"
Mar 14 00:19:16.684463 systemd[1]: run-containerd-runc-k8s.io-0826a5be98f224e2ebca76257568c6f9617e1e606d27e5d849ada86d5d3441a8-runc.qN7yDV.mount: Deactivated successfully.
Mar 14 00:19:22.015291 kubelet[3206]: E0314 00:19:22.014721 3206 controller.go:251] "Failed to update lease" err="Put \"https://172.31.23.179:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-179?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 14 00:19:26.611419 systemd[1]: cri-containerd-a7835fda452557687ded5fae6e7780b8721fc1292903056da53a5fe61b8d8be8.scope: Deactivated successfully.
Mar 14 00:19:26.639801 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7835fda452557687ded5fae6e7780b8721fc1292903056da53a5fe61b8d8be8-rootfs.mount: Deactivated successfully.
Mar 14 00:19:26.651982 containerd[1999]: time="2026-03-14T00:19:26.651903009Z" level=info msg="shim disconnected" id=a7835fda452557687ded5fae6e7780b8721fc1292903056da53a5fe61b8d8be8 namespace=k8s.io
Mar 14 00:19:26.651982 containerd[1999]: time="2026-03-14T00:19:26.651976448Z" level=warning msg="cleaning up after shim disconnected" id=a7835fda452557687ded5fae6e7780b8721fc1292903056da53a5fe61b8d8be8 namespace=k8s.io
Mar 14 00:19:26.651982 containerd[1999]: time="2026-03-14T00:19:26.651988005Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:19:27.238172 kubelet[3206]: I0314 00:19:27.238136 3206 scope.go:122] "RemoveContainer" containerID="a7835fda452557687ded5fae6e7780b8721fc1292903056da53a5fe61b8d8be8"
Mar 14 00:19:27.247401 kubelet[3206]: E0314 00:19:27.247316 3206 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-6cf4cccc57-kd78k_tigera-operator(27977bf9-2de0-45bb-a582-e37a3728480b)\"" pod="tigera-operator/tigera-operator-6cf4cccc57-kd78k" podUID="27977bf9-2de0-45bb-a582-e37a3728480b"
Mar 14 00:19:27.258409 kubelet[3206]: I0314 00:19:27.258362 3206 scope.go:122] "RemoveContainer" containerID="b9a020455b29e4c7ab8d4af4306bb31bc47279207d524e8c3d21a2b8ef6f38ad"
Mar 14 00:19:27.311338 containerd[1999]: time="2026-03-14T00:19:27.311278339Z" level=info msg="RemoveContainer for \"b9a020455b29e4c7ab8d4af4306bb31bc47279207d524e8c3d21a2b8ef6f38ad\""
Mar 14 00:19:27.326800 containerd[1999]: time="2026-03-14T00:19:27.326743128Z" level=info msg="RemoveContainer for \"b9a020455b29e4c7ab8d4af4306bb31bc47279207d524e8c3d21a2b8ef6f38ad\" returns successfully"
Mar 14 00:19:32.035035 kubelet[3206]: E0314 00:19:32.034542 3206 controller.go:251] "Failed to update lease" err="Put \"https://172.31.23.179:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-179?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"