Mar 7 01:15:03.503789 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 6 22:58:19 -00 2026
Mar 7 01:15:03.503832 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:15:03.503855 kernel: BIOS-provided physical RAM map:
Mar 7 01:15:03.503868 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 7 01:15:03.503880 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Mar 7 01:15:03.503893 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Mar 7 01:15:03.503910 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Mar 7 01:15:03.503922 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Mar 7 01:15:03.503935 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Mar 7 01:15:03.503951 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Mar 7 01:15:03.503964 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Mar 7 01:15:03.503977 kernel: NX (Execute Disable) protection: active
Mar 7 01:15:03.503990 kernel: APIC: Static calls initialized
Mar 7 01:15:03.504003 kernel: efi: EFI v2.7 by EDK II
Mar 7 01:15:03.504020 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77015518
Mar 7 01:15:03.504038 kernel: SMBIOS 2.7 present.
Mar 7 01:15:03.504051 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Mar 7 01:15:03.504066 kernel: Hypervisor detected: KVM
Mar 7 01:15:03.504080 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 7 01:15:03.504094 kernel: kvm-clock: using sched offset of 3851905005 cycles
Mar 7 01:15:03.504110 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 7 01:15:03.504125 kernel: tsc: Detected 2499.998 MHz processor
Mar 7 01:15:03.504139 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 7 01:15:03.504155 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 7 01:15:03.504169 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Mar 7 01:15:03.504187 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 7 01:15:03.504201 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 7 01:15:03.504215 kernel: Using GB pages for direct mapping
Mar 7 01:15:03.504230 kernel: Secure boot disabled
Mar 7 01:15:03.504245 kernel: ACPI: Early table checksum verification disabled
Mar 7 01:15:03.504259 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Mar 7 01:15:03.504274 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Mar 7 01:15:03.504289 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Mar 7 01:15:03.504303 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Mar 7 01:15:03.504321 kernel: ACPI: FACS 0x00000000789D0000 000040
Mar 7 01:15:03.504336 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Mar 7 01:15:03.504351 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Mar 7 01:15:03.504365 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Mar 7 01:15:03.504380 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Mar 7 01:15:03.504395 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Mar 7 01:15:03.504416 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Mar 7 01:15:03.504434 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Mar 7 01:15:03.504449 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Mar 7 01:15:03.504465 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Mar 7 01:15:03.504481 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Mar 7 01:15:03.504497 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Mar 7 01:15:03.504512 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Mar 7 01:15:03.504528 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Mar 7 01:15:03.504547 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Mar 7 01:15:03.504562 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Mar 7 01:15:03.504578 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Mar 7 01:15:03.504593 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Mar 7 01:15:03.504609 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Mar 7 01:15:03.504624 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Mar 7 01:15:03.504639 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 7 01:15:03.504655 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 7 01:15:03.504671 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Mar 7 01:15:03.504689 kernel: NUMA: Initialized distance table, cnt=1
Mar 7 01:15:03.504704 kernel: NODE_DATA(0) allocated [mem 0x7a8f0000-0x7a8f5fff]
Mar 7 01:15:03.504720 kernel: Zone ranges:
Mar 7 01:15:03.506786 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 7 01:15:03.506809 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Mar 7 01:15:03.506826 kernel: Normal empty
Mar 7 01:15:03.506842 kernel: Movable zone start for each node
Mar 7 01:15:03.506858 kernel: Early memory node ranges
Mar 7 01:15:03.506873 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 7 01:15:03.506895 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Mar 7 01:15:03.506911 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Mar 7 01:15:03.506926 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Mar 7 01:15:03.506941 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 7 01:15:03.506957 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 7 01:15:03.506972 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Mar 7 01:15:03.506990 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Mar 7 01:15:03.507007 kernel: ACPI: PM-Timer IO Port: 0xb008
Mar 7 01:15:03.507025 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 7 01:15:03.507043 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Mar 7 01:15:03.507065 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 7 01:15:03.507082 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 7 01:15:03.507099 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 7 01:15:03.507117 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 7 01:15:03.507135 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 7 01:15:03.507152 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 7 01:15:03.507170 kernel: TSC deadline timer available
Mar 7 01:15:03.507188 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 7 01:15:03.507205 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 7 01:15:03.507226 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Mar 7 01:15:03.507244 kernel: Booting paravirtualized kernel on KVM
Mar 7 01:15:03.507262 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 7 01:15:03.507280 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 7 01:15:03.507297 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Mar 7 01:15:03.507316 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Mar 7 01:15:03.507333 kernel: pcpu-alloc: [0] 0 1
Mar 7 01:15:03.507350 kernel: kvm-guest: PV spinlocks enabled
Mar 7 01:15:03.507368 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 7 01:15:03.507392 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:15:03.507410 kernel: random: crng init done
Mar 7 01:15:03.507428 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 7 01:15:03.507445 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 7 01:15:03.507462 kernel: Fallback order for Node 0: 0
Mar 7 01:15:03.507480 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Mar 7 01:15:03.507497 kernel: Policy zone: DMA32
Mar 7 01:15:03.507514 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 7 01:15:03.507536 kernel: Memory: 1874628K/2037804K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 162916K reserved, 0K cma-reserved)
Mar 7 01:15:03.507554 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 7 01:15:03.507571 kernel: Kernel/User page tables isolation: enabled
Mar 7 01:15:03.507589 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 7 01:15:03.507606 kernel: ftrace: allocated 149 pages with 4 groups
Mar 7 01:15:03.507623 kernel: Dynamic Preempt: voluntary
Mar 7 01:15:03.507662 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 7 01:15:03.507681 kernel: rcu: RCU event tracing is enabled.
Mar 7 01:15:03.507699 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 7 01:15:03.507721 kernel: Trampoline variant of Tasks RCU enabled.
Mar 7 01:15:03.507752 kernel: Rude variant of Tasks RCU enabled.
Mar 7 01:15:03.507764 kernel: Tracing variant of Tasks RCU enabled.
Mar 7 01:15:03.507777 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 7 01:15:03.507788 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 7 01:15:03.507800 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 7 01:15:03.507813 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 7 01:15:03.507840 kernel: Console: colour dummy device 80x25
Mar 7 01:15:03.507855 kernel: printk: console [tty0] enabled
Mar 7 01:15:03.507871 kernel: printk: console [ttyS0] enabled
Mar 7 01:15:03.507886 kernel: ACPI: Core revision 20230628
Mar 7 01:15:03.507902 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Mar 7 01:15:03.507921 kernel: APIC: Switch to symmetric I/O mode setup
Mar 7 01:15:03.507937 kernel: x2apic enabled
Mar 7 01:15:03.507953 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 7 01:15:03.507969 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Mar 7 01:15:03.507985 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Mar 7 01:15:03.508005 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Mar 7 01:15:03.508021 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Mar 7 01:15:03.508036 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 7 01:15:03.508051 kernel: Spectre V2 : Mitigation: Retpolines
Mar 7 01:15:03.508066 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 7 01:15:03.508082 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Mar 7 01:15:03.508098 kernel: RETBleed: Vulnerable
Mar 7 01:15:03.508113 kernel: Speculative Store Bypass: Vulnerable
Mar 7 01:15:03.508129 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 7 01:15:03.508144 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 7 01:15:03.508163 kernel: GDS: Unknown: Dependent on hypervisor status
Mar 7 01:15:03.508178 kernel: active return thunk: its_return_thunk
Mar 7 01:15:03.508193 kernel: ITS: Mitigation: Aligned branch/return thunks
Mar 7 01:15:03.508208 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 7 01:15:03.508224 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 7 01:15:03.508239 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 7 01:15:03.508255 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Mar 7 01:15:03.508270 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Mar 7 01:15:03.508285 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Mar 7 01:15:03.508300 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Mar 7 01:15:03.508316 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Mar 7 01:15:03.508335 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Mar 7 01:15:03.508349 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 7 01:15:03.508365 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Mar 7 01:15:03.508380 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Mar 7 01:15:03.508396 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Mar 7 01:15:03.508411 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Mar 7 01:15:03.508424 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Mar 7 01:15:03.508436 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Mar 7 01:15:03.508448 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Mar 7 01:15:03.508460 kernel: Freeing SMP alternatives memory: 32K
Mar 7 01:15:03.508473 kernel: pid_max: default: 32768 minimum: 301
Mar 7 01:15:03.508490 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 7 01:15:03.508502 kernel: landlock: Up and running.
Mar 7 01:15:03.508515 kernel: SELinux: Initializing.
Mar 7 01:15:03.508528 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 7 01:15:03.508540 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 7 01:15:03.508554 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8175M CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x4)
Mar 7 01:15:03.508567 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:15:03.508581 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:15:03.508594 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:15:03.508608 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Mar 7 01:15:03.508626 kernel: signal: max sigframe size: 3632
Mar 7 01:15:03.508642 kernel: rcu: Hierarchical SRCU implementation.
Mar 7 01:15:03.508656 kernel: rcu: Max phase no-delay instances is 400.
Mar 7 01:15:03.508669 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 7 01:15:03.508683 kernel: smp: Bringing up secondary CPUs ...
Mar 7 01:15:03.508697 kernel: smpboot: x86: Booting SMP configuration:
Mar 7 01:15:03.508712 kernel: .... node #0, CPUs: #1
Mar 7 01:15:03.508728 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Mar 7 01:15:03.510583 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Mar 7 01:15:03.510608 kernel: smp: Brought up 1 node, 2 CPUs
Mar 7 01:15:03.510625 kernel: smpboot: Max logical packages: 1
Mar 7 01:15:03.510641 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Mar 7 01:15:03.510657 kernel: devtmpfs: initialized
Mar 7 01:15:03.510673 kernel: x86/mm: Memory block size: 128MB
Mar 7 01:15:03.510689 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Mar 7 01:15:03.510705 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 7 01:15:03.510720 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 7 01:15:03.510765 kernel: pinctrl core: initialized pinctrl subsystem
Mar 7 01:15:03.510786 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 7 01:15:03.510802 kernel: audit: initializing netlink subsys (disabled)
Mar 7 01:15:03.510817 kernel: audit: type=2000 audit(1772846101.004:1): state=initialized audit_enabled=0 res=1
Mar 7 01:15:03.510834 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 7 01:15:03.510850 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 7 01:15:03.510866 kernel: cpuidle: using governor menu
Mar 7 01:15:03.510881 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 7 01:15:03.510896 kernel: dca service started, version 1.12.1
Mar 7 01:15:03.510911 kernel: PCI: Using configuration type 1 for base access
Mar 7 01:15:03.510930 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 7 01:15:03.510946 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 7 01:15:03.510962 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 7 01:15:03.510978 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 7 01:15:03.510994 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 7 01:15:03.511009 kernel: ACPI: Added _OSI(Module Device)
Mar 7 01:15:03.511026 kernel: ACPI: Added _OSI(Processor Device)
Mar 7 01:15:03.511041 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 7 01:15:03.511057 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Mar 7 01:15:03.511076 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 7 01:15:03.511092 kernel: ACPI: Interpreter enabled
Mar 7 01:15:03.511107 kernel: ACPI: PM: (supports S0 S5)
Mar 7 01:15:03.511123 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 7 01:15:03.511138 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 7 01:15:03.511152 kernel: PCI: Using E820 reservations for host bridge windows
Mar 7 01:15:03.511709 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Mar 7 01:15:03.514808 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 7 01:15:03.515077 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Mar 7 01:15:03.515242 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Mar 7 01:15:03.515384 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Mar 7 01:15:03.515403 kernel: acpiphp: Slot [3] registered
Mar 7 01:15:03.515419 kernel: acpiphp: Slot [4] registered
Mar 7 01:15:03.517762 kernel: acpiphp: Slot [5] registered
Mar 7 01:15:03.517790 kernel: acpiphp: Slot [6] registered
Mar 7 01:15:03.517808 kernel: acpiphp: Slot [7] registered
Mar 7 01:15:03.517834 kernel: acpiphp: Slot [8] registered
Mar 7 01:15:03.517850 kernel: acpiphp: Slot [9] registered
Mar 7 01:15:03.517866 kernel: acpiphp: Slot [10] registered
Mar 7 01:15:03.517884 kernel: acpiphp: Slot [11] registered
Mar 7 01:15:03.517900 kernel: acpiphp: Slot [12] registered
Mar 7 01:15:03.517917 kernel: acpiphp: Slot [13] registered
Mar 7 01:15:03.517934 kernel: acpiphp: Slot [14] registered
Mar 7 01:15:03.517951 kernel: acpiphp: Slot [15] registered
Mar 7 01:15:03.517967 kernel: acpiphp: Slot [16] registered
Mar 7 01:15:03.517984 kernel: acpiphp: Slot [17] registered
Mar 7 01:15:03.518005 kernel: acpiphp: Slot [18] registered
Mar 7 01:15:03.518022 kernel: acpiphp: Slot [19] registered
Mar 7 01:15:03.518039 kernel: acpiphp: Slot [20] registered
Mar 7 01:15:03.518056 kernel: acpiphp: Slot [21] registered
Mar 7 01:15:03.518073 kernel: acpiphp: Slot [22] registered
Mar 7 01:15:03.518090 kernel: acpiphp: Slot [23] registered
Mar 7 01:15:03.518107 kernel: acpiphp: Slot [24] registered
Mar 7 01:15:03.518124 kernel: acpiphp: Slot [25] registered
Mar 7 01:15:03.518142 kernel: acpiphp: Slot [26] registered
Mar 7 01:15:03.518163 kernel: acpiphp: Slot [27] registered
Mar 7 01:15:03.518180 kernel: acpiphp: Slot [28] registered
Mar 7 01:15:03.518197 kernel: acpiphp: Slot [29] registered
Mar 7 01:15:03.518214 kernel: acpiphp: Slot [30] registered
Mar 7 01:15:03.518231 kernel: acpiphp: Slot [31] registered
Mar 7 01:15:03.518248 kernel: PCI host bridge to bus 0000:00
Mar 7 01:15:03.518466 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 7 01:15:03.518602 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 7 01:15:03.518806 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 7 01:15:03.518943 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Mar 7 01:15:03.519067 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Mar 7 01:15:03.519197 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 7 01:15:03.519382 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Mar 7 01:15:03.520276 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Mar 7 01:15:03.521516 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Mar 7 01:15:03.521689 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Mar 7 01:15:03.521842 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Mar 7 01:15:03.522083 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Mar 7 01:15:03.522234 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Mar 7 01:15:03.522370 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Mar 7 01:15:03.522504 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Mar 7 01:15:03.522643 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Mar 7 01:15:03.526797 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Mar 7 01:15:03.526982 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Mar 7 01:15:03.527119 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Mar 7 01:15:03.527250 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Mar 7 01:15:03.527380 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 7 01:15:03.527520 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Mar 7 01:15:03.527664 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Mar 7 01:15:03.527896 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Mar 7 01:15:03.528040 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Mar 7 01:15:03.528062 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 7 01:15:03.528079 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 7 01:15:03.528095 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 7 01:15:03.528111 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 7 01:15:03.528127 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar 7 01:15:03.528153 kernel: iommu: Default domain type: Translated
Mar 7 01:15:03.528170 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 7 01:15:03.528186 kernel: efivars: Registered efivars operations
Mar 7 01:15:03.528203 kernel: PCI: Using ACPI for IRQ routing
Mar 7 01:15:03.528219 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 7 01:15:03.528236 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Mar 7 01:15:03.528253 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Mar 7 01:15:03.528390 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Mar 7 01:15:03.528528 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Mar 7 01:15:03.528662 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 7 01:15:03.528682 kernel: vgaarb: loaded
Mar 7 01:15:03.528698 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Mar 7 01:15:03.528714 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Mar 7 01:15:03.528784 kernel: clocksource: Switched to clocksource kvm-clock
Mar 7 01:15:03.528801 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 01:15:03.528817 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 01:15:03.528832 kernel: pnp: PnP ACPI init
Mar 7 01:15:03.528847 kernel: pnp: PnP ACPI: found 5 devices
Mar 7 01:15:03.528868 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 7 01:15:03.528884 kernel: NET: Registered PF_INET protocol family
Mar 7 01:15:03.528900 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 7 01:15:03.528916 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 7 01:15:03.528932 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 01:15:03.528947 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 7 01:15:03.528964 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Mar 7 01:15:03.528979 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 7 01:15:03.528997 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 7 01:15:03.529013 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 7 01:15:03.529028 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 01:15:03.529044 kernel: NET: Registered PF_XDP protocol family
Mar 7 01:15:03.529170 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 7 01:15:03.529284 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 7 01:15:03.529396 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 7 01:15:03.529509 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Mar 7 01:15:03.529622 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Mar 7 01:15:03.529771 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 7 01:15:03.529792 kernel: PCI: CLS 0 bytes, default 64
Mar 7 01:15:03.529808 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 7 01:15:03.529823 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Mar 7 01:15:03.529839 kernel: clocksource: Switched to clocksource tsc
Mar 7 01:15:03.529854 kernel: Initialise system trusted keyrings
Mar 7 01:15:03.529869 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Mar 7 01:15:03.529884 kernel: Key type asymmetric registered
Mar 7 01:15:03.529903 kernel: Asymmetric key parser 'x509' registered
Mar 7 01:15:03.529919 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 7 01:15:03.529935 kernel: io scheduler mq-deadline registered
Mar 7 01:15:03.529949 kernel: io scheduler kyber registered
Mar 7 01:15:03.529965 kernel: io scheduler bfq registered
Mar 7 01:15:03.529981 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 7 01:15:03.529997 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 7 01:15:03.530020 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 7 01:15:03.530036 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 7 01:15:03.530067 kernel: i8042: Warning: Keylock active
Mar 7 01:15:03.530089 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 7 01:15:03.530105 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 7 01:15:03.530266 kernel: rtc_cmos 00:00: RTC can wake from S4
Mar 7 01:15:03.530394 kernel: rtc_cmos 00:00: registered as rtc0
Mar 7 01:15:03.530517 kernel: rtc_cmos 00:00: setting system clock to 2026-03-07T01:15:02 UTC (1772846102)
Mar 7 01:15:03.530638 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Mar 7 01:15:03.530658 kernel: intel_pstate: CPU model not supported
Mar 7 01:15:03.530679 kernel: efifb: probing for efifb
Mar 7 01:15:03.530695 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Mar 7 01:15:03.530712 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Mar 7 01:15:03.530728 kernel: efifb: scrolling: redraw
Mar 7 01:15:03.532798 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 7 01:15:03.532818 kernel: Console: switching to colour frame buffer device 100x37
Mar 7 01:15:03.532835 kernel: fb0: EFI VGA frame buffer device
Mar 7 01:15:03.532852 kernel: pstore: Using crash dump compression: deflate
Mar 7 01:15:03.532869 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 7 01:15:03.532891 kernel: NET: Registered PF_INET6 protocol family
Mar 7 01:15:03.532907 kernel: Segment Routing with IPv6
Mar 7 01:15:03.532921 kernel: In-situ OAM (IOAM) with IPv6
Mar 7 01:15:03.532935 kernel: NET: Registered PF_PACKET protocol family
Mar 7 01:15:03.532952 kernel: Key type dns_resolver registered
Mar 7 01:15:03.532968 kernel: IPI shorthand broadcast: enabled
Mar 7 01:15:03.533010 kernel: sched_clock: Marking stable (1387037705, 201126396)->(1957110770, -368946669)
Mar 7 01:15:03.533031 kernel: registered taskstats version 1
Mar 7 01:15:03.533047 kernel: Loading compiled-in X.509 certificates
Mar 7 01:15:03.533067 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: da286e6f6c247ee6f65a875c513de7da57782e90'
Mar 7 01:15:03.533084 kernel: Key type .fscrypt registered
Mar 7 01:15:03.533100 kernel: Key type fscrypt-provisioning registered
Mar 7 01:15:03.533114 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 7 01:15:03.533129 kernel: ima: Allocated hash algorithm: sha1
Mar 7 01:15:03.533146 kernel: ima: No architecture policies found
Mar 7 01:15:03.533162 kernel: clk: Disabling unused clocks
Mar 7 01:15:03.533179 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 7 01:15:03.533195 kernel: Write protecting the kernel read-only data: 36864k
Mar 7 01:15:03.533215 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 7 01:15:03.533232 kernel: Run /init as init process
Mar 7 01:15:03.533248 kernel: with arguments:
Mar 7 01:15:03.533266 kernel: /init
Mar 7 01:15:03.533282 kernel: with environment:
Mar 7 01:15:03.533298 kernel: HOME=/
Mar 7 01:15:03.533315 kernel: TERM=linux
Mar 7 01:15:03.533331 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 7 01:15:03.533352 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:15:03.533376 systemd[1]: Detected virtualization amazon.
Mar 7 01:15:03.533394 systemd[1]: Detected architecture x86-64.
Mar 7 01:15:03.533410 systemd[1]: Running in initrd.
Mar 7 01:15:03.533427 systemd[1]: No hostname configured, using default hostname.
Mar 7 01:15:03.533444 systemd[1]: Hostname set to .
Mar 7 01:15:03.533461 systemd[1]: Initializing machine ID from VM UUID.
Mar 7 01:15:03.533479 systemd[1]: Queued start job for default target initrd.target.
Mar 7 01:15:03.533497 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:15:03.533518 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:15:03.533536 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 7 01:15:03.533554 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:15:03.533574 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 7 01:15:03.533591 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 7 01:15:03.533617 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 7 01:15:03.533635 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 7 01:15:03.533652 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:15:03.533670 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:15:03.533688 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:15:03.533704 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:15:03.533722 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:15:03.533782 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:15:03.533799 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:15:03.533817 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:15:03.533835 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 01:15:03.533851 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 01:15:03.533868 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:15:03.533885 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:15:03.533901 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:15:03.533921 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:15:03.533938 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 7 01:15:03.533955 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:15:03.533972 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 7 01:15:03.533988 systemd[1]: Starting systemd-fsck-usr.service...
Mar 7 01:15:03.534035 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:15:03.534051 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:15:03.534065 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:15:03.534128 systemd-journald[180]: Collecting audit messages is disabled.
Mar 7 01:15:03.534169 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 7 01:15:03.534186 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:15:03.534205 systemd[1]: Finished systemd-fsck-usr.service.
Mar 7 01:15:03.534229 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:15:03.534250 systemd-journald[180]: Journal started
Mar 7 01:15:03.534288 systemd-journald[180]: Runtime Journal (/run/log/journal/ec28b34d7522913c3fe82a29bbb73ad6) is 4.7M, max 38.2M, 33.4M free.
Mar 7 01:15:03.532207 systemd-modules-load[181]: Inserted module 'overlay'
Mar 7 01:15:03.556757 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:15:03.577011 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:15:03.580768 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:15:03.596927 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:15:03.601189 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:15:03.613704 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 7 01:15:03.613764 kernel: Bridge firewalling registered
Mar 7 01:15:03.613859 systemd-modules-load[181]: Inserted module 'br_netfilter'
Mar 7 01:15:03.617163 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:15:03.619854 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:15:03.627081 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 7 01:15:03.635101 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:15:03.639077 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:15:03.644865 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:15:03.656004 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:15:03.659647 dracut-cmdline[204]: dracut-dracut-053
Mar 7 01:15:03.664615 dracut-cmdline[204]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:15:03.665592 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:15:03.686652 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:15:03.731331 systemd-resolved[219]: Positive Trust Anchors:
Mar 7 01:15:03.731349 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:15:03.731411 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:15:03.739805 systemd-resolved[219]: Defaulting to hostname 'linux'.
Mar 7 01:15:03.743264 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:15:03.743985 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:15:03.779787 kernel: SCSI subsystem initialized
Mar 7 01:15:03.791775 kernel: Loading iSCSI transport class v2.0-870.
Mar 7 01:15:03.804762 kernel: iscsi: registered transport (tcp)
Mar 7 01:15:03.829493 kernel: iscsi: registered transport (qla4xxx)
Mar 7 01:15:03.829961 kernel: QLogic iSCSI HBA Driver
Mar 7 01:15:03.879105 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:15:03.884006 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 7 01:15:03.920644 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 7 01:15:03.920726 kernel: device-mapper: uevent: version 1.0.3
Mar 7 01:15:03.920777 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 7 01:15:03.991780 kernel: raid6: avx512x4 gen() 12566 MB/s
Mar 7 01:15:04.010800 kernel: raid6: avx512x2 gen() 14068 MB/s
Mar 7 01:15:04.030093 kernel: raid6: avx512x1 gen() 14853 MB/s
Mar 7 01:15:04.048781 kernel: raid6: avx2x4 gen() 9443 MB/s
Mar 7 01:15:04.071787 kernel: raid6: avx2x2 gen() 12883 MB/s
Mar 7 01:15:04.091022 kernel: raid6: avx2x1 gen() 10649 MB/s
Mar 7 01:15:04.091113 kernel: raid6: using algorithm avx512x1 gen() 14853 MB/s
Mar 7 01:15:04.111014 kernel: raid6: .... xor() 19968 MB/s, rmw enabled
Mar 7 01:15:04.111124 kernel: raid6: using avx512x2 recovery algorithm
Mar 7 01:15:04.136772 kernel: xor: automatically using best checksumming function avx
Mar 7 01:15:04.301771 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 7 01:15:04.314083 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:15:04.320023 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:15:04.337017 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Mar 7 01:15:04.342407 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:15:04.351047 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 7 01:15:04.369484 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation
Mar 7 01:15:04.403119 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:15:04.408975 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:15:04.482903 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:15:04.493971 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 7 01:15:04.522025 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:15:04.526599 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:15:04.528324 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:15:04.529237 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:15:04.539148 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 7 01:15:04.569222 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:15:04.591757 kernel: cryptd: max_cpu_qlen set to 1000
Mar 7 01:15:04.611830 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 7 01:15:04.613393 kernel: ena 0000:00:05.0: ENA device version: 0.10
Mar 7 01:15:04.613655 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Mar 7 01:15:04.620851 kernel: AES CTR mode by8 optimization enabled
Mar 7 01:15:04.640802 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Mar 7 01:15:04.641080 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:e1:e1:27:51:81
Mar 7 01:15:04.640053 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:15:04.640537 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:15:04.643160 (udev-worker)[443]: Network interface NamePolicy= disabled on kernel command line.
Mar 7 01:15:04.651413 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:15:04.652182 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:15:04.652430 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:15:04.652993 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:15:04.665094 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:15:04.671956 kernel: nvme nvme0: pci function 0000:00:04.0
Mar 7 01:15:04.675771 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Mar 7 01:15:04.694021 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Mar 7 01:15:04.694396 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:15:04.694711 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:15:04.708454 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 7 01:15:04.708547 kernel: GPT:9289727 != 33554431
Mar 7 01:15:04.708569 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 7 01:15:04.708589 kernel: GPT:9289727 != 33554431
Mar 7 01:15:04.712019 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 7 01:15:04.712093 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 7 01:15:04.713975 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:15:04.735527 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:15:04.741977 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:15:04.774379 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:15:04.786756 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (445)
Mar 7 01:15:04.804784 kernel: BTRFS: device fsid 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 devid 1 transid 34 /dev/nvme0n1p3 scanned by (udev-worker) (444)
Mar 7 01:15:04.855956 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Mar 7 01:15:04.873538 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Mar 7 01:15:04.912777 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 7 01:15:04.921620 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Mar 7 01:15:04.922328 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Mar 7 01:15:04.934040 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 7 01:15:04.943523 disk-uuid[629]: Primary Header is updated.
Mar 7 01:15:04.943523 disk-uuid[629]: Secondary Entries is updated.
Mar 7 01:15:04.943523 disk-uuid[629]: Secondary Header is updated.
Mar 7 01:15:04.948826 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 7 01:15:04.956769 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 7 01:15:04.966940 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 7 01:15:05.969937 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 7 01:15:05.970014 disk-uuid[630]: The operation has completed successfully.
Mar 7 01:15:06.147587 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 7 01:15:06.147724 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 7 01:15:06.179160 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 7 01:15:06.185004 sh[971]: Success
Mar 7 01:15:06.208775 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Mar 7 01:15:06.351860 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 7 01:15:06.363907 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 7 01:15:06.367790 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 7 01:15:06.427017 kernel: BTRFS info (device dm-0): first mount of filesystem 3bed8db9-42ad-4483-9cc8-1ad17a6cd948
Mar 7 01:15:06.427112 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:15:06.427149 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 7 01:15:06.431192 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 7 01:15:06.434587 kernel: BTRFS info (device dm-0): using free space tree
Mar 7 01:15:06.509781 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 7 01:15:06.532995 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 7 01:15:06.534375 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 7 01:15:06.541008 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 7 01:15:06.544948 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 7 01:15:06.576210 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:15:06.576478 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:15:06.576501 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 7 01:15:06.584783 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 7 01:15:06.603772 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:15:06.603593 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 7 01:15:06.612777 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 7 01:15:06.618027 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 7 01:15:06.679655 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:15:06.689968 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 01:15:06.718966 systemd-networkd[1163]: lo: Link UP
Mar 7 01:15:06.718984 systemd-networkd[1163]: lo: Gained carrier
Mar 7 01:15:06.720651 systemd-networkd[1163]: Enumeration completed
Mar 7 01:15:06.721100 systemd-networkd[1163]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:15:06.721105 systemd-networkd[1163]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 01:15:06.722313 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 01:15:06.723957 systemd[1]: Reached target network.target - Network.
Mar 7 01:15:06.725414 systemd-networkd[1163]: eth0: Link UP
Mar 7 01:15:06.725420 systemd-networkd[1163]: eth0: Gained carrier
Mar 7 01:15:06.725436 systemd-networkd[1163]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:15:06.741885 systemd-networkd[1163]: eth0: DHCPv4 address 172.31.20.242/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 7 01:15:06.879137 ignition[1084]: Ignition 2.19.0
Mar 7 01:15:06.881020 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:15:06.879149 ignition[1084]: Stage: fetch-offline
Mar 7 01:15:06.879368 ignition[1084]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:15:06.879377 ignition[1084]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 01:15:06.879835 ignition[1084]: Ignition finished successfully
Mar 7 01:15:06.888954 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 7 01:15:06.905504 ignition[1172]: Ignition 2.19.0
Mar 7 01:15:06.905522 ignition[1172]: Stage: fetch
Mar 7 01:15:06.906051 ignition[1172]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:15:06.906066 ignition[1172]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 01:15:06.906188 ignition[1172]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 01:15:06.915542 ignition[1172]: PUT result: OK
Mar 7 01:15:06.917317 ignition[1172]: parsed url from cmdline: ""
Mar 7 01:15:06.917324 ignition[1172]: no config URL provided
Mar 7 01:15:06.917332 ignition[1172]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 01:15:06.917344 ignition[1172]: no config at "/usr/lib/ignition/user.ign"
Mar 7 01:15:06.917373 ignition[1172]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 01:15:06.917933 ignition[1172]: PUT result: OK
Mar 7 01:15:06.917985 ignition[1172]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Mar 7 01:15:06.918589 ignition[1172]: GET result: OK
Mar 7 01:15:06.918822 ignition[1172]: parsing config with SHA512: c72a66f28203e793e1d1d1f187e892bf08a10860f25a0f451d8dc253e8c5dd6f0b550b307006a6ba4a2a5590a73be1a733610998795043cd841384710f4842f5
Mar 7 01:15:06.925270 unknown[1172]: fetched base config from "system"
Mar 7 01:15:06.926393 ignition[1172]: fetch: fetch complete
Mar 7 01:15:06.925295 unknown[1172]: fetched base config from "system"
Mar 7 01:15:06.926407 ignition[1172]: fetch: fetch passed
Mar 7 01:15:06.925347 unknown[1172]: fetched user config from "aws"
Mar 7 01:15:06.926472 ignition[1172]: Ignition finished successfully
Mar 7 01:15:06.928822 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 7 01:15:06.936968 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 7 01:15:06.954984 ignition[1178]: Ignition 2.19.0
Mar 7 01:15:06.955002 ignition[1178]: Stage: kargs
Mar 7 01:15:06.955508 ignition[1178]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:15:06.955522 ignition[1178]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 01:15:06.955648 ignition[1178]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 01:15:06.956605 ignition[1178]: PUT result: OK
Mar 7 01:15:06.959696 ignition[1178]: kargs: kargs passed
Mar 7 01:15:06.959810 ignition[1178]: Ignition finished successfully
Mar 7 01:15:06.961767 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 7 01:15:06.965962 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 7 01:15:06.995560 ignition[1184]: Ignition 2.19.0
Mar 7 01:15:06.995579 ignition[1184]: Stage: disks
Mar 7 01:15:06.996086 ignition[1184]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:15:06.996100 ignition[1184]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 01:15:06.996217 ignition[1184]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 01:15:06.997120 ignition[1184]: PUT result: OK
Mar 7 01:15:06.999887 ignition[1184]: disks: disks passed
Mar 7 01:15:06.999964 ignition[1184]: Ignition finished successfully
Mar 7 01:15:07.001599 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 7 01:15:07.002641 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 7 01:15:07.003225 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 7 01:15:07.003806 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:15:07.004389 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 01:15:07.005023 systemd[1]: Reached target basic.target - Basic System.
Mar 7 01:15:07.012984 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 7 01:15:07.043566 systemd-fsck[1192]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 7 01:15:07.047180 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 7 01:15:07.052875 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 7 01:15:07.172757 kernel: EXT4-fs (nvme0n1p9): mounted filesystem aab0506b-de72-4dd2-9393-24d7958f49a5 r/w with ordered data mode. Quota mode: none.
Mar 7 01:15:07.173390 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 7 01:15:07.174649 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 7 01:15:07.181912 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:15:07.184892 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 7 01:15:07.186654 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 7 01:15:07.188865 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 7 01:15:07.188915 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:15:07.206025 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1211)
Mar 7 01:15:07.213094 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 7 01:15:07.218240 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:15:07.218278 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:15:07.218298 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 7 01:15:07.227751 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 7 01:15:07.227113 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 7 01:15:07.231298 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:15:07.469965 initrd-setup-root[1235]: cut: /sysroot/etc/passwd: No such file or directory
Mar 7 01:15:07.486558 initrd-setup-root[1242]: cut: /sysroot/etc/group: No such file or directory
Mar 7 01:15:07.493278 initrd-setup-root[1249]: cut: /sysroot/etc/shadow: No such file or directory
Mar 7 01:15:07.500813 initrd-setup-root[1256]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 7 01:15:07.704200 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 7 01:15:07.710568 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 7 01:15:07.721023 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 7 01:15:07.735049 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 7 01:15:07.735843 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:15:07.768240 ignition[1324]: INFO : Ignition 2.19.0
Mar 7 01:15:07.769228 ignition[1324]: INFO : Stage: mount
Mar 7 01:15:07.769550 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 7 01:15:07.771032 ignition[1324]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:15:07.771032 ignition[1324]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 01:15:07.772802 ignition[1324]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 01:15:07.772802 ignition[1324]: INFO : PUT result: OK
Mar 7 01:15:07.775515 ignition[1324]: INFO : mount: mount passed
Mar 7 01:15:07.775515 ignition[1324]: INFO : Ignition finished successfully
Mar 7 01:15:07.777589 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 7 01:15:07.782987 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 7 01:15:07.801094 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:15:07.825003 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1336)
Mar 7 01:15:07.825071 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:15:07.829110 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:15:07.831223 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 7 01:15:07.839784 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 7 01:15:07.842114 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:15:07.870624 ignition[1353]: INFO : Ignition 2.19.0
Mar 7 01:15:07.870624 ignition[1353]: INFO : Stage: files
Mar 7 01:15:07.872266 ignition[1353]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:15:07.872266 ignition[1353]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 01:15:07.872266 ignition[1353]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 01:15:07.873504 ignition[1353]: INFO : PUT result: OK
Mar 7 01:15:07.875833 ignition[1353]: DEBUG : files: compiled without relabeling support, skipping
Mar 7 01:15:07.877028 ignition[1353]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 7 01:15:07.877028 ignition[1353]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 7 01:15:07.896240 ignition[1353]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 7 01:15:07.897342 ignition[1353]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 7 01:15:07.898292 ignition[1353]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 7 01:15:07.897459 unknown[1353]: wrote ssh authorized keys file for user: core
Mar 7 01:15:07.907720 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:15:07.908950 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 7 01:15:07.972106 systemd-networkd[1163]: eth0: Gained IPv6LL
Mar 7 01:15:08.017293 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 7 01:15:08.240923 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:15:08.240923 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 7 01:15:08.243857 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 7 01:15:08.243857 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:15:08.243857 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:15:08.243857 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:15:08.243857 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:15:08.243857 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:15:08.243857 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:15:08.243857 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:15:08.243857 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:15:08.243857 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 7 01:15:08.243857 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 7 01:15:08.243857 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 7 01:15:08.243857 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Mar 7 01:15:08.693059 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 7 01:15:09.145834 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 7 01:15:09.145834 ignition[1353]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 7 01:15:09.150185 ignition[1353]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:15:09.150185 ignition[1353]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:15:09.150185 ignition[1353]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 7 01:15:09.150185 ignition[1353]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 7 01:15:09.150185 ignition[1353]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 7 01:15:09.150185 ignition[1353]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:15:09.150185 ignition[1353]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:15:09.150185 ignition[1353]: INFO : files: files passed
Mar 7 01:15:09.150185 ignition[1353]: INFO : Ignition finished successfully
Mar 7 01:15:09.150190 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 7 01:15:09.159078 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 7 01:15:09.163159 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 7 01:15:09.179198 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 7 01:15:09.179360 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 7 01:15:09.192231 initrd-setup-root-after-ignition[1381]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:15:09.192231 initrd-setup-root-after-ignition[1381]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:15:09.196136 initrd-setup-root-after-ignition[1385]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:15:09.196638 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:15:09.198311 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 7 01:15:09.208047 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 7 01:15:09.238096 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 7 01:15:09.238273 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 7 01:15:09.239809 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 7 01:15:09.241139 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 7 01:15:09.242189 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 7 01:15:09.247030 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 7 01:15:09.265442 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:15:09.271026 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 7 01:15:09.295483 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:15:09.296224 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:15:09.297263 systemd[1]: Stopped target timers.target - Timer Units.
Mar 7 01:15:09.298180 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 7 01:15:09.298362 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:15:09.299695 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 7 01:15:09.300590 systemd[1]: Stopped target basic.target - Basic System.
Mar 7 01:15:09.301423 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 7 01:15:09.302223 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:15:09.303152 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 7 01:15:09.303975 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 7 01:15:09.304780 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:15:09.305584 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 7 01:15:09.306900 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 7 01:15:09.307708 systemd[1]: Stopped target swap.target - Swaps.
Mar 7 01:15:09.308448 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 7 01:15:09.308630 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:15:09.309788 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:15:09.310598 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:15:09.311461 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 7 01:15:09.312245 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:15:09.312854 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 7 01:15:09.313031 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:15:09.314576 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 7 01:15:09.314946 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:15:09.315622 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 7 01:15:09.315804 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 7 01:15:09.323014 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 7 01:15:09.323668 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 7 01:15:09.324049 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:15:09.330142 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 7 01:15:09.332645 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 7 01:15:09.333568 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:15:09.336905 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 7 01:15:09.339026 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:15:09.343944 ignition[1405]: INFO : Ignition 2.19.0
Mar 7 01:15:09.343944 ignition[1405]: INFO : Stage: umount
Mar 7 01:15:09.343944 ignition[1405]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:15:09.343944 ignition[1405]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 01:15:09.343944 ignition[1405]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 01:15:09.348712 ignition[1405]: INFO : PUT result: OK
Mar 7 01:15:09.350066 ignition[1405]: INFO : umount: umount passed
Mar 7 01:15:09.350862 ignition[1405]: INFO : Ignition finished successfully
Mar 7 01:15:09.352446 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 7 01:15:09.352599 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 7 01:15:09.353832 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 7 01:15:09.353956 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 7 01:15:09.357315 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 7 01:15:09.357412 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 7 01:15:09.358107 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 7 01:15:09.358173 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 7 01:15:09.359169 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 7 01:15:09.359231 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 7 01:15:09.360907 systemd[1]: Stopped target network.target - Network.
Mar 7 01:15:09.361701 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 7 01:15:09.361802 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:15:09.363140 systemd[1]: Stopped target paths.target - Path Units.
Mar 7 01:15:09.364166 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 7 01:15:09.365800 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:15:09.368364 systemd[1]: Stopped target slices.target - Slice Units.
Mar 7 01:15:09.368838 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 7 01:15:09.369316 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 7 01:15:09.369384 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:15:09.370974 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 7 01:15:09.371034 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:15:09.372086 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 7 01:15:09.372158 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 7 01:15:09.373856 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 7 01:15:09.373923 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 7 01:15:09.374908 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 7 01:15:09.375476 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 7 01:15:09.378506 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 7 01:15:09.379852 systemd-networkd[1163]: eth0: DHCPv6 lease lost
Mar 7 01:15:09.382141 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 7 01:15:09.382286 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 7 01:15:09.385601 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 7 01:15:09.385678 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:15:09.391947 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 7 01:15:09.392504 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 7 01:15:09.392591 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:15:09.393328 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:15:09.394248 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 7 01:15:09.394388 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 7 01:15:09.402271 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 7 01:15:09.402420 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 7 01:15:09.410692 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 7 01:15:09.411217 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:15:09.414073 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 7 01:15:09.414150 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:15:09.415941 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 7 01:15:09.415981 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:15:09.416337 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 7 01:15:09.416387 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:15:09.416879 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 7 01:15:09.416919 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:15:09.417290 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:15:09.417326 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:15:09.417699 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 7 01:15:09.417880 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 7 01:15:09.425048 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 7 01:15:09.425562 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 7 01:15:09.425656 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:15:09.426458 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 7 01:15:09.426530 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:15:09.427313 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 7 01:15:09.427380 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:15:09.428073 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 7 01:15:09.428132 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:15:09.429123 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:15:09.429181 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:15:09.431455 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 7 01:15:09.431591 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 7 01:15:09.439787 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 7 01:15:09.440914 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 7 01:15:09.441714 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 7 01:15:09.448945 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 7 01:15:09.458238 systemd[1]: Switching root.
Mar 7 01:15:09.488099 systemd-journald[180]: Journal stopped
Mar 7 01:15:11.264562 systemd-journald[180]: Received SIGTERM from PID 1 (systemd).
Mar 7 01:15:11.264654 kernel: SELinux: policy capability network_peer_controls=1
Mar 7 01:15:11.264681 kernel: SELinux: policy capability open_perms=1
Mar 7 01:15:11.264699 kernel: SELinux: policy capability extended_socket_class=1
Mar 7 01:15:11.264718 kernel: SELinux: policy capability always_check_network=0
Mar 7 01:15:11.270789 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 7 01:15:11.270834 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 7 01:15:11.270853 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 7 01:15:11.270872 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 7 01:15:11.270890 kernel: audit: type=1403 audit(1772846109.889:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 7 01:15:11.270926 systemd[1]: Successfully loaded SELinux policy in 65.965ms.
Mar 7 01:15:11.270959 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.022ms.
Mar 7 01:15:11.270983 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:15:11.271003 systemd[1]: Detected virtualization amazon.
Mar 7 01:15:11.271023 systemd[1]: Detected architecture x86-64.
Mar 7 01:15:11.271043 systemd[1]: Detected first boot.
Mar 7 01:15:11.271064 systemd[1]: Initializing machine ID from VM UUID.
Mar 7 01:15:11.271084 zram_generator::config[1448]: No configuration found.
Mar 7 01:15:11.271111 systemd[1]: Populated /etc with preset unit settings.
Mar 7 01:15:11.271131 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 7 01:15:11.271152 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 7 01:15:11.271175 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 7 01:15:11.271199 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 7 01:15:11.271225 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 7 01:15:11.271249 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 7 01:15:11.271272 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 7 01:15:11.271295 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 7 01:15:11.271322 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 7 01:15:11.271343 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 7 01:15:11.271365 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 7 01:15:11.271387 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:15:11.271410 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:15:11.271432 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 7 01:15:11.271454 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 7 01:15:11.271476 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 7 01:15:11.271498 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:15:11.271523 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 7 01:15:11.271545 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:15:11.271568 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 7 01:15:11.271590 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 7 01:15:11.271613 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 7 01:15:11.271635 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 7 01:15:11.271657 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:15:11.271682 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:15:11.271704 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:15:11.271726 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:15:11.271769 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 7 01:15:11.271789 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 7 01:15:11.271807 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:15:11.271824 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:15:11.271843 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:15:11.271869 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 7 01:15:11.271889 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 7 01:15:11.271920 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 7 01:15:11.271939 systemd[1]: Mounting media.mount - External Media Directory...
Mar 7 01:15:11.271959 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:15:11.271984 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 7 01:15:11.272004 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 7 01:15:11.272022 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 7 01:15:11.272042 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 7 01:15:11.272062 systemd[1]: Reached target machines.target - Containers.
Mar 7 01:15:11.272085 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 7 01:15:11.272104 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:15:11.272123 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:15:11.272141 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 7 01:15:11.272160 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:15:11.272179 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 01:15:11.272200 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:15:11.272222 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 7 01:15:11.272244 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:15:11.272271 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 7 01:15:11.272293 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 7 01:15:11.272315 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 7 01:15:11.272338 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 7 01:15:11.272359 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 7 01:15:11.272382 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:15:11.272403 kernel: loop: module loaded
Mar 7 01:15:11.272425 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:15:11.272447 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 7 01:15:11.272469 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 7 01:15:11.272490 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:15:11.272513 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 7 01:15:11.272532 systemd[1]: Stopped verity-setup.service.
Mar 7 01:15:11.272552 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:15:11.272573 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 7 01:15:11.272594 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 7 01:15:11.272615 systemd[1]: Mounted media.mount - External Media Directory.
Mar 7 01:15:11.272640 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 7 01:15:11.272669 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 7 01:15:11.272688 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 7 01:15:11.272708 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:15:11.273001 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 7 01:15:11.273057 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 7 01:15:11.273083 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:15:11.273107 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:15:11.273133 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:15:11.273155 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:15:11.273180 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:15:11.273205 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:15:11.273224 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 7 01:15:11.273245 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:15:11.273311 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 7 01:15:11.274591 kernel: ACPI: bus type drm_connector registered
Mar 7 01:15:11.274628 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 7 01:15:11.274649 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 7 01:15:11.274669 kernel: fuse: init (API version 7.39)
Mar 7 01:15:11.274695 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 7 01:15:11.274714 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:15:11.274753 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 7 01:15:11.274811 systemd-journald[1533]: Collecting audit messages is disabled.
Mar 7 01:15:11.274849 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 7 01:15:11.274869 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 7 01:15:11.274889 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:15:11.274913 systemd-journald[1533]: Journal started
Mar 7 01:15:11.274952 systemd-journald[1533]: Runtime Journal (/run/log/journal/ec28b34d7522913c3fe82a29bbb73ad6) is 4.7M, max 38.2M, 33.4M free.
Mar 7 01:15:10.724576 systemd[1]: Queued start job for default target multi-user.target.
Mar 7 01:15:10.760804 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Mar 7 01:15:11.289644 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 7 01:15:11.289694 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 01:15:10.761248 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 7 01:15:11.299775 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 7 01:15:11.304779 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 01:15:11.321961 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:15:11.332438 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 7 01:15:11.341764 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:15:11.349681 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 7 01:15:11.351029 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 01:15:11.351621 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 01:15:11.371174 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 7 01:15:11.371389 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 7 01:15:11.372540 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 7 01:15:11.373652 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 7 01:15:11.375319 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 7 01:15:11.398421 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:15:11.414553 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 7 01:15:11.424950 kernel: loop0: detected capacity change from 0 to 61336
Mar 7 01:15:11.439115 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 7 01:15:11.450933 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 7 01:15:11.462655 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 7 01:15:11.469083 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 7 01:15:11.480514 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 7 01:15:11.486237 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:15:11.487229 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 7 01:15:11.496216 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 7 01:15:11.516268 systemd-journald[1533]: Time spent on flushing to /var/log/journal/ec28b34d7522913c3fe82a29bbb73ad6 is 104.191ms for 990 entries.
Mar 7 01:15:11.516268 systemd-journald[1533]: System Journal (/var/log/journal/ec28b34d7522913c3fe82a29bbb73ad6) is 8.0M, max 195.6M, 187.6M free.
Mar 7 01:15:11.635169 systemd-journald[1533]: Received client request to flush runtime journal.
Mar 7 01:15:11.636347 kernel: loop1: detected capacity change from 0 to 140768
Mar 7 01:15:11.520293 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 7 01:15:11.523342 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 7 01:15:11.542060 udevadm[1587]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 7 01:15:11.606183 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 7 01:15:11.623940 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:15:11.645867 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 7 01:15:11.667767 kernel: loop2: detected capacity change from 0 to 228704
Mar 7 01:15:11.708070 systemd-tmpfiles[1595]: ACLs are not supported, ignoring.
Mar 7 01:15:11.709355 systemd-tmpfiles[1595]: ACLs are not supported, ignoring.
Mar 7 01:15:11.726278 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:15:11.773080 kernel: loop3: detected capacity change from 0 to 142488
Mar 7 01:15:11.894755 kernel: loop4: detected capacity change from 0 to 61336
Mar 7 01:15:11.930481 kernel: loop5: detected capacity change from 0 to 140768
Mar 7 01:15:11.961793 kernel: loop6: detected capacity change from 0 to 228704
Mar 7 01:15:12.003768 kernel: loop7: detected capacity change from 0 to 142488
Mar 7 01:15:12.029962 (sd-merge)[1605]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Mar 7 01:15:12.032312 (sd-merge)[1605]: Merged extensions into '/usr'.
Mar 7 01:15:12.046067 systemd[1]: Reloading requested from client PID 1558 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 7 01:15:12.046085 systemd[1]: Reloading...
Mar 7 01:15:12.192922 zram_generator::config[1629]: No configuration found.
Mar 7 01:15:12.417763 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:15:12.508946 systemd[1]: Reloading finished in 461 ms.
Mar 7 01:15:12.540722 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 7 01:15:12.541588 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 7 01:15:12.554027 systemd[1]: Starting ensure-sysext.service...
Mar 7 01:15:12.556417 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:15:12.561010 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:15:12.585417 systemd[1]: Reloading requested from client PID 1683 ('systemctl') (unit ensure-sysext.service)...
Mar 7 01:15:12.585432 systemd[1]: Reloading...
Mar 7 01:15:12.620460 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 7 01:15:12.621114 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 7 01:15:12.622553 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 7 01:15:12.623074 systemd-tmpfiles[1684]: ACLs are not supported, ignoring.
Mar 7 01:15:12.623177 systemd-tmpfiles[1684]: ACLs are not supported, ignoring.
Mar 7 01:15:12.628545 systemd-tmpfiles[1684]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:15:12.628759 systemd-tmpfiles[1684]: Skipping /boot
Mar 7 01:15:12.632850 systemd-udevd[1685]: Using default interface naming scheme 'v255'.
Mar 7 01:15:12.671288 systemd-tmpfiles[1684]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:15:12.671304 systemd-tmpfiles[1684]: Skipping /boot
Mar 7 01:15:12.683562 ldconfig[1551]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 7 01:15:12.703767 zram_generator::config[1712]: No configuration found.
Mar 7 01:15:12.889896 (udev-worker)[1743]: Network interface NamePolicy= disabled on kernel command line.
Mar 7 01:15:12.966780 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 7 01:15:12.973258 kernel: ACPI: button: Power Button [PWRF]
Mar 7 01:15:12.973352 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Mar 7 01:15:12.974915 kernel: ACPI: button: Sleep Button [SLPF]
Mar 7 01:15:12.998319 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:15:13.017797 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Mar 7 01:15:13.044801 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5
Mar 7 01:15:13.064806 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1734)
Mar 7 01:15:13.143118 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 7 01:15:13.143356 systemd[1]: Reloading finished in 557 ms.
Mar 7 01:15:13.169021 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:15:13.171436 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 7 01:15:13.178409 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:15:13.221240 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 7 01:15:13.225935 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 7 01:15:13.235725 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 7 01:15:13.247191 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 01:15:13.259081 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:15:13.271620 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 7 01:15:13.302316 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 7 01:15:13.310277 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:15:13.332421 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:15:13.332918 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:15:13.346054 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:15:13.357194 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:15:13.363302 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:15:13.364115 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:15:13.364530 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:15:13.384017 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 7 01:15:13.390912 kernel: mousedev: PS/2 mouse device common for all mice
Mar 7 01:15:13.387512 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:15:13.387865 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:15:13.410627 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:15:13.416664 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:15:13.420497 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:15:13.423727 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:15:13.435029 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 01:15:13.437006 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:15:13.437472 systemd[1]: Reached target time-set.target - System Time Set.
Mar 7 01:15:13.448699 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:15:13.449376 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:15:13.455707 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:15:13.456708 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:15:13.465811 systemd[1]: Finished ensure-sysext.service.
Mar 7 01:15:13.468562 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 7 01:15:13.481858 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 7 01:15:13.490810 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:15:13.491092 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:15:13.493589 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 01:15:13.495224 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 01:15:13.504308 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 01:15:13.504455 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 01:15:13.510170 augenrules[1904]: No rules
Mar 7 01:15:13.517030 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 7 01:15:13.520157 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 7 01:15:13.560834 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 7 01:15:13.564056 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 7 01:15:13.569047 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 7 01:15:13.611375 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 7 01:15:13.631976 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 7 01:15:13.633102 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 7 01:15:13.645975 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 7 01:15:13.680650 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 7 01:15:13.692876 lvm[1932]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 01:15:13.713395 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:15:13.726667 systemd-resolved[1864]: Positive Trust Anchors:
Mar 7 01:15:13.726692 systemd-resolved[1864]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:15:13.726776 systemd-resolved[1864]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:15:13.738120 systemd-networkd[1862]: lo: Link UP
Mar 7 01:15:13.738536 systemd-networkd[1862]: lo: Gained carrier
Mar 7 01:15:13.739127 systemd-resolved[1864]: Defaulting to hostname 'linux'.
Mar 7 01:15:13.740546 systemd-networkd[1862]: Enumeration completed
Mar 7 01:15:13.740813 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 01:15:13.741617 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:15:13.742247 systemd-networkd[1862]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:15:13.742253 systemd-networkd[1862]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 01:15:13.742474 systemd[1]: Reached target network.target - Network.
Mar 7 01:15:13.743628 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:15:13.747876 systemd-networkd[1862]: eth0: Link UP
Mar 7 01:15:13.748164 systemd-networkd[1862]: eth0: Gained carrier
Mar 7 01:15:13.748197 systemd-networkd[1862]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:15:13.760080 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 7 01:15:13.761043 systemd-networkd[1862]: eth0: DHCPv4 address 172.31.20.242/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 7 01:15:13.761221 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 7 01:15:13.763518 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:15:13.765056 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 01:15:13.766015 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 7 01:15:13.766676 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 7 01:15:13.767515 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 7 01:15:13.768244 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 7 01:15:13.768833 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 7 01:15:13.769417 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 7 01:15:13.769472 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:15:13.770022 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:15:13.771310 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 7 01:15:13.773681 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 7 01:15:13.780238 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 7 01:15:13.782437 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 7 01:15:13.784086 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 7 01:15:13.784911 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:15:13.785568 systemd[1]: Reached target basic.target - Basic System.
Mar 7 01:15:13.786237 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 7 01:15:13.786277 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 7 01:15:13.789890 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 7 01:15:13.799806 lvm[1943]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 01:15:13.804508 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 7 01:15:13.808366 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 7 01:15:13.816933 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 7 01:15:13.820982 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 7 01:15:13.821658 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 7 01:15:13.827045 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 7 01:15:13.838968 systemd[1]: Started ntpd.service - Network Time Service.
Mar 7 01:15:13.855948 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 7 01:15:13.868693 systemd[1]: Starting setup-oem.service - Setup OEM...
Mar 7 01:15:13.873042 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 7 01:15:13.877918 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 7 01:15:13.895528 jq[1947]: false
Mar 7 01:15:13.898528 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 7 01:15:13.899679 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 7 01:15:13.901030 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 7 01:15:13.904011 systemd[1]: Starting update-engine.service - Update Engine...
Mar 7 01:15:13.912887 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 7 01:15:13.924834 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 7 01:15:13.926719 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 7 01:15:13.927034 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 7 01:15:13.931319 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 7 01:15:13.931607 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 7 01:15:13.950662 (ntainerd)[1963]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 7 01:15:13.998413 jq[1959]: true
Mar 7 01:15:14.029693 ntpd[1950]: ntpd 4.2.8p17@1.4004-o Fri Mar 6 22:16:32 UTC 2026 (1): Starting
Mar 7 01:15:14.035457 ntpd[1950]: 7 Mar 01:15:14 ntpd[1950]: ntpd 4.2.8p17@1.4004-o Fri Mar 6 22:16:32 UTC 2026 (1): Starting
Mar 7 01:15:14.035457 ntpd[1950]: 7 Mar 01:15:14 ntpd[1950]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 7 01:15:14.035457 ntpd[1950]: 7 Mar 01:15:14 ntpd[1950]: ----------------------------------------------------
Mar 7 01:15:14.035457 ntpd[1950]: 7 Mar 01:15:14 ntpd[1950]: ntp-4 is maintained by Network Time Foundation,
Mar 7 01:15:14.035457 ntpd[1950]: 7 Mar 01:15:14 ntpd[1950]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 7 01:15:14.035457 ntpd[1950]: 7 Mar 01:15:14 ntpd[1950]: corporation. Support and training for ntp-4 are
Mar 7 01:15:14.035457 ntpd[1950]: 7 Mar 01:15:14 ntpd[1950]: available at https://www.nwtime.org/support
Mar 7 01:15:14.035457 ntpd[1950]: 7 Mar 01:15:14 ntpd[1950]: ----------------------------------------------------
Mar 7 01:15:14.029728 ntpd[1950]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 7 01:15:14.030039 ntpd[1950]: ----------------------------------------------------
Mar 7 01:15:14.030050 ntpd[1950]: ntp-4 is maintained by Network Time Foundation,
Mar 7 01:15:14.030062 ntpd[1950]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 7 01:15:14.030072 ntpd[1950]: corporation. Support and training for ntp-4 are
Mar 7 01:15:14.030082 ntpd[1950]: available at https://www.nwtime.org/support
Mar 7 01:15:14.030091 ntpd[1950]: ----------------------------------------------------
Mar 7 01:15:14.051076 ntpd[1950]: proto: precision = 0.075 usec (-24)
Mar 7 01:15:14.054512 ntpd[1950]: 7 Mar 01:15:14 ntpd[1950]: proto: precision = 0.075 usec (-24)
Mar 7 01:15:14.055441 ntpd[1950]: basedate set to 2026-02-22
Mar 7 01:15:14.055906 ntpd[1950]: 7 Mar 01:15:14 ntpd[1950]: basedate set to 2026-02-22
Mar 7 01:15:14.055906 ntpd[1950]: 7 Mar 01:15:14 ntpd[1950]: gps base set to 2026-02-22 (week 2407)
Mar 7 01:15:14.055469 ntpd[1950]: gps base set to 2026-02-22 (week 2407)
Mar 7 01:15:14.063870 extend-filesystems[1948]: Found loop4
Mar 7 01:15:14.063870 extend-filesystems[1948]: Found loop5
Mar 7 01:15:14.063870 extend-filesystems[1948]: Found loop6
Mar 7 01:15:14.063870 extend-filesystems[1948]: Found loop7
Mar 7 01:15:14.063870 extend-filesystems[1948]: Found nvme0n1
Mar 7 01:15:14.063870 extend-filesystems[1948]: Found nvme0n1p1
Mar 7 01:15:14.063870 extend-filesystems[1948]: Found nvme0n1p2
Mar 7 01:15:14.063870 extend-filesystems[1948]: Found nvme0n1p3
Mar 7 01:15:14.063870 extend-filesystems[1948]: Found usr
Mar 7 01:15:14.063870 extend-filesystems[1948]: Found nvme0n1p4
Mar 7 01:15:14.063870 extend-filesystems[1948]: Found nvme0n1p6
Mar 7 01:15:14.063870 extend-filesystems[1948]: Found nvme0n1p7
Mar 7 01:15:14.063870 extend-filesystems[1948]: Found nvme0n1p9
Mar 7 01:15:14.063870 extend-filesystems[1948]: Checking size of /dev/nvme0n1p9
Mar 7 01:15:14.069718 ntpd[1950]: Listen and drop on 0 v6wildcard [::]:123
Mar 7 01:15:14.132865 ntpd[1950]: 7 Mar 01:15:14 ntpd[1950]: Listen and drop on 0 v6wildcard [::]:123
Mar 7 01:15:14.132865 ntpd[1950]: 7 Mar 01:15:14 ntpd[1950]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 7 01:15:14.132865 ntpd[1950]: 7 Mar 01:15:14 ntpd[1950]: Listen normally on 2 lo 127.0.0.1:123
Mar 7 01:15:14.132865 ntpd[1950]: 7 Mar 01:15:14 ntpd[1950]: Listen normally on 3 eth0 172.31.20.242:123
Mar 7 01:15:14.132865 ntpd[1950]: 7 Mar 01:15:14 ntpd[1950]: Listen normally on 4 lo [::1]:123
Mar 7 01:15:14.132865 ntpd[1950]: 7 Mar 01:15:14 ntpd[1950]: bind(21) AF_INET6 fe80::4e1:e1ff:fe27:5181%2#123 flags 0x11 failed: Cannot assign requested address
Mar 7 01:15:14.132865 ntpd[1950]: 7 Mar 01:15:14 ntpd[1950]: unable to create socket on eth0 (5) for fe80::4e1:e1ff:fe27:5181%2#123
Mar 7 01:15:14.132865 ntpd[1950]: 7 Mar 01:15:14 ntpd[1950]: failed to init interface for address fe80::4e1:e1ff:fe27:5181%2
Mar 7 01:15:14.132865 ntpd[1950]: 7 Mar 01:15:14 ntpd[1950]: Listening on routing socket on fd #21 for interface updates
Mar 7 01:15:14.132865 ntpd[1950]: 7 Mar 01:15:14 ntpd[1950]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 7 01:15:14.132865 ntpd[1950]: 7 Mar 01:15:14 ntpd[1950]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 7 01:15:14.133255 update_engine[1958]: I20260307 01:15:14.106837 1958 main.cc:92] Flatcar Update Engine starting
Mar 7 01:15:14.133255 update_engine[1958]: I20260307 01:15:14.114439 1958 update_check_scheduler.cc:74] Next update check in 2m2s
Mar 7 01:15:14.133555 tar[1961]: linux-amd64/LICENSE
Mar 7 01:15:14.133555 tar[1961]: linux-amd64/helm
Mar 7 01:15:14.077866 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 7 01:15:14.069807 ntpd[1950]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 7 01:15:14.084098 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 7 01:15:14.070023 ntpd[1950]: Listen normally on 2 lo 127.0.0.1:123
Mar 7 01:15:14.084137 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 7 01:15:14.070063 ntpd[1950]: Listen normally on 3 eth0 172.31.20.242:123
Mar 7 01:15:14.084785 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 7 01:15:14.070118 ntpd[1950]: Listen normally on 4 lo [::1]:123
Mar 7 01:15:14.084814 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 7 01:15:14.070168 ntpd[1950]: bind(21) AF_INET6 fe80::4e1:e1ff:fe27:5181%2#123 flags 0x11 failed: Cannot assign requested address
Mar 7 01:15:14.116234 systemd[1]: motdgen.service: Deactivated successfully.
Mar 7 01:15:14.070194 ntpd[1950]: unable to create socket on eth0 (5) for fe80::4e1:e1ff:fe27:5181%2#123
Mar 7 01:15:14.116830 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 7 01:15:14.070213 ntpd[1950]: failed to init interface for address fe80::4e1:e1ff:fe27:5181%2
Mar 7 01:15:14.124500 systemd[1]: Started update-engine.service - Update Engine.
Mar 7 01:15:14.070250 ntpd[1950]: Listening on routing socket on fd #21 for interface updates
Mar 7 01:15:14.141172 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Mar 7 01:15:14.076821 dbus-daemon[1946]: [system] SELinux support is enabled
Mar 7 01:15:14.147974 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 7 01:15:14.159460 extend-filesystems[1948]: Resized partition /dev/nvme0n1p9
Mar 7 01:15:14.086713 ntpd[1950]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 7 01:15:14.164284 jq[1979]: true
Mar 7 01:15:14.094802 ntpd[1950]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 7 01:15:14.183201 extend-filesystems[1999]: resize2fs 1.47.1 (20-May-2024)
Mar 7 01:15:14.209352 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Mar 7 01:15:14.111318 dbus-daemon[1946]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1862 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Mar 7 01:15:14.207177 systemd[1]: Finished setup-oem.service - Setup OEM.
Mar 7 01:15:14.310778 coreos-metadata[1945]: Mar 07 01:15:14.309 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Mar 7 01:15:14.317636 coreos-metadata[1945]: Mar 07 01:15:14.314 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Mar 7 01:15:14.348776 coreos-metadata[1945]: Mar 07 01:15:14.346 INFO Fetch successful
Mar 7 01:15:14.348776 coreos-metadata[1945]: Mar 07 01:15:14.347 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Mar 7 01:15:14.349996 coreos-metadata[1945]: Mar 07 01:15:14.349 INFO Fetch successful
Mar 7 01:15:14.350090 coreos-metadata[1945]: Mar 07 01:15:14.350 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Mar 7 01:15:14.353107 coreos-metadata[1945]: Mar 07 01:15:14.352 INFO Fetch successful
Mar 7 01:15:14.353107 coreos-metadata[1945]: Mar 07 01:15:14.352 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Mar 7 01:15:14.356792 coreos-metadata[1945]: Mar 07 01:15:14.356 INFO Fetch successful
Mar 7 01:15:14.356792 coreos-metadata[1945]: Mar 07 01:15:14.356 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Mar 7 01:15:14.360476 coreos-metadata[1945]: Mar 07 01:15:14.359 INFO Fetch failed with 404: resource not found
Mar 7 01:15:14.360476 coreos-metadata[1945]: Mar 07 01:15:14.359 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Mar 7 01:15:14.361093 coreos-metadata[1945]: Mar 07 01:15:14.361 INFO Fetch successful
Mar 7 01:15:14.361093 coreos-metadata[1945]: Mar 07 01:15:14.361 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Mar 7 01:15:14.364072 coreos-metadata[1945]: Mar 07 01:15:14.362 INFO Fetch successful
Mar 7 01:15:14.364072 coreos-metadata[1945]: Mar 07 01:15:14.362 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Mar 7 01:15:14.367140 coreos-metadata[1945]: Mar 07 01:15:14.366 INFO Fetch successful
Mar 7 01:15:14.367140 coreos-metadata[1945]: Mar 07 01:15:14.367 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Mar 7 01:15:14.373032 coreos-metadata[1945]: Mar 07 01:15:14.370 INFO Fetch successful
Mar 7 01:15:14.373148 coreos-metadata[1945]: Mar 07 01:15:14.373 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Mar 7 01:15:14.375173 coreos-metadata[1945]: Mar 07 01:15:14.374 INFO Fetch successful
Mar 7 01:15:14.379515 systemd-logind[1957]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 7 01:15:14.380384 systemd-logind[1957]: Watching system buttons on /dev/input/event2 (Sleep Button)
Mar 7 01:15:14.380516 systemd-logind[1957]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 7 01:15:14.381267 systemd-logind[1957]: New seat seat0.
Mar 7 01:15:14.382772 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 7 01:15:14.393870 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1728)
Mar 7 01:15:14.433202 dbus-daemon[1946]: [system] Successfully activated service 'org.freedesktop.hostname1'
Mar 7 01:15:14.442120 dbus-daemon[1946]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1995 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Mar 7 01:15:14.444006 bash[2026]: Updated "/home/core/.ssh/authorized_keys"
Mar 7 01:15:14.436723 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Mar 7 01:15:14.440848 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 7 01:15:14.457153 systemd[1]: Starting polkit.service - Authorization Manager...
Mar 7 01:15:14.466132 systemd[1]: Starting sshkeys.service...
Mar 7 01:15:14.472755 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Mar 7 01:15:14.498999 extend-filesystems[1999]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Mar 7 01:15:14.498999 extend-filesystems[1999]: old_desc_blocks = 1, new_desc_blocks = 2
Mar 7 01:15:14.498999 extend-filesystems[1999]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Mar 7 01:15:14.501249 extend-filesystems[1948]: Resized filesystem in /dev/nvme0n1p9
Mar 7 01:15:14.501087 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 7 01:15:14.501356 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 7 01:15:14.518683 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 7 01:15:14.520878 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 7 01:15:14.558129 polkitd[2040]: Started polkitd version 121
Mar 7 01:15:14.559311 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 7 01:15:14.571429 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 7 01:15:14.592691 polkitd[2040]: Loading rules from directory /etc/polkit-1/rules.d
Mar 7 01:15:14.598369 polkitd[2040]: Loading rules from directory /usr/share/polkit-1/rules.d
Mar 7 01:15:14.600828 polkitd[2040]: Finished loading, compiling and executing 2 rules
Mar 7 01:15:14.609056 dbus-daemon[1946]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Mar 7 01:15:14.609277 systemd[1]: Started polkit.service - Authorization Manager.
Mar 7 01:15:14.613263 polkitd[2040]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Mar 7 01:15:14.670618 locksmithd[1996]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 7 01:15:14.674428 containerd[1963]: time="2026-03-07T01:15:14.671279998Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 7 01:15:14.679926 systemd-hostnamed[1995]: Hostname set to (transient)
Mar 7 01:15:14.680917 systemd-resolved[1864]: System hostname changed to 'ip-172-31-20-242'.
Mar 7 01:15:14.813943 containerd[1963]: time="2026-03-07T01:15:14.813657720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:15:14.815859 containerd[1963]: time="2026-03-07T01:15:14.815804814Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:15:14.816966 containerd[1963]: time="2026-03-07T01:15:14.815976908Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 7 01:15:14.816966 containerd[1963]: time="2026-03-07T01:15:14.816005010Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 7 01:15:14.816966 containerd[1963]: time="2026-03-07T01:15:14.816188569Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 7 01:15:14.816966 containerd[1963]: time="2026-03-07T01:15:14.816209982Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 7 01:15:14.816966 containerd[1963]: time="2026-03-07T01:15:14.816277981Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:15:14.816966 containerd[1963]: time="2026-03-07T01:15:14.816322692Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:15:14.822163 coreos-metadata[2064]: Mar 07 01:15:14.818 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Mar 7 01:15:14.822546 containerd[1963]: time="2026-03-07T01:15:14.820069755Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:15:14.822546 containerd[1963]: time="2026-03-07T01:15:14.820109039Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 7 01:15:14.822546 containerd[1963]: time="2026-03-07T01:15:14.820134859Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:15:14.822546 containerd[1963]: time="2026-03-07T01:15:14.820150151Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 7 01:15:14.822546 containerd[1963]: time="2026-03-07T01:15:14.820290848Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:15:14.822546 containerd[1963]: time="2026-03-07T01:15:14.820536745Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:15:14.824095 coreos-metadata[2064]: Mar 07 01:15:14.823 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Mar 7 01:15:14.824200 containerd[1963]: time="2026-03-07T01:15:14.823497441Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:15:14.824200 containerd[1963]: time="2026-03-07T01:15:14.823535256Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 7 01:15:14.824200 containerd[1963]: time="2026-03-07T01:15:14.823682665Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 7 01:15:14.824200 containerd[1963]: time="2026-03-07T01:15:14.823752978Z" level=info msg="metadata content store policy set" policy=shared
Mar 7 01:15:14.824663 coreos-metadata[2064]: Mar 07 01:15:14.824 INFO Fetch successful
Mar 7 01:15:14.824663 coreos-metadata[2064]: Mar 07 01:15:14.824 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Mar 7 01:15:14.826881 coreos-metadata[2064]: Mar 07 01:15:14.826 INFO Fetch successful
Mar 7 01:15:14.828676 unknown[2064]: wrote ssh authorized keys file for user: core
Mar 7 01:15:14.834758 containerd[1963]: time="2026-03-07T01:15:14.831666573Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 7 01:15:14.834758 containerd[1963]: time="2026-03-07T01:15:14.831766560Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 7 01:15:14.834758 containerd[1963]: time="2026-03-07T01:15:14.831792232Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 7 01:15:14.834758 containerd[1963]: time="2026-03-07T01:15:14.831824061Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 7 01:15:14.834758 containerd[1963]: time="2026-03-07T01:15:14.831853852Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 7 01:15:14.834758 containerd[1963]: time="2026-03-07T01:15:14.832033777Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 7 01:15:14.834758 containerd[1963]: time="2026-03-07T01:15:14.832356855Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 7 01:15:14.834758 containerd[1963]: time="2026-03-07T01:15:14.832481465Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 7 01:15:14.834758 containerd[1963]: time="2026-03-07T01:15:14.832504152Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 7 01:15:14.834758 containerd[1963]: time="2026-03-07T01:15:14.832525229Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 7 01:15:14.834758 containerd[1963]: time="2026-03-07T01:15:14.832546083Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 7 01:15:14.834758 containerd[1963]: time="2026-03-07T01:15:14.832570760Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 7 01:15:14.834758 containerd[1963]: time="2026-03-07T01:15:14.832589103Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 7 01:15:14.834758 containerd[1963]: time="2026-03-07T01:15:14.832610026Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 7 01:15:14.835319 containerd[1963]: time="2026-03-07T01:15:14.832630175Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 7 01:15:14.835319 containerd[1963]: time="2026-03-07T01:15:14.832649391Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 7 01:15:14.835319 containerd[1963]: time="2026-03-07T01:15:14.832669221Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 7 01:15:14.835319 containerd[1963]: time="2026-03-07T01:15:14.832686006Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 7 01:15:14.835319 containerd[1963]: time="2026-03-07T01:15:14.832714109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 7 01:15:14.835319 containerd[1963]: time="2026-03-07T01:15:14.832754588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 7 01:15:14.835319 containerd[1963]: time="2026-03-07T01:15:14.832775288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 7 01:15:14.835319 containerd[1963]: time="2026-03-07T01:15:14.832796724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 7 01:15:14.835319 containerd[1963]: time="2026-03-07T01:15:14.832814807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 7 01:15:14.835319 containerd[1963]: time="2026-03-07T01:15:14.832855208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 7 01:15:14.835319 containerd[1963]: time="2026-03-07T01:15:14.832876241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 7 01:15:14.835319 containerd[1963]: time="2026-03-07T01:15:14.832893812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 7 01:15:14.835319 containerd[1963]: time="2026-03-07T01:15:14.832913996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 7 01:15:14.835319 containerd[1963]: time="2026-03-07T01:15:14.832935006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 7 01:15:14.835816 containerd[1963]: time="2026-03-07T01:15:14.832950965Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 7 01:15:14.835816 containerd[1963]: time="2026-03-07T01:15:14.832968809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 7 01:15:14.835816 containerd[1963]: time="2026-03-07T01:15:14.832992939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 7 01:15:14.835816 containerd[1963]: time="2026-03-07T01:15:14.833014813Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 7 01:15:14.835816 containerd[1963]: time="2026-03-07T01:15:14.833045831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 7 01:15:14.835816 containerd[1963]: time="2026-03-07T01:15:14.833061069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 7 01:15:14.835816 containerd[1963]: time="2026-03-07T01:15:14.833078024Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 7 01:15:14.835816 containerd[1963]: time="2026-03-07T01:15:14.833140572Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 7 01:15:14.835816 containerd[1963]: time="2026-03-07T01:15:14.833163082Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 7 01:15:14.835816 containerd[1963]: time="2026-03-07T01:15:14.833180243Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 7 01:15:14.835816 containerd[1963]: time="2026-03-07T01:15:14.833199179Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..."
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 7 01:15:14.835816 containerd[1963]: time="2026-03-07T01:15:14.833214021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 7 01:15:14.835816 containerd[1963]: time="2026-03-07T01:15:14.833231982Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 7 01:15:14.835816 containerd[1963]: time="2026-03-07T01:15:14.833272950Z" level=info msg="NRI interface is disabled by configuration." Mar 7 01:15:14.836282 containerd[1963]: time="2026-03-07T01:15:14.833293194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 7 01:15:14.838783 sshd_keygen[1976]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 7 01:15:14.845329 containerd[1963]: time="2026-03-07T01:15:14.833722900Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true 
DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 7 01:15:14.845329 containerd[1963]: time="2026-03-07T01:15:14.844576643Z" level=info msg="Connect containerd service" Mar 7 01:15:14.845329 containerd[1963]: time="2026-03-07T01:15:14.844647656Z" level=info msg="using legacy CRI server" Mar 7 01:15:14.845329 containerd[1963]: time="2026-03-07T01:15:14.844660989Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 7 01:15:14.845329 containerd[1963]: time="2026-03-07T01:15:14.844839369Z" level=info msg="Get image filesystem path 
\"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 7 01:15:14.851911 containerd[1963]: time="2026-03-07T01:15:14.851684586Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 7 01:15:14.854395 containerd[1963]: time="2026-03-07T01:15:14.854316283Z" level=info msg="Start subscribing containerd event" Mar 7 01:15:14.855206 containerd[1963]: time="2026-03-07T01:15:14.854554733Z" level=info msg="Start recovering state" Mar 7 01:15:14.855206 containerd[1963]: time="2026-03-07T01:15:14.854654890Z" level=info msg="Start event monitor" Mar 7 01:15:14.855206 containerd[1963]: time="2026-03-07T01:15:14.854677644Z" level=info msg="Start snapshots syncer" Mar 7 01:15:14.855206 containerd[1963]: time="2026-03-07T01:15:14.854690917Z" level=info msg="Start cni network conf syncer for default" Mar 7 01:15:14.855206 containerd[1963]: time="2026-03-07T01:15:14.854702497Z" level=info msg="Start streaming server" Mar 7 01:15:14.863137 containerd[1963]: time="2026-03-07T01:15:14.859381884Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 7 01:15:14.863137 containerd[1963]: time="2026-03-07T01:15:14.859529663Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 7 01:15:14.859782 systemd[1]: Started containerd.service - containerd container runtime. Mar 7 01:15:14.877442 containerd[1963]: time="2026-03-07T01:15:14.877102967Z" level=info msg="containerd successfully booted in 0.206894s" Mar 7 01:15:14.884243 systemd-networkd[1862]: eth0: Gained IPv6LL Mar 7 01:15:14.889320 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 7 01:15:14.892254 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Mar 7 01:15:14.900449 update-ssh-keys[2133]: Updated "/home/core/.ssh/authorized_keys" Mar 7 01:15:14.913296 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 7 01:15:14.928234 systemd[1]: Finished sshkeys.service. Mar 7 01:15:14.959531 systemd[1]: Reached target network-online.target - Network is Online. Mar 7 01:15:14.968241 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Mar 7 01:15:14.974688 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 7 01:15:14.985134 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:15:14.994142 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 7 01:15:15.048952 systemd[1]: issuegen.service: Deactivated successfully. Mar 7 01:15:15.052678 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 7 01:15:15.066109 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 7 01:15:15.067392 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 7 01:15:15.103116 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 7 01:15:15.113351 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 7 01:15:15.125178 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 7 01:15:15.126407 systemd[1]: Reached target getty.target - Login Prompts. Mar 7 01:15:15.141342 amazon-ssm-agent[2158]: Initializing new seelog logger Mar 7 01:15:15.141342 amazon-ssm-agent[2158]: New Seelog Logger Creation Complete Mar 7 01:15:15.142244 amazon-ssm-agent[2158]: 2026/03/07 01:15:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 7 01:15:15.142244 amazon-ssm-agent[2158]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Mar 7 01:15:15.143867 amazon-ssm-agent[2158]: 2026/03/07 01:15:15 processing appconfig overrides Mar 7 01:15:15.144941 amazon-ssm-agent[2158]: 2026/03/07 01:15:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 7 01:15:15.144941 amazon-ssm-agent[2158]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 7 01:15:15.145059 amazon-ssm-agent[2158]: 2026/03/07 01:15:15 processing appconfig overrides Mar 7 01:15:15.145709 amazon-ssm-agent[2158]: 2026/03/07 01:15:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 7 01:15:15.145709 amazon-ssm-agent[2158]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 7 01:15:15.145709 amazon-ssm-agent[2158]: 2026/03/07 01:15:15 processing appconfig overrides Mar 7 01:15:15.146820 amazon-ssm-agent[2158]: 2026-03-07 01:15:15 INFO Proxy environment variables: Mar 7 01:15:15.150152 amazon-ssm-agent[2158]: 2026/03/07 01:15:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 7 01:15:15.150152 amazon-ssm-agent[2158]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 7 01:15:15.150312 amazon-ssm-agent[2158]: 2026/03/07 01:15:15 processing appconfig overrides Mar 7 01:15:15.247303 amazon-ssm-agent[2158]: 2026-03-07 01:15:15 INFO https_proxy: Mar 7 01:15:15.346345 amazon-ssm-agent[2158]: 2026-03-07 01:15:15 INFO http_proxy: Mar 7 01:15:15.443617 amazon-ssm-agent[2158]: 2026-03-07 01:15:15 INFO no_proxy: Mar 7 01:15:15.542224 amazon-ssm-agent[2158]: 2026-03-07 01:15:15 INFO Checking if agent identity type OnPrem can be assumed Mar 7 01:15:15.640989 amazon-ssm-agent[2158]: 2026-03-07 01:15:15 INFO Checking if agent identity type EC2 can be assumed Mar 7 01:15:15.666786 tar[1961]: linux-amd64/README.md Mar 7 01:15:15.681312 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Mar 7 01:15:15.691797 amazon-ssm-agent[2158]: 2026-03-07 01:15:15 INFO Agent will take identity from EC2 Mar 7 01:15:15.691797 amazon-ssm-agent[2158]: 2026-03-07 01:15:15 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 7 01:15:15.691797 amazon-ssm-agent[2158]: 2026-03-07 01:15:15 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 7 01:15:15.691797 amazon-ssm-agent[2158]: 2026-03-07 01:15:15 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 7 01:15:15.692550 amazon-ssm-agent[2158]: 2026-03-07 01:15:15 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Mar 7 01:15:15.692550 amazon-ssm-agent[2158]: 2026-03-07 01:15:15 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Mar 7 01:15:15.692550 amazon-ssm-agent[2158]: 2026-03-07 01:15:15 INFO [amazon-ssm-agent] Starting Core Agent Mar 7 01:15:15.692550 amazon-ssm-agent[2158]: 2026-03-07 01:15:15 INFO [amazon-ssm-agent] registrar detected. Attempting registration Mar 7 01:15:15.692550 amazon-ssm-agent[2158]: 2026-03-07 01:15:15 INFO [Registrar] Starting registrar module Mar 7 01:15:15.692550 amazon-ssm-agent[2158]: 2026-03-07 01:15:15 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Mar 7 01:15:15.692550 amazon-ssm-agent[2158]: 2026-03-07 01:15:15 INFO [EC2Identity] EC2 registration was successful. Mar 7 01:15:15.692550 amazon-ssm-agent[2158]: 2026-03-07 01:15:15 INFO [CredentialRefresher] credentialRefresher has started Mar 7 01:15:15.692550 amazon-ssm-agent[2158]: 2026-03-07 01:15:15 INFO [CredentialRefresher] Starting credentials refresher loop Mar 7 01:15:15.692550 amazon-ssm-agent[2158]: 2026-03-07 01:15:15 INFO EC2RoleProvider Successfully connected with instance profile role credentials Mar 7 01:15:15.740173 amazon-ssm-agent[2158]: 2026-03-07 01:15:15 INFO [CredentialRefresher] Next credential rotation will be in 30.058244720683334 minutes Mar 7 01:15:16.492518 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 01:15:16.494246 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 7 01:15:16.496900 systemd[1]: Startup finished in 1.795s (kernel) + 6.840s (initrd) + 6.668s (userspace) = 15.305s. Mar 7 01:15:16.504847 (kubelet)[2192]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:15:16.712089 amazon-ssm-agent[2158]: 2026-03-07 01:15:16 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Mar 7 01:15:16.813820 amazon-ssm-agent[2158]: 2026-03-07 01:15:16 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2202) started Mar 7 01:15:16.912904 amazon-ssm-agent[2158]: 2026-03-07 01:15:16 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Mar 7 01:15:17.030611 ntpd[1950]: Listen normally on 6 eth0 [fe80::4e1:e1ff:fe27:5181%2]:123 Mar 7 01:15:17.031102 ntpd[1950]: 7 Mar 01:15:17 ntpd[1950]: Listen normally on 6 eth0 [fe80::4e1:e1ff:fe27:5181%2]:123 Mar 7 01:15:17.278312 kubelet[2192]: E0307 01:15:17.278126 2192 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:15:17.281196 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:15:17.281434 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:15:17.282117 systemd[1]: kubelet.service: Consumed 1.111s CPU time. Mar 7 01:15:17.381918 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
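The kubelet exit above (`open /var/lib/kubelet/config.yaml: no such file or directory`) is likewise the normal pre-join state: that file is generated by `kubeadm init` or `kubeadm join`, so the unit crash-loops until the node is bootstrapped. As a sketch only — the field values below are illustrative defaults, not this node's eventual config — a minimal `KubeletConfiguration` of the kind kubeadm writes there looks like:

```yaml
# /var/lib/kubelet/config.yaml (hypothetical minimal example)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# "systemd" matches the SystemdCgroup:true runc option in the
# containerd CRI config dumped earlier in this log
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
authentication:
  anonymous:
    enabled: false
```

Until that file exists, the `Scheduled restart job` entries seen later in this log will keep recurring.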
Mar 7 01:15:17.391212 systemd[1]: Started sshd@0-172.31.20.242:22-68.220.241.50:46760.service - OpenSSH per-connection server daemon (68.220.241.50:46760). Mar 7 01:15:17.880677 sshd[2216]: Accepted publickey for core from 68.220.241.50 port 46760 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:15:17.882081 sshd[2216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:15:17.894219 systemd-logind[1957]: New session 1 of user core. Mar 7 01:15:17.896138 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 7 01:15:17.901489 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 7 01:15:17.917826 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 7 01:15:17.926118 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 7 01:15:17.930594 (systemd)[2220]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 7 01:15:18.050386 systemd[2220]: Queued start job for default target default.target. Mar 7 01:15:18.058147 systemd[2220]: Created slice app.slice - User Application Slice. Mar 7 01:15:18.058193 systemd[2220]: Reached target paths.target - Paths. Mar 7 01:15:18.058215 systemd[2220]: Reached target timers.target - Timers. Mar 7 01:15:18.059897 systemd[2220]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 7 01:15:18.079965 systemd[2220]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 7 01:15:18.080139 systemd[2220]: Reached target sockets.target - Sockets. Mar 7 01:15:18.080173 systemd[2220]: Reached target basic.target - Basic System. Mar 7 01:15:18.080221 systemd[2220]: Reached target default.target - Main User Target. Mar 7 01:15:18.080251 systemd[2220]: Startup finished in 142ms. Mar 7 01:15:18.080655 systemd[1]: Started user@500.service - User Manager for UID 500. 
Mar 7 01:15:18.087043 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 7 01:15:18.451143 systemd[1]: Started sshd@1-172.31.20.242:22-68.220.241.50:46762.service - OpenSSH per-connection server daemon (68.220.241.50:46762). Mar 7 01:15:18.930563 sshd[2231]: Accepted publickey for core from 68.220.241.50 port 46762 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:15:18.932157 sshd[2231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:15:18.937028 systemd-logind[1957]: New session 2 of user core. Mar 7 01:15:18.948021 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 7 01:15:19.279793 sshd[2231]: pam_unix(sshd:session): session closed for user core Mar 7 01:15:19.283980 systemd[1]: sshd@1-172.31.20.242:22-68.220.241.50:46762.service: Deactivated successfully. Mar 7 01:15:19.286013 systemd[1]: session-2.scope: Deactivated successfully. Mar 7 01:15:19.287682 systemd-logind[1957]: Session 2 logged out. Waiting for processes to exit. Mar 7 01:15:19.289334 systemd-logind[1957]: Removed session 2. Mar 7 01:15:19.378203 systemd[1]: Started sshd@2-172.31.20.242:22-68.220.241.50:46764.service - OpenSSH per-connection server daemon (68.220.241.50:46764). Mar 7 01:15:19.868452 sshd[2238]: Accepted publickey for core from 68.220.241.50 port 46764 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:15:19.869921 sshd[2238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:15:19.875468 systemd-logind[1957]: New session 3 of user core. Mar 7 01:15:19.884008 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 7 01:15:20.219658 sshd[2238]: pam_unix(sshd:session): session closed for user core Mar 7 01:15:20.225006 systemd[1]: sshd@2-172.31.20.242:22-68.220.241.50:46764.service: Deactivated successfully. Mar 7 01:15:20.227306 systemd[1]: session-3.scope: Deactivated successfully. 
Mar 7 01:15:20.228346 systemd-logind[1957]: Session 3 logged out. Waiting for processes to exit. Mar 7 01:15:20.229936 systemd-logind[1957]: Removed session 3. Mar 7 01:15:20.311121 systemd[1]: Started sshd@3-172.31.20.242:22-68.220.241.50:46780.service - OpenSSH per-connection server daemon (68.220.241.50:46780). Mar 7 01:15:20.800492 sshd[2245]: Accepted publickey for core from 68.220.241.50 port 46780 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:15:20.802066 sshd[2245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:15:20.807594 systemd-logind[1957]: New session 4 of user core. Mar 7 01:15:20.814001 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 7 01:15:21.331847 systemd-resolved[1864]: Clock change detected. Flushing caches. Mar 7 01:15:21.456665 sshd[2245]: pam_unix(sshd:session): session closed for user core Mar 7 01:15:21.460384 systemd[1]: sshd@3-172.31.20.242:22-68.220.241.50:46780.service: Deactivated successfully. Mar 7 01:15:21.462556 systemd[1]: session-4.scope: Deactivated successfully. Mar 7 01:15:21.464292 systemd-logind[1957]: Session 4 logged out. Waiting for processes to exit. Mar 7 01:15:21.465550 systemd-logind[1957]: Removed session 4. Mar 7 01:15:21.550386 systemd[1]: Started sshd@4-172.31.20.242:22-68.220.241.50:46794.service - OpenSSH per-connection server daemon (68.220.241.50:46794). Mar 7 01:15:22.028812 sshd[2252]: Accepted publickey for core from 68.220.241.50 port 46794 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:15:22.029474 sshd[2252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:15:22.035913 systemd-logind[1957]: New session 5 of user core. Mar 7 01:15:22.041277 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 7 01:15:22.315269 sudo[2255]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 7 01:15:22.315706 sudo[2255]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:15:22.328754 sudo[2255]: pam_unix(sudo:session): session closed for user root Mar 7 01:15:22.405837 sshd[2252]: pam_unix(sshd:session): session closed for user core Mar 7 01:15:22.410609 systemd[1]: sshd@4-172.31.20.242:22-68.220.241.50:46794.service: Deactivated successfully. Mar 7 01:15:22.413153 systemd[1]: session-5.scope: Deactivated successfully. Mar 7 01:15:22.413951 systemd-logind[1957]: Session 5 logged out. Waiting for processes to exit. Mar 7 01:15:22.415439 systemd-logind[1957]: Removed session 5. Mar 7 01:15:22.492955 systemd[1]: Started sshd@5-172.31.20.242:22-68.220.241.50:47882.service - OpenSSH per-connection server daemon (68.220.241.50:47882). Mar 7 01:15:22.983218 sshd[2260]: Accepted publickey for core from 68.220.241.50 port 47882 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:15:22.984833 sshd[2260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:15:22.990481 systemd-logind[1957]: New session 6 of user core. Mar 7 01:15:22.996271 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 7 01:15:23.260390 sudo[2264]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 7 01:15:23.261244 sudo[2264]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:15:23.265274 sudo[2264]: pam_unix(sudo:session): session closed for user root Mar 7 01:15:23.271250 sudo[2263]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 7 01:15:23.271640 sudo[2263]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:15:23.285369 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 7 01:15:23.289162 auditctl[2267]: No rules Mar 7 01:15:23.289851 systemd[1]: audit-rules.service: Deactivated successfully. Mar 7 01:15:23.290142 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 7 01:15:23.301495 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 7 01:15:23.329252 augenrules[2285]: No rules Mar 7 01:15:23.331124 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 7 01:15:23.332662 sudo[2263]: pam_unix(sudo:session): session closed for user root Mar 7 01:15:23.410647 sshd[2260]: pam_unix(sshd:session): session closed for user core Mar 7 01:15:23.415202 systemd[1]: sshd@5-172.31.20.242:22-68.220.241.50:47882.service: Deactivated successfully. Mar 7 01:15:23.417420 systemd[1]: session-6.scope: Deactivated successfully. Mar 7 01:15:23.418341 systemd-logind[1957]: Session 6 logged out. Waiting for processes to exit. Mar 7 01:15:23.419480 systemd-logind[1957]: Removed session 6. Mar 7 01:15:23.498826 systemd[1]: Started sshd@6-172.31.20.242:22-68.220.241.50:47894.service - OpenSSH per-connection server daemon (68.220.241.50:47894). 
Mar 7 01:15:23.991019 sshd[2293]: Accepted publickey for core from 68.220.241.50 port 47894 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:15:23.992666 sshd[2293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:15:23.997755 systemd-logind[1957]: New session 7 of user core. Mar 7 01:15:24.006249 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 7 01:15:24.268471 sudo[2296]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 7 01:15:24.269223 sudo[2296]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:15:24.646343 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 7 01:15:24.646700 (dockerd)[2311]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 7 01:15:25.012648 dockerd[2311]: time="2026-03-07T01:15:25.012508454Z" level=info msg="Starting up" Mar 7 01:15:25.199767 dockerd[2311]: time="2026-03-07T01:15:25.199700006Z" level=info msg="Loading containers: start." Mar 7 01:15:25.326009 kernel: Initializing XFRM netlink socket Mar 7 01:15:25.357963 (udev-worker)[2334]: Network interface NamePolicy= disabled on kernel command line. Mar 7 01:15:25.423671 systemd-networkd[1862]: docker0: Link UP Mar 7 01:15:25.443609 dockerd[2311]: time="2026-03-07T01:15:25.443552412Z" level=info msg="Loading containers: done." 
Mar 7 01:15:25.463574 dockerd[2311]: time="2026-03-07T01:15:25.463513050Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 7 01:15:25.463754 dockerd[2311]: time="2026-03-07T01:15:25.463658151Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 7 01:15:25.463812 dockerd[2311]: time="2026-03-07T01:15:25.463794013Z" level=info msg="Daemon has completed initialization" Mar 7 01:15:25.498750 dockerd[2311]: time="2026-03-07T01:15:25.498529064Z" level=info msg="API listen on /run/docker.sock" Mar 7 01:15:25.498640 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 7 01:15:26.170436 containerd[1963]: time="2026-03-07T01:15:26.170388624Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 7 01:15:26.987805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2152404582.mount: Deactivated successfully. Mar 7 01:15:27.831079 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 7 01:15:27.838287 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:15:28.095196 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 01:15:28.100264 (kubelet)[2517]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:15:28.178087 kubelet[2517]: E0307 01:15:28.177934 2517 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:15:28.184908 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:15:28.185847 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:15:28.817358 containerd[1963]: time="2026-03-07T01:15:28.817307806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:15:28.819590 containerd[1963]: time="2026-03-07T01:15:28.819287203Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116186"
Mar 7 01:15:28.822514 containerd[1963]: time="2026-03-07T01:15:28.821928628Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:15:28.826428 containerd[1963]: time="2026-03-07T01:15:28.826387302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:15:28.827871 containerd[1963]: time="2026-03-07T01:15:28.827828176Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 2.657394119s"
Mar 7 01:15:28.827959 containerd[1963]: time="2026-03-07T01:15:28.827877259Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\""
Mar 7 01:15:28.828489 containerd[1963]: time="2026-03-07T01:15:28.828461750Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\""
Mar 7 01:15:30.685664 containerd[1963]: time="2026-03-07T01:15:30.685601226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:15:30.687821 containerd[1963]: time="2026-03-07T01:15:30.687755891Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021810"
Mar 7 01:15:30.692032 containerd[1963]: time="2026-03-07T01:15:30.690066320Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:15:30.694701 containerd[1963]: time="2026-03-07T01:15:30.694647328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:15:30.696799 containerd[1963]: time="2026-03-07T01:15:30.696729495Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 1.868228972s"
Mar 7 01:15:30.696799 containerd[1963]: time="2026-03-07T01:15:30.696784839Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\""
Mar 7 01:15:30.697829 containerd[1963]: time="2026-03-07T01:15:30.697789354Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\""
Mar 7 01:15:32.194358 containerd[1963]: time="2026-03-07T01:15:32.194282877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:15:32.196632 containerd[1963]: time="2026-03-07T01:15:32.196390456Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162746"
Mar 7 01:15:32.199742 containerd[1963]: time="2026-03-07T01:15:32.199091709Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:15:32.205438 containerd[1963]: time="2026-03-07T01:15:32.204939569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:15:32.206441 containerd[1963]: time="2026-03-07T01:15:32.206395679Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 1.508562884s"
Mar 7 01:15:32.206554 containerd[1963]: time="2026-03-07T01:15:32.206446679Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\""
Mar 7 01:15:32.207951 containerd[1963]: time="2026-03-07T01:15:32.207901642Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\""
Mar 7 01:15:33.296182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount123268558.mount: Deactivated successfully.
Mar 7 01:15:33.929754 containerd[1963]: time="2026-03-07T01:15:33.929682060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:15:33.930798 containerd[1963]: time="2026-03-07T01:15:33.930737203Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828647"
Mar 7 01:15:33.932125 containerd[1963]: time="2026-03-07T01:15:33.932058755Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:15:33.934667 containerd[1963]: time="2026-03-07T01:15:33.934626113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:15:33.935655 containerd[1963]: time="2026-03-07T01:15:33.935435038Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 1.727471089s"
Mar 7 01:15:33.935655 containerd[1963]: time="2026-03-07T01:15:33.935479661Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\""
Mar 7 01:15:33.936469 containerd[1963]: time="2026-03-07T01:15:33.936175309Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Mar 7 01:15:34.482206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1070283247.mount: Deactivated successfully.
Mar 7 01:15:35.756404 containerd[1963]: time="2026-03-07T01:15:35.756336819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:15:35.758511 containerd[1963]: time="2026-03-07T01:15:35.758271148Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Mar 7 01:15:35.760959 containerd[1963]: time="2026-03-07T01:15:35.760889358Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:15:35.767438 containerd[1963]: time="2026-03-07T01:15:35.767351486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:15:35.769097 containerd[1963]: time="2026-03-07T01:15:35.768781530Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.832569543s"
Mar 7 01:15:35.769097 containerd[1963]: time="2026-03-07T01:15:35.768831497Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Mar 7 01:15:35.770179 containerd[1963]: time="2026-03-07T01:15:35.770142213Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 7 01:15:36.280466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount362088391.mount: Deactivated successfully.
Mar 7 01:15:36.292552 containerd[1963]: time="2026-03-07T01:15:36.292488053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:15:36.294402 containerd[1963]: time="2026-03-07T01:15:36.294314828Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Mar 7 01:15:36.296777 containerd[1963]: time="2026-03-07T01:15:36.296705057Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:15:36.300660 containerd[1963]: time="2026-03-07T01:15:36.300590039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:15:36.305363 containerd[1963]: time="2026-03-07T01:15:36.303356651Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 533.168658ms"
Mar 7 01:15:36.305363 containerd[1963]: time="2026-03-07T01:15:36.303419964Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Mar 7 01:15:36.307260 containerd[1963]: time="2026-03-07T01:15:36.307227198Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Mar 7 01:15:36.860685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1195364985.mount: Deactivated successfully.
Mar 7 01:15:38.297993 containerd[1963]: time="2026-03-07T01:15:38.297916852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:15:38.300009 containerd[1963]: time="2026-03-07T01:15:38.299772912Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718840"
Mar 7 01:15:38.302213 containerd[1963]: time="2026-03-07T01:15:38.302150833Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:15:38.307015 containerd[1963]: time="2026-03-07T01:15:38.306698800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:15:38.308503 containerd[1963]: time="2026-03-07T01:15:38.308069484Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 2.000797606s"
Mar 7 01:15:38.308503 containerd[1963]: time="2026-03-07T01:15:38.308113632Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Mar 7 01:15:38.375336 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 7 01:15:38.383612 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:15:38.778259 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:15:38.781801 (kubelet)[2688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:15:38.836189 kubelet[2688]: E0307 01:15:38.836126 2688 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:15:38.839654 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:15:38.839876 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:15:40.907493 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:15:40.920437 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:15:40.960709 systemd[1]: Reloading requested from client PID 2702 ('systemctl') (unit session-7.scope)...
Mar 7 01:15:40.960730 systemd[1]: Reloading...
Mar 7 01:15:41.102036 zram_generator::config[2740]: No configuration found.
Mar 7 01:15:41.261782 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:15:41.353399 systemd[1]: Reloading finished in 392 ms.
Mar 7 01:15:41.414075 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 7 01:15:41.414198 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 7 01:15:41.414571 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:15:41.420477 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:15:41.615762 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:15:41.628504 (kubelet)[2806]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 7 01:15:41.684692 kubelet[2806]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 01:15:41.684692 kubelet[2806]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 7 01:15:41.684692 kubelet[2806]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 01:15:41.685258 kubelet[2806]: I0307 01:15:41.684761 2806 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 7 01:15:42.130402 kubelet[2806]: I0307 01:15:42.130338 2806 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 7 01:15:42.130402 kubelet[2806]: I0307 01:15:42.130397 2806 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 7 01:15:42.130783 kubelet[2806]: I0307 01:15:42.130754 2806 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 7 01:15:42.182414 kubelet[2806]: I0307 01:15:42.182363 2806 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 7 01:15:42.191008 kubelet[2806]: E0307 01:15:42.190021 2806 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.20.242:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.20.242:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 7 01:15:42.200707 kubelet[2806]: E0307 01:15:42.200647 2806 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 7 01:15:42.200707 kubelet[2806]: I0307 01:15:42.200701 2806 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 7 01:15:42.212324 kubelet[2806]: I0307 01:15:42.212286 2806 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 7 01:15:42.216855 kubelet[2806]: I0307 01:15:42.216782 2806 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 7 01:15:42.220415 kubelet[2806]: I0307 01:15:42.216849 2806 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-242","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 7 01:15:42.220415 kubelet[2806]: I0307 01:15:42.220422 2806 topology_manager.go:138] "Creating topology manager with none policy"
Mar 7 01:15:42.220663 kubelet[2806]: I0307 01:15:42.220441 2806 container_manager_linux.go:303] "Creating device plugin manager"
Mar 7 01:15:42.220663 kubelet[2806]: I0307 01:15:42.220628 2806 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 01:15:42.227639 kubelet[2806]: I0307 01:15:42.227578 2806 kubelet.go:480] "Attempting to sync node with API server"
Mar 7 01:15:42.227639 kubelet[2806]: I0307 01:15:42.227634 2806 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 7 01:15:42.227639 kubelet[2806]: I0307 01:15:42.227669 2806 kubelet.go:386] "Adding apiserver pod source"
Mar 7 01:15:42.230131 kubelet[2806]: I0307 01:15:42.229842 2806 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 7 01:15:42.236878 kubelet[2806]: E0307 01:15:42.235789 2806 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.20.242:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-242&limit=500&resourceVersion=0\": dial tcp 172.31.20.242:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 7 01:15:42.236878 kubelet[2806]: E0307 01:15:42.236243 2806 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.20.242:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.20.242:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 7 01:15:42.236878 kubelet[2806]: I0307 01:15:42.236356 2806 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 7 01:15:42.236878 kubelet[2806]: I0307 01:15:42.236802 2806 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 7 01:15:42.238007 kubelet[2806]: W0307 01:15:42.237821 2806 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 7 01:15:42.252514 kubelet[2806]: I0307 01:15:42.252483 2806 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 7 01:15:42.252926 kubelet[2806]: I0307 01:15:42.252723 2806 server.go:1289] "Started kubelet"
Mar 7 01:15:42.259999 kubelet[2806]: I0307 01:15:42.259941 2806 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 7 01:15:42.268305 kubelet[2806]: I0307 01:15:42.268268 2806 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 7 01:15:42.268938 kubelet[2806]: I0307 01:15:42.268721 2806 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 7 01:15:42.271357 kubelet[2806]: I0307 01:15:42.269641 2806 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 7 01:15:42.275818 kubelet[2806]: I0307 01:15:42.275791 2806 server.go:317] "Adding debug handlers to kubelet server"
Mar 7 01:15:42.278508 kubelet[2806]: E0307 01:15:42.263865 2806 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.20.242:6443/api/v1/namespaces/default/events\": dial tcp 172.31.20.242:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-20-242.189a6a2c66648e0b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-242,UID:ip-172-31-20-242,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-242,},FirstTimestamp:2026-03-07 01:15:42.252682763 +0000 UTC m=+0.616553719,LastTimestamp:2026-03-07 01:15:42.252682763 +0000 UTC m=+0.616553719,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-242,}"
Mar 7 01:15:42.282731 kubelet[2806]: I0307 01:15:42.278604 2806 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 7 01:15:42.282913 kubelet[2806]: I0307 01:15:42.282898 2806 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 7 01:15:42.283533 kubelet[2806]: E0307 01:15:42.283326 2806 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 7 01:15:42.285054 kubelet[2806]: E0307 01:15:42.283770 2806 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-242\" not found"
Mar 7 01:15:42.285054 kubelet[2806]: I0307 01:15:42.283853 2806 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 7 01:15:42.285054 kubelet[2806]: I0307 01:15:42.283912 2806 reconciler.go:26] "Reconciler: start to sync state"
Mar 7 01:15:42.285054 kubelet[2806]: E0307 01:15:42.284887 2806 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.20.242:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.20.242:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 7 01:15:42.290228 kubelet[2806]: I0307 01:15:42.285419 2806 factory.go:223] Registration of the systemd container factory successfully
Mar 7 01:15:42.290228 kubelet[2806]: I0307 01:15:42.285499 2806 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 7 01:15:42.290228 kubelet[2806]: E0307 01:15:42.286914 2806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.242:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-242?timeout=10s\": dial tcp 172.31.20.242:6443: connect: connection refused" interval="200ms"
Mar 7 01:15:42.290228 kubelet[2806]: I0307 01:15:42.287395 2806 factory.go:223] Registration of the containerd container factory successfully
Mar 7 01:15:42.302034 kubelet[2806]: I0307 01:15:42.300924 2806 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 7 01:15:42.304042 kubelet[2806]: I0307 01:15:42.303953 2806 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 7 01:15:42.304042 kubelet[2806]: I0307 01:15:42.304005 2806 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 7 01:15:42.304537 kubelet[2806]: I0307 01:15:42.304195 2806 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 7 01:15:42.304537 kubelet[2806]: I0307 01:15:42.304211 2806 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 7 01:15:42.304537 kubelet[2806]: E0307 01:15:42.304264 2806 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 7 01:15:42.315936 kubelet[2806]: E0307 01:15:42.315901 2806 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.20.242:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.20.242:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 7 01:15:42.326029 kubelet[2806]: I0307 01:15:42.325861 2806 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 7 01:15:42.326029 kubelet[2806]: I0307 01:15:42.325876 2806 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 7 01:15:42.326029 kubelet[2806]: I0307 01:15:42.325892 2806 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 01:15:42.330924 kubelet[2806]: I0307 01:15:42.330861 2806 policy_none.go:49] "None policy: Start"
Mar 7 01:15:42.330924 kubelet[2806]: I0307 01:15:42.330923 2806 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 7 01:15:42.331129 kubelet[2806]: I0307 01:15:42.330953 2806 state_mem.go:35] "Initializing new in-memory state store"
Mar 7 01:15:42.339876 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 7 01:15:42.357075 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 7 01:15:42.361625 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 7 01:15:42.377699 kubelet[2806]: E0307 01:15:42.377116 2806 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 7 01:15:42.377699 kubelet[2806]: I0307 01:15:42.377444 2806 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 7 01:15:42.377699 kubelet[2806]: I0307 01:15:42.377461 2806 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 7 01:15:42.378026 kubelet[2806]: I0307 01:15:42.377766 2806 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 7 01:15:42.381641 kubelet[2806]: E0307 01:15:42.380420 2806 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 7 01:15:42.381641 kubelet[2806]: E0307 01:15:42.380472 2806 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-20-242\" not found"
Mar 7 01:15:42.420546 systemd[1]: Created slice kubepods-burstable-pod58c09cda5ffb0e74034548c7f933b43f.slice - libcontainer container kubepods-burstable-pod58c09cda5ffb0e74034548c7f933b43f.slice.
Mar 7 01:15:42.429081 kubelet[2806]: E0307 01:15:42.428838 2806 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-242\" not found" node="ip-172-31-20-242"
Mar 7 01:15:42.432803 systemd[1]: Created slice kubepods-burstable-pod0a34aebf4d95aba7cf339ab4d17bde81.slice - libcontainer container kubepods-burstable-pod0a34aebf4d95aba7cf339ab4d17bde81.slice.
Mar 7 01:15:42.435385 kubelet[2806]: E0307 01:15:42.435343 2806 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-242\" not found" node="ip-172-31-20-242"
Mar 7 01:15:42.444426 systemd[1]: Created slice kubepods-burstable-pod191533c469bec25ef3012c578e509700.slice - libcontainer container kubepods-burstable-pod191533c469bec25ef3012c578e509700.slice.
Mar 7 01:15:42.447618 kubelet[2806]: E0307 01:15:42.447581 2806 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-242\" not found" node="ip-172-31-20-242"
Mar 7 01:15:42.479350 kubelet[2806]: I0307 01:15:42.479319 2806 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-242"
Mar 7 01:15:42.479781 kubelet[2806]: E0307 01:15:42.479748 2806 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.242:6443/api/v1/nodes\": dial tcp 172.31.20.242:6443: connect: connection refused" node="ip-172-31-20-242"
Mar 7 01:15:42.487529 kubelet[2806]: E0307 01:15:42.487472 2806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.242:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-242?timeout=10s\": dial tcp 172.31.20.242:6443: connect: connection refused" interval="400ms"
Mar 7 01:15:42.585997 kubelet[2806]: I0307 01:15:42.585918 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a34aebf4d95aba7cf339ab4d17bde81-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-242\" (UID: \"0a34aebf4d95aba7cf339ab4d17bde81\") " pod="kube-system/kube-controller-manager-ip-172-31-20-242"
Mar 7 01:15:42.585997 kubelet[2806]: I0307 01:15:42.585997 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/191533c469bec25ef3012c578e509700-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-242\" (UID: \"191533c469bec25ef3012c578e509700\") " pod="kube-system/kube-scheduler-ip-172-31-20-242"
Mar 7 01:15:42.586229 kubelet[2806]: I0307 01:15:42.586037 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/58c09cda5ffb0e74034548c7f933b43f-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-242\" (UID: \"58c09cda5ffb0e74034548c7f933b43f\") " pod="kube-system/kube-apiserver-ip-172-31-20-242"
Mar 7 01:15:42.586229 kubelet[2806]: I0307 01:15:42.586059 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0a34aebf4d95aba7cf339ab4d17bde81-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-242\" (UID: \"0a34aebf4d95aba7cf339ab4d17bde81\") " pod="kube-system/kube-controller-manager-ip-172-31-20-242"
Mar 7 01:15:42.586229 kubelet[2806]: I0307 01:15:42.586082 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0a34aebf4d95aba7cf339ab4d17bde81-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-242\" (UID: \"0a34aebf4d95aba7cf339ab4d17bde81\") " pod="kube-system/kube-controller-manager-ip-172-31-20-242"
Mar 7 01:15:42.586229 kubelet[2806]: I0307 01:15:42.586104 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/58c09cda5ffb0e74034548c7f933b43f-ca-certs\") pod \"kube-apiserver-ip-172-31-20-242\" (UID: \"58c09cda5ffb0e74034548c7f933b43f\") " pod="kube-system/kube-apiserver-ip-172-31-20-242"
Mar 7 01:15:42.586229 kubelet[2806]: I0307 01:15:42.586128 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/58c09cda5ffb0e74034548c7f933b43f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-242\" (UID: \"58c09cda5ffb0e74034548c7f933b43f\") " pod="kube-system/kube-apiserver-ip-172-31-20-242"
Mar 7 01:15:42.586403 kubelet[2806]: I0307 01:15:42.586150 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0a34aebf4d95aba7cf339ab4d17bde81-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-242\" (UID: \"0a34aebf4d95aba7cf339ab4d17bde81\") " pod="kube-system/kube-controller-manager-ip-172-31-20-242"
Mar 7 01:15:42.586403 kubelet[2806]: I0307 01:15:42.586194 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0a34aebf4d95aba7cf339ab4d17bde81-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-242\" (UID: \"0a34aebf4d95aba7cf339ab4d17bde81\") " pod="kube-system/kube-controller-manager-ip-172-31-20-242"
Mar 7 01:15:42.682002 kubelet[2806]: I0307 01:15:42.681842 2806 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-242"
Mar 7 01:15:42.682334 kubelet[2806]: E0307 01:15:42.682288 2806 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.242:6443/api/v1/nodes\": dial tcp 172.31.20.242:6443: connect: connection refused" node="ip-172-31-20-242"
Mar 7 01:15:42.732066 containerd[1963]: time="2026-03-07T01:15:42.732002925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-242,Uid:58c09cda5ffb0e74034548c7f933b43f,Namespace:kube-system,Attempt:0,}"
Mar 7 01:15:42.736894 containerd[1963]: time="2026-03-07T01:15:42.736833046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-242,Uid:0a34aebf4d95aba7cf339ab4d17bde81,Namespace:kube-system,Attempt:0,}"
Mar 7 01:15:42.750056 containerd[1963]: time="2026-03-07T01:15:42.749929189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-242,Uid:191533c469bec25ef3012c578e509700,Namespace:kube-system,Attempt:0,}"
Mar 7 01:15:42.888562 kubelet[2806]: E0307 01:15:42.888495 2806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.242:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-242?timeout=10s\": dial tcp 172.31.20.242:6443: connect: connection refused" interval="800ms"
Mar 7 01:15:43.085132 kubelet[2806]: I0307 01:15:43.085028 2806 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-242"
Mar 7 01:15:43.085559 kubelet[2806]: E0307 01:15:43.085522 2806 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.242:6443/api/v1/nodes\": dial tcp 172.31.20.242:6443: connect: connection refused" node="ip-172-31-20-242"
Mar 7 01:15:43.162204 kubelet[2806]: E0307 01:15:43.162160 2806 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.20.242:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.20.242:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 7 01:15:43.237380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3276839686.mount: Deactivated successfully.
Mar 7 01:15:43.252580 containerd[1963]: time="2026-03-07T01:15:43.251233638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:15:43.258172 containerd[1963]: time="2026-03-07T01:15:43.258109445Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 7 01:15:43.260059 containerd[1963]: time="2026-03-07T01:15:43.259993709Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:15:43.262405 containerd[1963]: time="2026-03-07T01:15:43.262355544Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:15:43.263956 containerd[1963]: time="2026-03-07T01:15:43.263909527Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:15:43.266152 containerd[1963]: time="2026-03-07T01:15:43.266071584Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 01:15:43.268338 containerd[1963]: time="2026-03-07T01:15:43.268288165Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 01:15:43.270395 containerd[1963]: time="2026-03-07T01:15:43.270319034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:15:43.271416 
containerd[1963]: time="2026-03-07T01:15:43.271375882Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 539.238469ms" Mar 7 01:15:43.275027 containerd[1963]: time="2026-03-07T01:15:43.274270559Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 524.216736ms" Mar 7 01:15:43.279029 containerd[1963]: time="2026-03-07T01:15:43.278674173Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 541.729768ms" Mar 7 01:15:43.474407 containerd[1963]: time="2026-03-07T01:15:43.474285222Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:15:43.474407 containerd[1963]: time="2026-03-07T01:15:43.474367211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:15:43.477101 containerd[1963]: time="2026-03-07T01:15:43.474391229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:15:43.477101 containerd[1963]: time="2026-03-07T01:15:43.474496132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:15:43.484779 containerd[1963]: time="2026-03-07T01:15:43.484669023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:15:43.484779 containerd[1963]: time="2026-03-07T01:15:43.484748471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:15:43.486208 containerd[1963]: time="2026-03-07T01:15:43.484959770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:15:43.490166 containerd[1963]: time="2026-03-07T01:15:43.488333214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:15:43.504541 containerd[1963]: time="2026-03-07T01:15:43.504069621Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:15:43.504541 containerd[1963]: time="2026-03-07T01:15:43.504131963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:15:43.504541 containerd[1963]: time="2026-03-07T01:15:43.504146693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:15:43.504541 containerd[1963]: time="2026-03-07T01:15:43.504247364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:15:43.532803 systemd[1]: Started cri-containerd-873da745a2e803b9c61d1333f3aa2e40a89bf8ebff7fbd9806163d0ade3ee2c9.scope - libcontainer container 873da745a2e803b9c61d1333f3aa2e40a89bf8ebff7fbd9806163d0ade3ee2c9. 
Mar 7 01:15:43.543180 systemd[1]: Started cri-containerd-83fff69928d727636ad62bd484990aaa6cb5e2bde7f46bd64ba893f2ea838ded.scope - libcontainer container 83fff69928d727636ad62bd484990aaa6cb5e2bde7f46bd64ba893f2ea838ded. Mar 7 01:15:43.557921 systemd[1]: Started cri-containerd-7d8ca68c1ca90ec03d7f16c286dbe1681a458bf3f10cbd2e4fa1a4e2a8de8994.scope - libcontainer container 7d8ca68c1ca90ec03d7f16c286dbe1681a458bf3f10cbd2e4fa1a4e2a8de8994. Mar 7 01:15:43.630998 containerd[1963]: time="2026-03-07T01:15:43.629579261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-242,Uid:58c09cda5ffb0e74034548c7f933b43f,Namespace:kube-system,Attempt:0,} returns sandbox id \"83fff69928d727636ad62bd484990aaa6cb5e2bde7f46bd64ba893f2ea838ded\"" Mar 7 01:15:43.652840 containerd[1963]: time="2026-03-07T01:15:43.652556921Z" level=info msg="CreateContainer within sandbox \"83fff69928d727636ad62bd484990aaa6cb5e2bde7f46bd64ba893f2ea838ded\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 7 01:15:43.679499 containerd[1963]: time="2026-03-07T01:15:43.679442349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-242,Uid:0a34aebf4d95aba7cf339ab4d17bde81,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d8ca68c1ca90ec03d7f16c286dbe1681a458bf3f10cbd2e4fa1a4e2a8de8994\"" Mar 7 01:15:43.680828 containerd[1963]: time="2026-03-07T01:15:43.680787821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-242,Uid:191533c469bec25ef3012c578e509700,Namespace:kube-system,Attempt:0,} returns sandbox id \"873da745a2e803b9c61d1333f3aa2e40a89bf8ebff7fbd9806163d0ade3ee2c9\"" Mar 7 01:15:43.688838 containerd[1963]: time="2026-03-07T01:15:43.688647014Z" level=info msg="CreateContainer within sandbox \"7d8ca68c1ca90ec03d7f16c286dbe1681a458bf3f10cbd2e4fa1a4e2a8de8994\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 7 01:15:43.689995 kubelet[2806]: 
E0307 01:15:43.689675 2806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.242:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-242?timeout=10s\": dial tcp 172.31.20.242:6443: connect: connection refused" interval="1.6s" Mar 7 01:15:43.695869 containerd[1963]: time="2026-03-07T01:15:43.695824477Z" level=info msg="CreateContainer within sandbox \"873da745a2e803b9c61d1333f3aa2e40a89bf8ebff7fbd9806163d0ade3ee2c9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 7 01:15:43.713526 containerd[1963]: time="2026-03-07T01:15:43.713476571Z" level=info msg="CreateContainer within sandbox \"83fff69928d727636ad62bd484990aaa6cb5e2bde7f46bd64ba893f2ea838ded\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"386deb4cb9be7c9d35f4383c09cbb70c44642f69932d93ab57d84821db412972\"" Mar 7 01:15:43.715750 containerd[1963]: time="2026-03-07T01:15:43.714355697Z" level=info msg="StartContainer for \"386deb4cb9be7c9d35f4383c09cbb70c44642f69932d93ab57d84821db412972\"" Mar 7 01:15:43.726433 kubelet[2806]: E0307 01:15:43.725222 2806 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.20.242:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-242&limit=500&resourceVersion=0\": dial tcp 172.31.20.242:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:15:43.737655 containerd[1963]: time="2026-03-07T01:15:43.737605041Z" level=info msg="CreateContainer within sandbox \"7d8ca68c1ca90ec03d7f16c286dbe1681a458bf3f10cbd2e4fa1a4e2a8de8994\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2cf29b3e6a0307756d3969974fdaab6133870904ffa3243c758ffbe6f1d417f0\"" Mar 7 01:15:43.739021 containerd[1963]: time="2026-03-07T01:15:43.738958144Z" level=info msg="StartContainer for 
\"2cf29b3e6a0307756d3969974fdaab6133870904ffa3243c758ffbe6f1d417f0\"" Mar 7 01:15:43.748962 containerd[1963]: time="2026-03-07T01:15:43.748521856Z" level=info msg="CreateContainer within sandbox \"873da745a2e803b9c61d1333f3aa2e40a89bf8ebff7fbd9806163d0ade3ee2c9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e08a7a7c87b8a0e0a0afe7293fe5151865eadcf17e4adfccf3cf09765c1f8cac\"" Mar 7 01:15:43.750823 containerd[1963]: time="2026-03-07T01:15:43.749406519Z" level=info msg="StartContainer for \"e08a7a7c87b8a0e0a0afe7293fe5151865eadcf17e4adfccf3cf09765c1f8cac\"" Mar 7 01:15:43.759229 systemd[1]: Started cri-containerd-386deb4cb9be7c9d35f4383c09cbb70c44642f69932d93ab57d84821db412972.scope - libcontainer container 386deb4cb9be7c9d35f4383c09cbb70c44642f69932d93ab57d84821db412972. Mar 7 01:15:43.806247 systemd[1]: Started cri-containerd-2cf29b3e6a0307756d3969974fdaab6133870904ffa3243c758ffbe6f1d417f0.scope - libcontainer container 2cf29b3e6a0307756d3969974fdaab6133870904ffa3243c758ffbe6f1d417f0. Mar 7 01:15:43.817193 kubelet[2806]: E0307 01:15:43.817163 2806 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.20.242:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.20.242:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 01:15:43.819212 systemd[1]: Started cri-containerd-e08a7a7c87b8a0e0a0afe7293fe5151865eadcf17e4adfccf3cf09765c1f8cac.scope - libcontainer container e08a7a7c87b8a0e0a0afe7293fe5151865eadcf17e4adfccf3cf09765c1f8cac. 
Mar 7 01:15:43.831310 kubelet[2806]: E0307 01:15:43.831056 2806 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.20.242:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.20.242:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:15:43.879030 containerd[1963]: time="2026-03-07T01:15:43.878832473Z" level=info msg="StartContainer for \"386deb4cb9be7c9d35f4383c09cbb70c44642f69932d93ab57d84821db412972\" returns successfully" Mar 7 01:15:43.892133 kubelet[2806]: I0307 01:15:43.890044 2806 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-242" Mar 7 01:15:43.892846 kubelet[2806]: E0307 01:15:43.892784 2806 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.242:6443/api/v1/nodes\": dial tcp 172.31.20.242:6443: connect: connection refused" node="ip-172-31-20-242" Mar 7 01:15:43.907110 containerd[1963]: time="2026-03-07T01:15:43.907053177Z" level=info msg="StartContainer for \"e08a7a7c87b8a0e0a0afe7293fe5151865eadcf17e4adfccf3cf09765c1f8cac\" returns successfully" Mar 7 01:15:43.919452 containerd[1963]: time="2026-03-07T01:15:43.918734806Z" level=info msg="StartContainer for \"2cf29b3e6a0307756d3969974fdaab6133870904ffa3243c758ffbe6f1d417f0\" returns successfully" Mar 7 01:15:44.245564 kubelet[2806]: E0307 01:15:44.245509 2806 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.20.242:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.20.242:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:15:44.336009 kubelet[2806]: E0307 01:15:44.335442 2806 kubelet.go:3305] "No need to create a mirror pod, since failed to get 
node info from the cluster" err="node \"ip-172-31-20-242\" not found" node="ip-172-31-20-242" Mar 7 01:15:44.336009 kubelet[2806]: E0307 01:15:44.335801 2806 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-242\" not found" node="ip-172-31-20-242" Mar 7 01:15:44.340243 kubelet[2806]: E0307 01:15:44.340219 2806 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-242\" not found" node="ip-172-31-20-242" Mar 7 01:15:45.018080 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Mar 7 01:15:45.343949 kubelet[2806]: E0307 01:15:45.342958 2806 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-242\" not found" node="ip-172-31-20-242" Mar 7 01:15:45.343949 kubelet[2806]: E0307 01:15:45.343437 2806 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-242\" not found" node="ip-172-31-20-242" Mar 7 01:15:45.495859 kubelet[2806]: I0307 01:15:45.495036 2806 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-242" Mar 7 01:15:47.591628 kubelet[2806]: E0307 01:15:47.591574 2806 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-20-242\" not found" node="ip-172-31-20-242" Mar 7 01:15:47.610361 kubelet[2806]: I0307 01:15:47.610126 2806 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-20-242" Mar 7 01:15:47.610361 kubelet[2806]: E0307 01:15:47.610199 2806 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-20-242\": node \"ip-172-31-20-242\" not found" Mar 7 01:15:47.652708 kubelet[2806]: E0307 01:15:47.652591 2806 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" 
event="&Event{ObjectMeta:{ip-172-31-20-242.189a6a2c66648e0b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-242,UID:ip-172-31-20-242,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-242,},FirstTimestamp:2026-03-07 01:15:42.252682763 +0000 UTC m=+0.616553719,LastTimestamp:2026-03-07 01:15:42.252682763 +0000 UTC m=+0.616553719,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-242,}" Mar 7 01:15:47.687081 kubelet[2806]: I0307 01:15:47.687038 2806 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-242" Mar 7 01:15:47.708368 kubelet[2806]: E0307 01:15:47.708152 2806 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-20-242\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-20-242" Mar 7 01:15:47.708368 kubelet[2806]: I0307 01:15:47.708186 2806 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-20-242" Mar 7 01:15:47.713774 kubelet[2806]: E0307 01:15:47.713543 2806 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-20-242\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-20-242" Mar 7 01:15:47.713774 kubelet[2806]: I0307 01:15:47.713583 2806 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-242" Mar 7 01:15:47.718209 kubelet[2806]: E0307 01:15:47.718159 2806 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-20-242\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-scheduler-ip-172-31-20-242" Mar 7 01:15:48.245611 kubelet[2806]: I0307 01:15:48.245557 2806 apiserver.go:52] "Watching apiserver" Mar 7 01:15:48.284650 kubelet[2806]: I0307 01:15:48.284371 2806 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 7 01:15:49.597696 systemd[1]: Reloading requested from client PID 3091 ('systemctl') (unit session-7.scope)... Mar 7 01:15:49.597721 systemd[1]: Reloading... Mar 7 01:15:49.682506 kubelet[2806]: I0307 01:15:49.682458 2806 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-242" Mar 7 01:15:49.727097 zram_generator::config[3131]: No configuration found. Mar 7 01:15:49.864772 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:15:49.968532 systemd[1]: Reloading finished in 370 ms. Mar 7 01:15:50.018282 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:15:50.033799 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 01:15:50.034118 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:15:50.034191 systemd[1]: kubelet.service: Consumed 1.090s CPU time, 131.3M memory peak, 0B memory swap peak. Mar 7 01:15:50.042408 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:15:50.278121 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:15:50.290579 (kubelet)[3191]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 01:15:50.357723 kubelet[3191]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:15:50.357723 kubelet[3191]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 7 01:15:50.357723 kubelet[3191]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:15:50.358730 kubelet[3191]: I0307 01:15:50.357856 3191 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 7 01:15:50.370882 kubelet[3191]: I0307 01:15:50.370841 3191 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 7 01:15:50.371144 kubelet[3191]: I0307 01:15:50.371108 3191 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 01:15:50.372221 kubelet[3191]: I0307 01:15:50.371873 3191 server.go:956] "Client rotation is on, will bootstrap in background" Mar 7 01:15:50.374109 kubelet[3191]: I0307 01:15:50.374072 3191 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 7 01:15:50.377963 kubelet[3191]: I0307 01:15:50.376556 3191 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 01:15:50.386289 kubelet[3191]: E0307 01:15:50.386242 3191 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 01:15:50.386487 kubelet[3191]: I0307 01:15:50.386476 3191 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Mar 7 01:15:50.393318 kubelet[3191]: I0307 01:15:50.392274 3191 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 7 01:15:50.393318 kubelet[3191]: I0307 01:15:50.392538 3191 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 01:15:50.393318 kubelet[3191]: I0307 01:15:50.392573 3191 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-242","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topology
ManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 01:15:50.393318 kubelet[3191]: I0307 01:15:50.392892 3191 topology_manager.go:138] "Creating topology manager with none policy" Mar 7 01:15:50.393586 kubelet[3191]: I0307 01:15:50.392905 3191 container_manager_linux.go:303] "Creating device plugin manager" Mar 7 01:15:50.394924 kubelet[3191]: I0307 01:15:50.394896 3191 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:15:50.397381 kubelet[3191]: I0307 01:15:50.397341 3191 kubelet.go:480] "Attempting to sync node with API server" Mar 7 01:15:50.397381 kubelet[3191]: I0307 01:15:50.397380 3191 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 01:15:50.397529 kubelet[3191]: I0307 01:15:50.397413 3191 kubelet.go:386] "Adding apiserver pod source" Mar 7 01:15:50.397529 kubelet[3191]: I0307 01:15:50.397432 3191 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 01:15:50.402190 kubelet[3191]: I0307 01:15:50.402166 3191 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 01:15:50.403359 kubelet[3191]: I0307 01:15:50.403048 3191 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 01:15:50.408733 kubelet[3191]: I0307 01:15:50.408652 3191 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 7 01:15:50.408733 kubelet[3191]: I0307 01:15:50.408699 3191 server.go:1289] "Started kubelet" Mar 7 01:15:50.419131 kubelet[3191]: I0307 01:15:50.419106 3191 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 7 01:15:50.423799 kubelet[3191]: I0307 01:15:50.422846 3191 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 01:15:50.424616 kubelet[3191]: I0307 01:15:50.424598 3191 server.go:317] "Adding debug handlers to kubelet server" Mar 7 01:15:50.433586 kubelet[3191]: I0307 01:15:50.433512 3191 
ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 01:15:50.434163 kubelet[3191]: I0307 01:15:50.434141 3191 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 01:15:50.435037 kubelet[3191]: I0307 01:15:50.434528 3191 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 01:15:50.437227 kubelet[3191]: I0307 01:15:50.437207 3191 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 7 01:15:50.437489 kubelet[3191]: E0307 01:15:50.437470 3191 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-242\" not found" Mar 7 01:15:50.438220 kubelet[3191]: I0307 01:15:50.438187 3191 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 7 01:15:50.439489 kubelet[3191]: I0307 01:15:50.438929 3191 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 7 01:15:50.439718 kubelet[3191]: I0307 01:15:50.439705 3191 reconciler.go:26] "Reconciler: start to sync state" Mar 7 01:15:50.447176 kubelet[3191]: I0307 01:15:50.447148 3191 factory.go:223] Registration of the systemd container factory successfully Mar 7 01:15:50.447468 kubelet[3191]: I0307 01:15:50.447444 3191 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 01:15:50.451251 kubelet[3191]: E0307 01:15:50.451217 3191 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 01:15:50.454816 kubelet[3191]: I0307 01:15:50.454792 3191 factory.go:223] Registration of the containerd container factory successfully Mar 7 01:15:50.476577 kubelet[3191]: I0307 01:15:50.476545 3191 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 7 01:15:50.477003 kubelet[3191]: I0307 01:15:50.476742 3191 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 7 01:15:50.477003 kubelet[3191]: I0307 01:15:50.476771 3191 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 7 01:15:50.477003 kubelet[3191]: I0307 01:15:50.476781 3191 kubelet.go:2436] "Starting kubelet main sync loop" Mar 7 01:15:50.477003 kubelet[3191]: E0307 01:15:50.476846 3191 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 01:15:50.515405 kubelet[3191]: I0307 01:15:50.515385 3191 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 7 01:15:50.515637 kubelet[3191]: I0307 01:15:50.515626 3191 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 7 01:15:50.516022 kubelet[3191]: I0307 01:15:50.515717 3191 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:15:50.516022 kubelet[3191]: I0307 01:15:50.515842 3191 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 7 01:15:50.516022 kubelet[3191]: I0307 01:15:50.515851 3191 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 7 01:15:50.516022 kubelet[3191]: I0307 01:15:50.515869 3191 policy_none.go:49] "None policy: Start" Mar 7 01:15:50.516022 kubelet[3191]: I0307 01:15:50.515880 3191 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 7 01:15:50.516022 kubelet[3191]: I0307 01:15:50.515889 3191 state_mem.go:35] "Initializing new in-memory state store" Mar 
Mar 7 01:15:50.516022 kubelet[3191]: I0307 01:15:50.515970 3191 state_mem.go:75] "Updated machine memory state"
Mar 7 01:15:50.520809 kubelet[3191]: E0307 01:15:50.520777 3191 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 7 01:15:50.521050 kubelet[3191]: I0307 01:15:50.521030 3191 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 7 01:15:50.521245 kubelet[3191]: I0307 01:15:50.521053 3191 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 7 01:15:50.522457 kubelet[3191]: I0307 01:15:50.521941 3191 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 7 01:15:50.525602 kubelet[3191]: E0307 01:15:50.525572 3191 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 7 01:15:50.579825 kubelet[3191]: I0307 01:15:50.578819 3191 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-242"
Mar 7 01:15:50.582044 kubelet[3191]: I0307 01:15:50.580081 3191 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-20-242"
Mar 7 01:15:50.583793 kubelet[3191]: I0307 01:15:50.582682 3191 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-242"
Mar 7 01:15:50.597652 kubelet[3191]: E0307 01:15:50.597433 3191 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-20-242\" already exists" pod="kube-system/kube-apiserver-ip-172-31-20-242"
Mar 7 01:15:50.626615 kubelet[3191]: I0307 01:15:50.626149 3191 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-242"
Mar 7 01:15:50.638098 kubelet[3191]: I0307 01:15:50.638055 3191 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-20-242"
Mar 7 01:15:50.638225 kubelet[3191]: I0307 01:15:50.638152 3191 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-20-242"
Mar 7 01:15:50.643007 kubelet[3191]: I0307 01:15:50.642934 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/58c09cda5ffb0e74034548c7f933b43f-ca-certs\") pod \"kube-apiserver-ip-172-31-20-242\" (UID: \"58c09cda5ffb0e74034548c7f933b43f\") " pod="kube-system/kube-apiserver-ip-172-31-20-242"
Mar 7 01:15:50.643007 kubelet[3191]: I0307 01:15:50.642989 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/58c09cda5ffb0e74034548c7f933b43f-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-242\" (UID: \"58c09cda5ffb0e74034548c7f933b43f\") " pod="kube-system/kube-apiserver-ip-172-31-20-242"
Mar 7 01:15:50.644411 kubelet[3191]: I0307 01:15:50.643014 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0a34aebf4d95aba7cf339ab4d17bde81-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-242\" (UID: \"0a34aebf4d95aba7cf339ab4d17bde81\") " pod="kube-system/kube-controller-manager-ip-172-31-20-242"
Mar 7 01:15:50.644411 kubelet[3191]: I0307 01:15:50.643038 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a34aebf4d95aba7cf339ab4d17bde81-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-242\" (UID: \"0a34aebf4d95aba7cf339ab4d17bde81\") " pod="kube-system/kube-controller-manager-ip-172-31-20-242"
Mar 7 01:15:50.644411 kubelet[3191]: I0307 01:15:50.643064 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0a34aebf4d95aba7cf339ab4d17bde81-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-242\" (UID: \"0a34aebf4d95aba7cf339ab4d17bde81\") " pod="kube-system/kube-controller-manager-ip-172-31-20-242"
Mar 7 01:15:50.644411 kubelet[3191]: I0307 01:15:50.643092 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/58c09cda5ffb0e74034548c7f933b43f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-242\" (UID: \"58c09cda5ffb0e74034548c7f933b43f\") " pod="kube-system/kube-apiserver-ip-172-31-20-242"
Mar 7 01:15:50.644411 kubelet[3191]: I0307 01:15:50.643117 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0a34aebf4d95aba7cf339ab4d17bde81-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-242\" (UID: \"0a34aebf4d95aba7cf339ab4d17bde81\") " pod="kube-system/kube-controller-manager-ip-172-31-20-242"
Mar 7 01:15:50.645677 kubelet[3191]: I0307 01:15:50.643139 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0a34aebf4d95aba7cf339ab4d17bde81-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-242\" (UID: \"0a34aebf4d95aba7cf339ab4d17bde81\") " pod="kube-system/kube-controller-manager-ip-172-31-20-242"
Mar 7 01:15:50.645677 kubelet[3191]: I0307 01:15:50.643163 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/191533c469bec25ef3012c578e509700-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-242\" (UID: \"191533c469bec25ef3012c578e509700\") " pod="kube-system/kube-scheduler-ip-172-31-20-242"
Mar 7 01:15:51.406725 kubelet[3191]: I0307 01:15:51.406675 3191 apiserver.go:52] "Watching apiserver"
Mar 7 01:15:51.440242 kubelet[3191]: I0307 01:15:51.440190 3191 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Mar 7 01:15:51.497276 kubelet[3191]: I0307 01:15:51.496952 3191 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-242"
Mar 7 01:15:51.499782 kubelet[3191]: I0307 01:15:51.499374 3191 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-20-242"
Mar 7 01:15:51.499782 kubelet[3191]: I0307 01:15:51.499750 3191 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-242"
Mar 7 01:15:51.513025 kubelet[3191]: E0307 01:15:51.508970 3191 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-20-242\" already exists" pod="kube-system/kube-scheduler-ip-172-31-20-242"
Mar 7 01:15:51.514387 kubelet[3191]: E0307 01:15:51.514353 3191 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-20-242\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-20-242"
Mar 7 01:15:51.515819 kubelet[3191]: E0307 01:15:51.515780 3191 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-20-242\" already exists" pod="kube-system/kube-apiserver-ip-172-31-20-242"
Mar 7 01:15:51.549726 kubelet[3191]: I0307 01:15:51.549546 3191 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-20-242" podStartSLOduration=1.54952845 podStartE2EDuration="1.54952845s" podCreationTimestamp="2026-03-07 01:15:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:15:51.545869182 +0000 UTC m=+1.246935529" watchObservedRunningTime="2026-03-07 01:15:51.54952845 +0000 UTC m=+1.250594799"
Mar 7 01:15:51.572862 kubelet[3191]: I0307 01:15:51.572541 3191 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-20-242" podStartSLOduration=1.572518789 podStartE2EDuration="1.572518789s" podCreationTimestamp="2026-03-07 01:15:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:15:51.55988827 +0000 UTC m=+1.260954618" watchObservedRunningTime="2026-03-07 01:15:51.572518789 +0000 UTC m=+1.273585141"
Mar 7 01:15:51.590420 kubelet[3191]: I0307 01:15:51.590346 3191 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-20-242" podStartSLOduration=2.5903230600000002 podStartE2EDuration="2.59032306s" podCreationTimestamp="2026-03-07 01:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:15:51.573347481 +0000 UTC m=+1.274413820" watchObservedRunningTime="2026-03-07 01:15:51.59032306 +0000 UTC m=+1.291389402"
Mar 7 01:15:55.847581 kubelet[3191]: I0307 01:15:55.847404 3191 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 7 01:15:55.854005 containerd[1963]: time="2026-03-07T01:15:55.853181752Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 7 01:15:55.855286 kubelet[3191]: I0307 01:15:55.854875 3191 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 7 01:15:57.029654 systemd[1]: Created slice kubepods-besteffort-pod704d530b_8844_4cb1_9cb3_0ae1c08724ac.slice - libcontainer container kubepods-besteffort-pod704d530b_8844_4cb1_9cb3_0ae1c08724ac.slice.
Mar 7 01:15:57.088102 kubelet[3191]: I0307 01:15:57.088001 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/704d530b-8844-4cb1-9cb3-0ae1c08724ac-kube-proxy\") pod \"kube-proxy-k6lv7\" (UID: \"704d530b-8844-4cb1-9cb3-0ae1c08724ac\") " pod="kube-system/kube-proxy-k6lv7"
Mar 7 01:15:57.088102 kubelet[3191]: I0307 01:15:57.088065 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/704d530b-8844-4cb1-9cb3-0ae1c08724ac-lib-modules\") pod \"kube-proxy-k6lv7\" (UID: \"704d530b-8844-4cb1-9cb3-0ae1c08724ac\") " pod="kube-system/kube-proxy-k6lv7"
Mar 7 01:15:57.088102 kubelet[3191]: I0307 01:15:57.088093 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrwkj\" (UniqueName: \"kubernetes.io/projected/704d530b-8844-4cb1-9cb3-0ae1c08724ac-kube-api-access-wrwkj\") pod \"kube-proxy-k6lv7\" (UID: \"704d530b-8844-4cb1-9cb3-0ae1c08724ac\") " pod="kube-system/kube-proxy-k6lv7"
Mar 7 01:15:57.088597 kubelet[3191]: I0307 01:15:57.088124 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/704d530b-8844-4cb1-9cb3-0ae1c08724ac-xtables-lock\") pod \"kube-proxy-k6lv7\" (UID: \"704d530b-8844-4cb1-9cb3-0ae1c08724ac\") " pod="kube-system/kube-proxy-k6lv7"
Mar 7 01:15:57.182559 systemd[1]: Created slice kubepods-besteffort-pod03e453cc_32a7_48d0_87b3_cad4d2a0dd5e.slice - libcontainer container kubepods-besteffort-pod03e453cc_32a7_48d0_87b3_cad4d2a0dd5e.slice.
Mar 7 01:15:57.189272 kubelet[3191]: I0307 01:15:57.189235 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/03e453cc-32a7-48d0-87b3-cad4d2a0dd5e-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-5gtrt\" (UID: \"03e453cc-32a7-48d0-87b3-cad4d2a0dd5e\") " pod="tigera-operator/tigera-operator-6bf85f8dd-5gtrt"
Mar 7 01:15:57.189394 kubelet[3191]: I0307 01:15:57.189326 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs9s9\" (UniqueName: \"kubernetes.io/projected/03e453cc-32a7-48d0-87b3-cad4d2a0dd5e-kube-api-access-rs9s9\") pod \"tigera-operator-6bf85f8dd-5gtrt\" (UID: \"03e453cc-32a7-48d0-87b3-cad4d2a0dd5e\") " pod="tigera-operator/tigera-operator-6bf85f8dd-5gtrt"
Mar 7 01:15:57.339341 containerd[1963]: time="2026-03-07T01:15:57.339214445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k6lv7,Uid:704d530b-8844-4cb1-9cb3-0ae1c08724ac,Namespace:kube-system,Attempt:0,}"
Mar 7 01:15:57.389916 containerd[1963]: time="2026-03-07T01:15:57.389780150Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:15:57.390100 containerd[1963]: time="2026-03-07T01:15:57.389921173Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:15:57.390100 containerd[1963]: time="2026-03-07T01:15:57.389953633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:15:57.390203 containerd[1963]: time="2026-03-07T01:15:57.390157382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:15:57.422217 systemd[1]: Started cri-containerd-d14090aa73dd1e7070d57658d09ebb5c6ff8c2a620a41ede1daefc3b53385ca2.scope - libcontainer container d14090aa73dd1e7070d57658d09ebb5c6ff8c2a620a41ede1daefc3b53385ca2.
Mar 7 01:15:57.452461 containerd[1963]: time="2026-03-07T01:15:57.452412560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k6lv7,Uid:704d530b-8844-4cb1-9cb3-0ae1c08724ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"d14090aa73dd1e7070d57658d09ebb5c6ff8c2a620a41ede1daefc3b53385ca2\""
Mar 7 01:15:57.461552 containerd[1963]: time="2026-03-07T01:15:57.461508999Z" level=info msg="CreateContainer within sandbox \"d14090aa73dd1e7070d57658d09ebb5c6ff8c2a620a41ede1daefc3b53385ca2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 7 01:15:57.488178 containerd[1963]: time="2026-03-07T01:15:57.488128269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-5gtrt,Uid:03e453cc-32a7-48d0-87b3-cad4d2a0dd5e,Namespace:tigera-operator,Attempt:0,}"
Mar 7 01:15:57.490168 containerd[1963]: time="2026-03-07T01:15:57.490122674Z" level=info msg="CreateContainer within sandbox \"d14090aa73dd1e7070d57658d09ebb5c6ff8c2a620a41ede1daefc3b53385ca2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f21c66d537afb6f21f2579be716e61fde5348c2ed6a6e31671e6befcf6a7a9f3\""
Mar 7 01:15:57.492123 containerd[1963]: time="2026-03-07T01:15:57.491007108Z" level=info msg="StartContainer for \"f21c66d537afb6f21f2579be716e61fde5348c2ed6a6e31671e6befcf6a7a9f3\""
Mar 7 01:15:57.541336 systemd[1]: Started cri-containerd-f21c66d537afb6f21f2579be716e61fde5348c2ed6a6e31671e6befcf6a7a9f3.scope - libcontainer container f21c66d537afb6f21f2579be716e61fde5348c2ed6a6e31671e6befcf6a7a9f3.
Mar 7 01:15:57.559495 containerd[1963]: time="2026-03-07T01:15:57.559136798Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:15:57.559495 containerd[1963]: time="2026-03-07T01:15:57.559219805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:15:57.559495 containerd[1963]: time="2026-03-07T01:15:57.559255996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:15:57.559495 containerd[1963]: time="2026-03-07T01:15:57.559367609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:15:57.587292 systemd[1]: Started cri-containerd-da1b56867049786715da5964fcceec68cd082835468a7a1312a2b0dd54216a5b.scope - libcontainer container da1b56867049786715da5964fcceec68cd082835468a7a1312a2b0dd54216a5b.
Mar 7 01:15:57.613836 containerd[1963]: time="2026-03-07T01:15:57.613712023Z" level=info msg="StartContainer for \"f21c66d537afb6f21f2579be716e61fde5348c2ed6a6e31671e6befcf6a7a9f3\" returns successfully"
Mar 7 01:15:57.668631 containerd[1963]: time="2026-03-07T01:15:57.668587931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-5gtrt,Uid:03e453cc-32a7-48d0-87b3-cad4d2a0dd5e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"da1b56867049786715da5964fcceec68cd082835468a7a1312a2b0dd54216a5b\""
Mar 7 01:15:57.674244 containerd[1963]: time="2026-03-07T01:15:57.674195257Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Mar 7 01:15:58.534877 kubelet[3191]: I0307 01:15:58.534731 3191 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k6lv7" podStartSLOduration=2.534690188 podStartE2EDuration="2.534690188s" podCreationTimestamp="2026-03-07 01:15:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:15:58.534662976 +0000 UTC m=+8.235729324" watchObservedRunningTime="2026-03-07 01:15:58.534690188 +0000 UTC m=+8.235756531"
Mar 7 01:15:58.967309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1673009126.mount: Deactivated successfully.
Mar 7 01:15:59.493272 update_engine[1958]: I20260307 01:15:59.493181 1958 update_attempter.cc:509] Updating boot flags...
Mar 7 01:15:59.564047 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3510)
Mar 7 01:16:03.579869 containerd[1963]: time="2026-03-07T01:16:03.579810454Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:16:03.582039 containerd[1963]: time="2026-03-07T01:16:03.581785104Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Mar 7 01:16:03.584695 containerd[1963]: time="2026-03-07T01:16:03.584244291Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:16:03.588575 containerd[1963]: time="2026-03-07T01:16:03.588445106Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:16:03.589831 containerd[1963]: time="2026-03-07T01:16:03.589784135Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 5.915540623s"
Mar 7 01:16:03.589969 containerd[1963]: time="2026-03-07T01:16:03.589835576Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Mar 7 01:16:03.596796 containerd[1963]: time="2026-03-07T01:16:03.596752590Z" level=info msg="CreateContainer within sandbox \"da1b56867049786715da5964fcceec68cd082835468a7a1312a2b0dd54216a5b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 7 01:16:03.622239 containerd[1963]: time="2026-03-07T01:16:03.622187726Z" level=info msg="CreateContainer within sandbox \"da1b56867049786715da5964fcceec68cd082835468a7a1312a2b0dd54216a5b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8bd6f70147ab6651ba944d0eb7d9513a1bb293079116560e980f1f5f16fb453e\""
Mar 7 01:16:03.624187 containerd[1963]: time="2026-03-07T01:16:03.624144529Z" level=info msg="StartContainer for \"8bd6f70147ab6651ba944d0eb7d9513a1bb293079116560e980f1f5f16fb453e\""
Mar 7 01:16:03.665622 systemd[1]: Started cri-containerd-8bd6f70147ab6651ba944d0eb7d9513a1bb293079116560e980f1f5f16fb453e.scope - libcontainer container 8bd6f70147ab6651ba944d0eb7d9513a1bb293079116560e980f1f5f16fb453e.
Mar 7 01:16:03.703192 containerd[1963]: time="2026-03-07T01:16:03.703016741Z" level=info msg="StartContainer for \"8bd6f70147ab6651ba944d0eb7d9513a1bb293079116560e980f1f5f16fb453e\" returns successfully"
Mar 7 01:16:11.173172 sudo[2296]: pam_unix(sudo:session): session closed for user root
Mar 7 01:16:11.254527 sshd[2293]: pam_unix(sshd:session): session closed for user core
Mar 7 01:16:11.265453 systemd[1]: sshd@6-172.31.20.242:22-68.220.241.50:47894.service: Deactivated successfully.
Mar 7 01:16:11.269445 systemd[1]: session-7.scope: Deactivated successfully.
Mar 7 01:16:11.269687 systemd[1]: session-7.scope: Consumed 5.280s CPU time, 143.9M memory peak, 0B memory swap peak.
Mar 7 01:16:11.277246 systemd-logind[1957]: Session 7 logged out. Waiting for processes to exit.
Mar 7 01:16:11.280575 systemd-logind[1957]: Removed session 7.
Mar 7 01:16:14.641609 kubelet[3191]: I0307 01:16:14.639405 3191 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-5gtrt" podStartSLOduration=11.719051125 podStartE2EDuration="17.639381928s" podCreationTimestamp="2026-03-07 01:15:57 +0000 UTC" firstStartedPulling="2026-03-07 01:15:57.670661853 +0000 UTC m=+7.371728180" lastFinishedPulling="2026-03-07 01:16:03.590992578 +0000 UTC m=+13.292058983" observedRunningTime="2026-03-07 01:16:04.570894569 +0000 UTC m=+14.271960918" watchObservedRunningTime="2026-03-07 01:16:14.639381928 +0000 UTC m=+24.340448277"
Mar 7 01:16:14.662440 systemd[1]: Created slice kubepods-besteffort-pod21128cc0_1755_451f_8f5f_5abe3d3a011c.slice - libcontainer container kubepods-besteffort-pod21128cc0_1755_451f_8f5f_5abe3d3a011c.slice.
Mar 7 01:16:14.746199 kubelet[3191]: I0307 01:16:14.746154 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqmcl\" (UniqueName: \"kubernetes.io/projected/21128cc0-1755-451f-8f5f-5abe3d3a011c-kube-api-access-fqmcl\") pod \"calico-typha-599d495554-x5ph2\" (UID: \"21128cc0-1755-451f-8f5f-5abe3d3a011c\") " pod="calico-system/calico-typha-599d495554-x5ph2"
Mar 7 01:16:14.746474 kubelet[3191]: I0307 01:16:14.746456 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21128cc0-1755-451f-8f5f-5abe3d3a011c-tigera-ca-bundle\") pod \"calico-typha-599d495554-x5ph2\" (UID: \"21128cc0-1755-451f-8f5f-5abe3d3a011c\") " pod="calico-system/calico-typha-599d495554-x5ph2"
Mar 7 01:16:14.746644 kubelet[3191]: I0307 01:16:14.746598 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/21128cc0-1755-451f-8f5f-5abe3d3a011c-typha-certs\") pod \"calico-typha-599d495554-x5ph2\" (UID: \"21128cc0-1755-451f-8f5f-5abe3d3a011c\") " pod="calico-system/calico-typha-599d495554-x5ph2"
Mar 7 01:16:14.843187 systemd[1]: Created slice kubepods-besteffort-poda058e96a_678d_4e20_9459_d0cdd0a606fa.slice - libcontainer container kubepods-besteffort-poda058e96a_678d_4e20_9459_d0cdd0a606fa.slice.
Mar 7 01:16:14.948992 kubelet[3191]: I0307 01:16:14.948426 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a058e96a-678d-4e20-9459-d0cdd0a606fa-cni-bin-dir\") pod \"calico-node-mnlx6\" (UID: \"a058e96a-678d-4e20-9459-d0cdd0a606fa\") " pod="calico-system/calico-node-mnlx6"
Mar 7 01:16:14.948992 kubelet[3191]: I0307 01:16:14.948521 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a058e96a-678d-4e20-9459-d0cdd0a606fa-node-certs\") pod \"calico-node-mnlx6\" (UID: \"a058e96a-678d-4e20-9459-d0cdd0a606fa\") " pod="calico-system/calico-node-mnlx6"
Mar 7 01:16:14.948992 kubelet[3191]: I0307 01:16:14.948549 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a058e96a-678d-4e20-9459-d0cdd0a606fa-policysync\") pod \"calico-node-mnlx6\" (UID: \"a058e96a-678d-4e20-9459-d0cdd0a606fa\") " pod="calico-system/calico-node-mnlx6"
Mar 7 01:16:14.948992 kubelet[3191]: I0307 01:16:14.948611 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/a058e96a-678d-4e20-9459-d0cdd0a606fa-sys-fs\") pod \"calico-node-mnlx6\" (UID: \"a058e96a-678d-4e20-9459-d0cdd0a606fa\") " pod="calico-system/calico-node-mnlx6"
Mar 7 01:16:14.950961 kubelet[3191]: I0307 01:16:14.949905 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a058e96a-678d-4e20-9459-d0cdd0a606fa-var-lib-calico\") pod \"calico-node-mnlx6\" (UID: \"a058e96a-678d-4e20-9459-d0cdd0a606fa\") " pod="calico-system/calico-node-mnlx6"
Mar 7 01:16:14.950961 kubelet[3191]: I0307 01:16:14.950004 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a058e96a-678d-4e20-9459-d0cdd0a606fa-xtables-lock\") pod \"calico-node-mnlx6\" (UID: \"a058e96a-678d-4e20-9459-d0cdd0a606fa\") " pod="calico-system/calico-node-mnlx6"
Mar 7 01:16:14.950961 kubelet[3191]: I0307 01:16:14.950044 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a058e96a-678d-4e20-9459-d0cdd0a606fa-lib-modules\") pod \"calico-node-mnlx6\" (UID: \"a058e96a-678d-4e20-9459-d0cdd0a606fa\") " pod="calico-system/calico-node-mnlx6"
Mar 7 01:16:14.950961 kubelet[3191]: I0307 01:16:14.950069 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/a058e96a-678d-4e20-9459-d0cdd0a606fa-bpffs\") pod \"calico-node-mnlx6\" (UID: \"a058e96a-678d-4e20-9459-d0cdd0a606fa\") " pod="calico-system/calico-node-mnlx6"
Mar 7 01:16:14.950961 kubelet[3191]: I0307 01:16:14.950117 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a058e96a-678d-4e20-9459-d0cdd0a606fa-flexvol-driver-host\") pod \"calico-node-mnlx6\" (UID: \"a058e96a-678d-4e20-9459-d0cdd0a606fa\") " pod="calico-system/calico-node-mnlx6"
Mar 7 01:16:14.951448 kubelet[3191]: I0307 01:16:14.950139 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/a058e96a-678d-4e20-9459-d0cdd0a606fa-nodeproc\") pod \"calico-node-mnlx6\" (UID: \"a058e96a-678d-4e20-9459-d0cdd0a606fa\") " pod="calico-system/calico-node-mnlx6"
Mar 7 01:16:14.951448 kubelet[3191]: I0307 01:16:14.950195 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a058e96a-678d-4e20-9459-d0cdd0a606fa-tigera-ca-bundle\") pod \"calico-node-mnlx6\" (UID: \"a058e96a-678d-4e20-9459-d0cdd0a606fa\") " pod="calico-system/calico-node-mnlx6"
Mar 7 01:16:14.951448 kubelet[3191]: I0307 01:16:14.950219 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a058e96a-678d-4e20-9459-d0cdd0a606fa-cni-log-dir\") pod \"calico-node-mnlx6\" (UID: \"a058e96a-678d-4e20-9459-d0cdd0a606fa\") " pod="calico-system/calico-node-mnlx6"
Mar 7 01:16:14.951448 kubelet[3191]: I0307 01:16:14.950927 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a058e96a-678d-4e20-9459-d0cdd0a606fa-var-run-calico\") pod \"calico-node-mnlx6\" (UID: \"a058e96a-678d-4e20-9459-d0cdd0a606fa\") " pod="calico-system/calico-node-mnlx6"
Mar 7 01:16:14.951796 kubelet[3191]: I0307 01:16:14.951680 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txzw8\" (UniqueName: \"kubernetes.io/projected/a058e96a-678d-4e20-9459-d0cdd0a606fa-kube-api-access-txzw8\") pod \"calico-node-mnlx6\" (UID: \"a058e96a-678d-4e20-9459-d0cdd0a606fa\") " pod="calico-system/calico-node-mnlx6"
Mar 7 01:16:14.951796 kubelet[3191]: I0307 01:16:14.951728 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a058e96a-678d-4e20-9459-d0cdd0a606fa-cni-net-dir\") pod \"calico-node-mnlx6\" (UID: \"a058e96a-678d-4e20-9459-d0cdd0a606fa\") " pod="calico-system/calico-node-mnlx6"
Mar 7 01:16:14.977742 containerd[1963]: time="2026-03-07T01:16:14.977588787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-599d495554-x5ph2,Uid:21128cc0-1755-451f-8f5f-5abe3d3a011c,Namespace:calico-system,Attempt:0,}"
Mar 7 01:16:15.004831 kubelet[3191]: E0307 01:16:14.997752 3191 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kdrcv" podUID="9aed3645-1ca9-4273-9dbb-5a5fa746e5c3"
Mar 7 01:16:15.053719 kubelet[3191]: I0307 01:16:15.053676 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9jkx\" (UniqueName: \"kubernetes.io/projected/9aed3645-1ca9-4273-9dbb-5a5fa746e5c3-kube-api-access-t9jkx\") pod \"csi-node-driver-kdrcv\" (UID: \"9aed3645-1ca9-4273-9dbb-5a5fa746e5c3\") " pod="calico-system/csi-node-driver-kdrcv"
Mar 7 01:16:15.053864 kubelet[3191]: I0307 01:16:15.053811 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9aed3645-1ca9-4273-9dbb-5a5fa746e5c3-registration-dir\") pod \"csi-node-driver-kdrcv\" (UID: \"9aed3645-1ca9-4273-9dbb-5a5fa746e5c3\") " pod="calico-system/csi-node-driver-kdrcv"
Mar 7 01:16:15.053864 kubelet[3191]: I0307 01:16:15.053856 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9aed3645-1ca9-4273-9dbb-5a5fa746e5c3-socket-dir\") pod \"csi-node-driver-kdrcv\" (UID: \"9aed3645-1ca9-4273-9dbb-5a5fa746e5c3\") " pod="calico-system/csi-node-driver-kdrcv"
Mar 7 01:16:15.055739 kubelet[3191]: I0307 01:16:15.054084 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9aed3645-1ca9-4273-9dbb-5a5fa746e5c3-kubelet-dir\") pod \"csi-node-driver-kdrcv\" (UID: \"9aed3645-1ca9-4273-9dbb-5a5fa746e5c3\") " pod="calico-system/csi-node-driver-kdrcv"
Mar 7 01:16:15.055739 kubelet[3191]: I0307 01:16:15.054133 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9aed3645-1ca9-4273-9dbb-5a5fa746e5c3-varrun\") pod \"csi-node-driver-kdrcv\" (UID: \"9aed3645-1ca9-4273-9dbb-5a5fa746e5c3\") " pod="calico-system/csi-node-driver-kdrcv"
Mar 7 01:16:15.077387 kubelet[3191]: E0307 01:16:15.077321 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:16:15.077387 kubelet[3191]: W0307 01:16:15.077384 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:16:15.077606 kubelet[3191]: E0307 01:16:15.077429 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:16:15.116036 kubelet[3191]: E0307 01:16:15.115990 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:16:15.116269 kubelet[3191]: W0307 01:16:15.116039 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:16:15.116269 kubelet[3191]: E0307 01:16:15.116072 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:16:15.130512 containerd[1963]: time="2026-03-07T01:16:15.130380097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:16:15.130683 containerd[1963]: time="2026-03-07T01:16:15.130534321Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:16:15.130683 containerd[1963]: time="2026-03-07T01:16:15.130568238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:16:15.131436 containerd[1963]: time="2026-03-07T01:16:15.130828298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:16:15.154911 kubelet[3191]: E0307 01:16:15.154869 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:16:15.155160 kubelet[3191]: W0307 01:16:15.154971 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:16:15.155160 kubelet[3191]: E0307 01:16:15.155010 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:16:15.156261 kubelet[3191]: E0307 01:16:15.155837 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:16:15.156261 kubelet[3191]: W0307 01:16:15.155879 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:16:15.156261 kubelet[3191]: E0307 01:16:15.155908 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:16:15.157756 kubelet[3191]: E0307 01:16:15.157430 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:16:15.157756 kubelet[3191]: W0307 01:16:15.157444 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:16:15.157756 kubelet[3191]: E0307 01:16:15.157455 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:16:15.160306 kubelet[3191]: E0307 01:16:15.158820 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:16:15.160306 kubelet[3191]: W0307 01:16:15.158839 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:16:15.160306 kubelet[3191]: E0307 01:16:15.158854 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:16:15.160306 kubelet[3191]: E0307 01:16:15.159203 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:16:15.160306 kubelet[3191]: W0307 01:16:15.159215 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:16:15.160306 kubelet[3191]: E0307 01:16:15.159227 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:15.160306 kubelet[3191]: E0307 01:16:15.159652 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:15.160306 kubelet[3191]: W0307 01:16:15.159664 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:15.160306 kubelet[3191]: E0307 01:16:15.159689 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:15.160306 kubelet[3191]: E0307 01:16:15.160061 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:15.160854 kubelet[3191]: W0307 01:16:15.160203 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:15.160854 kubelet[3191]: E0307 01:16:15.160217 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:15.165017 containerd[1963]: time="2026-03-07T01:16:15.164778220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mnlx6,Uid:a058e96a-678d-4e20-9459-d0cdd0a606fa,Namespace:calico-system,Attempt:0,}" Mar 7 01:16:15.171639 kubelet[3191]: E0307 01:16:15.170693 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:15.171639 kubelet[3191]: W0307 01:16:15.170719 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:15.171639 kubelet[3191]: E0307 01:16:15.170744 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:15.171639 kubelet[3191]: E0307 01:16:15.171072 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:15.171639 kubelet[3191]: W0307 01:16:15.171084 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:15.171639 kubelet[3191]: E0307 01:16:15.171098 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:15.171639 kubelet[3191]: E0307 01:16:15.171456 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:15.171639 kubelet[3191]: W0307 01:16:15.171467 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:15.171639 kubelet[3191]: E0307 01:16:15.171479 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:15.173948 kubelet[3191]: E0307 01:16:15.173549 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:15.173948 kubelet[3191]: W0307 01:16:15.173569 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:15.173948 kubelet[3191]: E0307 01:16:15.173587 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:15.174610 kubelet[3191]: E0307 01:16:15.174509 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:15.174610 kubelet[3191]: W0307 01:16:15.174527 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:15.174610 kubelet[3191]: E0307 01:16:15.174543 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:15.176807 kubelet[3191]: E0307 01:16:15.174931 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:15.176807 kubelet[3191]: W0307 01:16:15.174946 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:15.176807 kubelet[3191]: E0307 01:16:15.174960 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:15.176807 kubelet[3191]: E0307 01:16:15.175241 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:15.176807 kubelet[3191]: W0307 01:16:15.175251 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:15.176807 kubelet[3191]: E0307 01:16:15.175275 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:15.176807 kubelet[3191]: E0307 01:16:15.175528 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:15.176807 kubelet[3191]: W0307 01:16:15.175538 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:15.176807 kubelet[3191]: E0307 01:16:15.175549 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:15.176807 kubelet[3191]: E0307 01:16:15.175805 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:15.177463 kubelet[3191]: W0307 01:16:15.175813 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:15.177463 kubelet[3191]: E0307 01:16:15.175825 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:15.177463 kubelet[3191]: E0307 01:16:15.176116 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:15.177463 kubelet[3191]: W0307 01:16:15.176125 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:15.177463 kubelet[3191]: E0307 01:16:15.176137 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:15.177463 kubelet[3191]: E0307 01:16:15.176450 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:15.177463 kubelet[3191]: W0307 01:16:15.176460 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:15.177463 kubelet[3191]: E0307 01:16:15.176473 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:15.177463 kubelet[3191]: E0307 01:16:15.176727 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:15.177463 kubelet[3191]: W0307 01:16:15.176737 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:15.178093 kubelet[3191]: E0307 01:16:15.176747 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:15.178093 kubelet[3191]: E0307 01:16:15.177131 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:15.178093 kubelet[3191]: W0307 01:16:15.177141 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:15.178093 kubelet[3191]: E0307 01:16:15.177158 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:15.178093 kubelet[3191]: E0307 01:16:15.177454 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:15.178093 kubelet[3191]: W0307 01:16:15.177464 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:15.178093 kubelet[3191]: E0307 01:16:15.177476 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:15.178093 kubelet[3191]: E0307 01:16:15.177818 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:15.178093 kubelet[3191]: W0307 01:16:15.177830 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:15.178093 kubelet[3191]: E0307 01:16:15.177842 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:15.178905 kubelet[3191]: E0307 01:16:15.178103 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:15.178905 kubelet[3191]: W0307 01:16:15.178113 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:15.178905 kubelet[3191]: E0307 01:16:15.178126 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:15.178905 kubelet[3191]: E0307 01:16:15.178470 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:15.178905 kubelet[3191]: W0307 01:16:15.178481 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:15.178905 kubelet[3191]: E0307 01:16:15.178494 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:15.180407 kubelet[3191]: E0307 01:16:15.180182 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:15.180407 kubelet[3191]: W0307 01:16:15.180197 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:15.180407 kubelet[3191]: E0307 01:16:15.180213 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:15.198888 kubelet[3191]: E0307 01:16:15.198507 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:15.198888 kubelet[3191]: W0307 01:16:15.198529 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:15.198888 kubelet[3191]: E0307 01:16:15.198574 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:15.207247 systemd[1]: Started cri-containerd-56d2e54d52d2cbb18b300014ad101d415bcbd08ad35a8d9d37ffc0d6ceb36820.scope - libcontainer container 56d2e54d52d2cbb18b300014ad101d415bcbd08ad35a8d9d37ffc0d6ceb36820. Mar 7 01:16:15.261517 containerd[1963]: time="2026-03-07T01:16:15.260531362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:16:15.264589 containerd[1963]: time="2026-03-07T01:16:15.262502004Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:16:15.264589 containerd[1963]: time="2026-03-07T01:16:15.262567058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:15.264589 containerd[1963]: time="2026-03-07T01:16:15.262737196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:15.290050 systemd[1]: Started cri-containerd-999791f259ce18d4a5bb611f3f8d34130cae538eb17b55cd6762e190a8bcb593.scope - libcontainer container 999791f259ce18d4a5bb611f3f8d34130cae538eb17b55cd6762e190a8bcb593. 
Mar 7 01:16:15.299456 containerd[1963]: time="2026-03-07T01:16:15.299403013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-599d495554-x5ph2,Uid:21128cc0-1755-451f-8f5f-5abe3d3a011c,Namespace:calico-system,Attempt:0,} returns sandbox id \"56d2e54d52d2cbb18b300014ad101d415bcbd08ad35a8d9d37ffc0d6ceb36820\""
Mar 7 01:16:15.302870 containerd[1963]: time="2026-03-07T01:16:15.302821204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Mar 7 01:16:15.347753 containerd[1963]: time="2026-03-07T01:16:15.347700958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mnlx6,Uid:a058e96a-678d-4e20-9459-d0cdd0a606fa,Namespace:calico-system,Attempt:0,} returns sandbox id \"999791f259ce18d4a5bb611f3f8d34130cae538eb17b55cd6762e190a8bcb593\""
Mar 7 01:16:16.481224 kubelet[3191]: E0307 01:16:16.480543 3191 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kdrcv" podUID="9aed3645-1ca9-4273-9dbb-5a5fa746e5c3"
Mar 7 01:16:16.846178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2783707591.mount: Deactivated successfully.
Mar 7 01:16:18.454108 containerd[1963]: time="2026-03-07T01:16:18.454054498Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:16:18.455874 containerd[1963]: time="2026-03-07T01:16:18.455803199Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596"
Mar 7 01:16:18.458115 containerd[1963]: time="2026-03-07T01:16:18.458052830Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:16:18.468879 containerd[1963]: time="2026-03-07T01:16:18.468795917Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:16:18.470436 containerd[1963]: time="2026-03-07T01:16:18.469705929Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 3.166839454s"
Mar 7 01:16:18.470436 containerd[1963]: time="2026-03-07T01:16:18.469756173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\""
Mar 7 01:16:18.471178 containerd[1963]: time="2026-03-07T01:16:18.471142391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Mar 7 01:16:18.481010 kubelet[3191]: E0307 01:16:18.479951 3191 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kdrcv" podUID="9aed3645-1ca9-4273-9dbb-5a5fa746e5c3"
Mar 7 01:16:18.499780 containerd[1963]: time="2026-03-07T01:16:18.499723382Z" level=info msg="CreateContainer within sandbox \"56d2e54d52d2cbb18b300014ad101d415bcbd08ad35a8d9d37ffc0d6ceb36820\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Mar 7 01:16:18.537596 containerd[1963]: time="2026-03-07T01:16:18.537548441Z" level=info msg="CreateContainer within sandbox \"56d2e54d52d2cbb18b300014ad101d415bcbd08ad35a8d9d37ffc0d6ceb36820\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"48074d6bc51b68dc6b0a671e9193aa935fa02e47ceb97f1e75878a99aa34e042\""
Mar 7 01:16:18.538434 containerd[1963]: time="2026-03-07T01:16:18.538396864Z" level=info msg="StartContainer for \"48074d6bc51b68dc6b0a671e9193aa935fa02e47ceb97f1e75878a99aa34e042\""
Mar 7 01:16:18.605279 systemd[1]: Started cri-containerd-48074d6bc51b68dc6b0a671e9193aa935fa02e47ceb97f1e75878a99aa34e042.scope - libcontainer container 48074d6bc51b68dc6b0a671e9193aa935fa02e47ceb97f1e75878a99aa34e042.
Mar 7 01:16:18.661008 containerd[1963]: time="2026-03-07T01:16:18.660946120Z" level=info msg="StartContainer for \"48074d6bc51b68dc6b0a671e9193aa935fa02e47ceb97f1e75878a99aa34e042\" returns successfully"
Mar 7 01:16:19.646032 kubelet[3191]: E0307 01:16:19.645745 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:16:19.646032 kubelet[3191]: W0307 01:16:19.645808 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:16:19.646032 kubelet[3191]: E0307 01:16:19.645842 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:16:19.646780 kubelet[3191]: E0307 01:16:19.646755 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:16:19.646780 kubelet[3191]: W0307 01:16:19.646776 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:16:19.646969 kubelet[3191]: E0307 01:16:19.646794 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:16:19.647107 kubelet[3191]: E0307 01:16:19.647090 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:16:19.647196 kubelet[3191]: W0307 01:16:19.647107 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:16:19.647196 kubelet[3191]: E0307 01:16:19.647121 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:16:19.647617 kubelet[3191]: E0307 01:16:19.647472 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:16:19.647617 kubelet[3191]: W0307 01:16:19.647495 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:16:19.647617 kubelet[3191]: E0307 01:16:19.647506 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:16:19.647864 kubelet[3191]: E0307 01:16:19.647843 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:16:19.647957 kubelet[3191]: W0307 01:16:19.647873 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:16:19.647957 kubelet[3191]: E0307 01:16:19.647887 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:16:19.648181 kubelet[3191]: E0307 01:16:19.648165 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:16:19.648234 kubelet[3191]: W0307 01:16:19.648182 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:16:19.648234 kubelet[3191]: E0307 01:16:19.648197 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:16:19.648488 kubelet[3191]: E0307 01:16:19.648468 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:16:19.648488 kubelet[3191]: W0307 01:16:19.648486 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:16:19.648699 kubelet[3191]: E0307 01:16:19.648500 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:16:19.649039 kubelet[3191]: E0307 01:16:19.649017 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:16:19.649194 kubelet[3191]: W0307 01:16:19.649035 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:16:19.649194 kubelet[3191]: E0307 01:16:19.649066 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:16:19.649340 kubelet[3191]: E0307 01:16:19.649316 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:16:19.649340 kubelet[3191]: W0307 01:16:19.649330 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:16:19.650159 kubelet[3191]: E0307 01:16:19.649343 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:16:19.650159 kubelet[3191]: E0307 01:16:19.649571 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:16:19.650159 kubelet[3191]: W0307 01:16:19.649585 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:16:19.650159 kubelet[3191]: E0307 01:16:19.649599 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:16:19.650159 kubelet[3191]: E0307 01:16:19.649862 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:16:19.650159 kubelet[3191]: W0307 01:16:19.649869 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:16:19.650159 kubelet[3191]: E0307 01:16:19.649878 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:16:19.650548 kubelet[3191]: E0307 01:16:19.650511 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:16:19.650548 kubelet[3191]: W0307 01:16:19.650543 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:16:19.650699 kubelet[3191]: E0307 01:16:19.650558 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:16:19.650838 kubelet[3191]: E0307 01:16:19.650820 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:16:19.650838 kubelet[3191]: W0307 01:16:19.650836 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:16:19.650951 kubelet[3191]: E0307 01:16:19.650848 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:16:19.651118 kubelet[3191]: E0307 01:16:19.651098 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:16:19.651180 kubelet[3191]: W0307 01:16:19.651117 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:16:19.651180 kubelet[3191]: E0307 01:16:19.651130 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:16:19.651369 kubelet[3191]: E0307 01:16:19.651351 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:16:19.651369 kubelet[3191]: W0307 01:16:19.651369 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:16:19.651478 kubelet[3191]: E0307 01:16:19.651380 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:16:19.709192 kubelet[3191]: E0307 01:16:19.709151 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:16:19.709192 kubelet[3191]: W0307 01:16:19.709187 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:16:19.709466 kubelet[3191]: E0307 01:16:19.709211 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:16:19.709600 kubelet[3191]: E0307 01:16:19.709527 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:16:19.709600 kubelet[3191]: W0307 01:16:19.709540 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:16:19.709600 kubelet[3191]: E0307 01:16:19.709578 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:16:19.710074 kubelet[3191]: E0307 01:16:19.709827 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:16:19.710074 kubelet[3191]: W0307 01:16:19.709837 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:16:19.710074 kubelet[3191]: E0307 01:16:19.709851 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:19.710318 kubelet[3191]: E0307 01:16:19.710286 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.710542 kubelet[3191]: W0307 01:16:19.710378 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.710542 kubelet[3191]: E0307 01:16:19.710411 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:19.710762 kubelet[3191]: E0307 01:16:19.710738 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.710845 kubelet[3191]: W0307 01:16:19.710762 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.710845 kubelet[3191]: E0307 01:16:19.710778 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:19.711140 kubelet[3191]: E0307 01:16:19.711119 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.711140 kubelet[3191]: W0307 01:16:19.711138 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.711301 kubelet[3191]: E0307 01:16:19.711152 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:19.711483 kubelet[3191]: E0307 01:16:19.711466 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.711483 kubelet[3191]: W0307 01:16:19.711483 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.711662 kubelet[3191]: E0307 01:16:19.711497 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:19.711749 kubelet[3191]: E0307 01:16:19.711732 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.711749 kubelet[3191]: W0307 01:16:19.711742 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.711905 kubelet[3191]: E0307 01:16:19.711754 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:19.712084 kubelet[3191]: E0307 01:16:19.712059 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.712084 kubelet[3191]: W0307 01:16:19.712077 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.712241 kubelet[3191]: E0307 01:16:19.712091 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:19.712717 kubelet[3191]: E0307 01:16:19.712523 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.712717 kubelet[3191]: W0307 01:16:19.712708 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.712841 kubelet[3191]: E0307 01:16:19.712727 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:19.713086 kubelet[3191]: E0307 01:16:19.713063 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.713086 kubelet[3191]: W0307 01:16:19.713079 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.713287 kubelet[3191]: E0307 01:16:19.713094 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:19.713367 kubelet[3191]: E0307 01:16:19.713313 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.713367 kubelet[3191]: W0307 01:16:19.713324 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.713367 kubelet[3191]: E0307 01:16:19.713337 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:19.713570 kubelet[3191]: E0307 01:16:19.713546 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.713570 kubelet[3191]: W0307 01:16:19.713555 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.713570 kubelet[3191]: E0307 01:16:19.713567 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:19.713819 kubelet[3191]: E0307 01:16:19.713805 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.713899 kubelet[3191]: W0307 01:16:19.713819 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.713899 kubelet[3191]: E0307 01:16:19.713832 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:19.714311 kubelet[3191]: E0307 01:16:19.714292 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.714311 kubelet[3191]: W0307 01:16:19.714310 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.714456 kubelet[3191]: E0307 01:16:19.714323 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:19.714641 kubelet[3191]: E0307 01:16:19.714621 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.714641 kubelet[3191]: W0307 01:16:19.714638 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.714756 kubelet[3191]: E0307 01:16:19.714652 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:19.715339 kubelet[3191]: E0307 01:16:19.715320 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.715339 kubelet[3191]: W0307 01:16:19.715337 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.715456 kubelet[3191]: E0307 01:16:19.715350 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:19.715647 kubelet[3191]: E0307 01:16:19.715627 3191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.715647 kubelet[3191]: W0307 01:16:19.715645 3191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.715763 kubelet[3191]: E0307 01:16:19.715658 3191 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:19.948949 containerd[1963]: time="2026-03-07T01:16:19.948818316Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:19.953022 containerd[1963]: time="2026-03-07T01:16:19.951654608Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 7 01:16:19.953022 containerd[1963]: time="2026-03-07T01:16:19.951770564Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:19.954772 containerd[1963]: time="2026-03-07T01:16:19.954730055Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:19.956075 containerd[1963]: time="2026-03-07T01:16:19.956033971Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.484847272s" Mar 7 01:16:19.956205 containerd[1963]: time="2026-03-07T01:16:19.956184537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 7 01:16:19.962564 containerd[1963]: time="2026-03-07T01:16:19.962495980Z" level=info msg="CreateContainer within sandbox \"999791f259ce18d4a5bb611f3f8d34130cae538eb17b55cd6762e190a8bcb593\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 7 01:16:20.029557 containerd[1963]: time="2026-03-07T01:16:20.029498555Z" level=info msg="CreateContainer within sandbox \"999791f259ce18d4a5bb611f3f8d34130cae538eb17b55cd6762e190a8bcb593\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2c112ef9eb0244bf69c3060947e21fe056da0337d1c8a596ec2102fc6a76dc86\"" Mar 7 01:16:20.032526 containerd[1963]: time="2026-03-07T01:16:20.030434129Z" level=info msg="StartContainer for \"2c112ef9eb0244bf69c3060947e21fe056da0337d1c8a596ec2102fc6a76dc86\"" Mar 7 01:16:20.078652 systemd[1]: run-containerd-runc-k8s.io-2c112ef9eb0244bf69c3060947e21fe056da0337d1c8a596ec2102fc6a76dc86-runc.rk1EHw.mount: Deactivated successfully. Mar 7 01:16:20.087293 systemd[1]: Started cri-containerd-2c112ef9eb0244bf69c3060947e21fe056da0337d1c8a596ec2102fc6a76dc86.scope - libcontainer container 2c112ef9eb0244bf69c3060947e21fe056da0337d1c8a596ec2102fc6a76dc86. 
Mar 7 01:16:20.126476 containerd[1963]: time="2026-03-07T01:16:20.126429530Z" level=info msg="StartContainer for \"2c112ef9eb0244bf69c3060947e21fe056da0337d1c8a596ec2102fc6a76dc86\" returns successfully" Mar 7 01:16:20.141910 systemd[1]: cri-containerd-2c112ef9eb0244bf69c3060947e21fe056da0337d1c8a596ec2102fc6a76dc86.scope: Deactivated successfully. Mar 7 01:16:20.245339 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c112ef9eb0244bf69c3060947e21fe056da0337d1c8a596ec2102fc6a76dc86-rootfs.mount: Deactivated successfully. Mar 7 01:16:20.351753 containerd[1963]: time="2026-03-07T01:16:20.326933407Z" level=info msg="shim disconnected" id=2c112ef9eb0244bf69c3060947e21fe056da0337d1c8a596ec2102fc6a76dc86 namespace=k8s.io Mar 7 01:16:20.351753 containerd[1963]: time="2026-03-07T01:16:20.351749691Z" level=warning msg="cleaning up after shim disconnected" id=2c112ef9eb0244bf69c3060947e21fe056da0337d1c8a596ec2102fc6a76dc86 namespace=k8s.io Mar 7 01:16:20.352101 containerd[1963]: time="2026-03-07T01:16:20.351775531Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:16:20.367701 containerd[1963]: time="2026-03-07T01:16:20.367646939Z" level=warning msg="cleanup warnings time=\"2026-03-07T01:16:20Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 7 01:16:20.481136 kubelet[3191]: E0307 01:16:20.478268 3191 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kdrcv" podUID="9aed3645-1ca9-4273-9dbb-5a5fa746e5c3" Mar 7 01:16:20.632910 kubelet[3191]: I0307 01:16:20.632865 3191 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:16:20.635404 containerd[1963]: 
time="2026-03-07T01:16:20.635274566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 7 01:16:20.656677 kubelet[3191]: I0307 01:16:20.656449 3191 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-599d495554-x5ph2" podStartSLOduration=3.487201213 podStartE2EDuration="6.65642741s" podCreationTimestamp="2026-03-07 01:16:14 +0000 UTC" firstStartedPulling="2026-03-07 01:16:15.301540286 +0000 UTC m=+25.002606613" lastFinishedPulling="2026-03-07 01:16:18.470766483 +0000 UTC m=+28.171832810" observedRunningTime="2026-03-07 01:16:19.623357231 +0000 UTC m=+29.324423579" watchObservedRunningTime="2026-03-07 01:16:20.65642741 +0000 UTC m=+30.357493763" Mar 7 01:16:22.480355 kubelet[3191]: E0307 01:16:22.478723 3191 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kdrcv" podUID="9aed3645-1ca9-4273-9dbb-5a5fa746e5c3" Mar 7 01:16:24.479315 kubelet[3191]: E0307 01:16:24.477851 3191 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kdrcv" podUID="9aed3645-1ca9-4273-9dbb-5a5fa746e5c3" Mar 7 01:16:26.479510 kubelet[3191]: E0307 01:16:26.477936 3191 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kdrcv" podUID="9aed3645-1ca9-4273-9dbb-5a5fa746e5c3" Mar 7 01:16:28.479067 kubelet[3191]: E0307 01:16:28.478000 3191 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not 
ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kdrcv" podUID="9aed3645-1ca9-4273-9dbb-5a5fa746e5c3" Mar 7 01:16:30.481253 kubelet[3191]: E0307 01:16:30.480448 3191 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kdrcv" podUID="9aed3645-1ca9-4273-9dbb-5a5fa746e5c3" Mar 7 01:16:30.546326 kubelet[3191]: I0307 01:16:30.545705 3191 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:16:32.479603 kubelet[3191]: E0307 01:16:32.479519 3191 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kdrcv" podUID="9aed3645-1ca9-4273-9dbb-5a5fa746e5c3" Mar 7 01:16:33.194750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2194823560.mount: Deactivated successfully. 
Mar 7 01:16:33.267029 containerd[1963]: time="2026-03-07T01:16:33.258517878Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:33.287535 containerd[1963]: time="2026-03-07T01:16:33.261679855Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 7 01:16:33.290718 containerd[1963]: time="2026-03-07T01:16:33.290666864Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:33.294987 containerd[1963]: time="2026-03-07T01:16:33.294916941Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:33.296037 containerd[1963]: time="2026-03-07T01:16:33.295972005Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 12.660650114s" Mar 7 01:16:33.296217 containerd[1963]: time="2026-03-07T01:16:33.296192077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 7 01:16:33.306439 containerd[1963]: time="2026-03-07T01:16:33.306310815Z" level=info msg="CreateContainer within sandbox \"999791f259ce18d4a5bb611f3f8d34130cae538eb17b55cd6762e190a8bcb593\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 7 01:16:33.420486 containerd[1963]: time="2026-03-07T01:16:33.420432877Z" level=info msg="CreateContainer 
within sandbox \"999791f259ce18d4a5bb611f3f8d34130cae538eb17b55cd6762e190a8bcb593\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"9f089b99450b9e394a2a499568288b76ab6ae0f5362916a8c8079d64b7448bd2\"" Mar 7 01:16:33.422053 containerd[1963]: time="2026-03-07T01:16:33.421602572Z" level=info msg="StartContainer for \"9f089b99450b9e394a2a499568288b76ab6ae0f5362916a8c8079d64b7448bd2\"" Mar 7 01:16:33.590861 systemd[1]: run-containerd-runc-k8s.io-9f089b99450b9e394a2a499568288b76ab6ae0f5362916a8c8079d64b7448bd2-runc.l53WSQ.mount: Deactivated successfully. Mar 7 01:16:33.601819 systemd[1]: Started cri-containerd-9f089b99450b9e394a2a499568288b76ab6ae0f5362916a8c8079d64b7448bd2.scope - libcontainer container 9f089b99450b9e394a2a499568288b76ab6ae0f5362916a8c8079d64b7448bd2. Mar 7 01:16:33.665442 containerd[1963]: time="2026-03-07T01:16:33.665373718Z" level=info msg="StartContainer for \"9f089b99450b9e394a2a499568288b76ab6ae0f5362916a8c8079d64b7448bd2\" returns successfully" Mar 7 01:16:33.748110 systemd[1]: cri-containerd-9f089b99450b9e394a2a499568288b76ab6ae0f5362916a8c8079d64b7448bd2.scope: Deactivated successfully. 
Mar 7 01:16:33.796222 containerd[1963]: time="2026-03-07T01:16:33.796137793Z" level=info msg="shim disconnected" id=9f089b99450b9e394a2a499568288b76ab6ae0f5362916a8c8079d64b7448bd2 namespace=k8s.io Mar 7 01:16:33.796222 containerd[1963]: time="2026-03-07T01:16:33.796218266Z" level=warning msg="cleaning up after shim disconnected" id=9f089b99450b9e394a2a499568288b76ab6ae0f5362916a8c8079d64b7448bd2 namespace=k8s.io Mar 7 01:16:33.796222 containerd[1963]: time="2026-03-07T01:16:33.796230956Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:16:33.813752 containerd[1963]: time="2026-03-07T01:16:33.813682317Z" level=warning msg="cleanup warnings time=\"2026-03-07T01:16:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 7 01:16:34.195814 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f089b99450b9e394a2a499568288b76ab6ae0f5362916a8c8079d64b7448bd2-rootfs.mount: Deactivated successfully. 
Mar 7 01:16:34.477960 kubelet[3191]: E0307 01:16:34.477314 3191 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kdrcv" podUID="9aed3645-1ca9-4273-9dbb-5a5fa746e5c3" Mar 7 01:16:34.688175 containerd[1963]: time="2026-03-07T01:16:34.688094000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 7 01:16:36.477437 kubelet[3191]: E0307 01:16:36.477388 3191 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kdrcv" podUID="9aed3645-1ca9-4273-9dbb-5a5fa746e5c3" Mar 7 01:16:38.478950 kubelet[3191]: E0307 01:16:38.478894 3191 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kdrcv" podUID="9aed3645-1ca9-4273-9dbb-5a5fa746e5c3" Mar 7 01:16:38.586504 containerd[1963]: time="2026-03-07T01:16:38.586446294Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:38.588639 containerd[1963]: time="2026-03-07T01:16:38.588579535Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 7 01:16:38.591059 containerd[1963]: time="2026-03-07T01:16:38.590968436Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:38.596126 containerd[1963]: 
time="2026-03-07T01:16:38.594992434Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:38.596126 containerd[1963]: time="2026-03-07T01:16:38.595920922Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.907778708s" Mar 7 01:16:38.596126 containerd[1963]: time="2026-03-07T01:16:38.595960938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 7 01:16:38.604581 containerd[1963]: time="2026-03-07T01:16:38.604528919Z" level=info msg="CreateContainer within sandbox \"999791f259ce18d4a5bb611f3f8d34130cae538eb17b55cd6762e190a8bcb593\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 7 01:16:38.632703 containerd[1963]: time="2026-03-07T01:16:38.632651506Z" level=info msg="CreateContainer within sandbox \"999791f259ce18d4a5bb611f3f8d34130cae538eb17b55cd6762e190a8bcb593\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"12a112733315755219df8be7e59973d34def43250f7abc4c57f702cce01ba28b\"" Mar 7 01:16:38.633727 containerd[1963]: time="2026-03-07T01:16:38.633662970Z" level=info msg="StartContainer for \"12a112733315755219df8be7e59973d34def43250f7abc4c57f702cce01ba28b\"" Mar 7 01:16:38.681328 systemd[1]: Started cri-containerd-12a112733315755219df8be7e59973d34def43250f7abc4c57f702cce01ba28b.scope - libcontainer container 12a112733315755219df8be7e59973d34def43250f7abc4c57f702cce01ba28b. 
Mar 7 01:16:38.728404 containerd[1963]: time="2026-03-07T01:16:38.728346079Z" level=info msg="StartContainer for \"12a112733315755219df8be7e59973d34def43250f7abc4c57f702cce01ba28b\" returns successfully" Mar 7 01:16:39.585029 systemd[1]: cri-containerd-12a112733315755219df8be7e59973d34def43250f7abc4c57f702cce01ba28b.scope: Deactivated successfully. Mar 7 01:16:39.630749 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12a112733315755219df8be7e59973d34def43250f7abc4c57f702cce01ba28b-rootfs.mount: Deactivated successfully. Mar 7 01:16:39.653946 kubelet[3191]: I0307 01:16:39.652306 3191 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 7 01:16:39.705465 containerd[1963]: time="2026-03-07T01:16:39.705161399Z" level=info msg="shim disconnected" id=12a112733315755219df8be7e59973d34def43250f7abc4c57f702cce01ba28b namespace=k8s.io Mar 7 01:16:39.705465 containerd[1963]: time="2026-03-07T01:16:39.705234202Z" level=warning msg="cleaning up after shim disconnected" id=12a112733315755219df8be7e59973d34def43250f7abc4c57f702cce01ba28b namespace=k8s.io Mar 7 01:16:39.705465 containerd[1963]: time="2026-03-07T01:16:39.705252098Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:16:39.885484 systemd[1]: Created slice kubepods-besteffort-podf8ecbf7f_4830_437f_b292_9cd1d51ae57e.slice - libcontainer container kubepods-besteffort-podf8ecbf7f_4830_437f_b292_9cd1d51ae57e.slice. 
Mar 7 01:16:39.918091 kubelet[3191]: I0307 01:16:39.918032 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e240c855-c8b1-420d-93db-8b8e45e00b2c-config-volume\") pod \"coredns-674b8bbfcf-lld45\" (UID: \"e240c855-c8b1-420d-93db-8b8e45e00b2c\") " pod="kube-system/coredns-674b8bbfcf-lld45"
Mar 7 01:16:39.918301 kubelet[3191]: I0307 01:16:39.918127 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjkwv\" (UniqueName: \"kubernetes.io/projected/e240c855-c8b1-420d-93db-8b8e45e00b2c-kube-api-access-gjkwv\") pod \"coredns-674b8bbfcf-lld45\" (UID: \"e240c855-c8b1-420d-93db-8b8e45e00b2c\") " pod="kube-system/coredns-674b8bbfcf-lld45"
Mar 7 01:16:39.918301 kubelet[3191]: I0307 01:16:39.918163 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f74be19c-3e5a-4ec1-971f-5534f5ca8d72-config-volume\") pod \"coredns-674b8bbfcf-hx6bb\" (UID: \"f74be19c-3e5a-4ec1-971f-5534f5ca8d72\") " pod="kube-system/coredns-674b8bbfcf-hx6bb"
Mar 7 01:16:39.918301 kubelet[3191]: I0307 01:16:39.918187 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lf5n\" (UniqueName: \"kubernetes.io/projected/f74be19c-3e5a-4ec1-971f-5534f5ca8d72-kube-api-access-8lf5n\") pod \"coredns-674b8bbfcf-hx6bb\" (UID: \"f74be19c-3e5a-4ec1-971f-5534f5ca8d72\") " pod="kube-system/coredns-674b8bbfcf-hx6bb"
Mar 7 01:16:39.918301 kubelet[3191]: I0307 01:16:39.918222 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c384657e-2841-43d4-89d4-ff693e5014b6-calico-apiserver-certs\") pod \"calico-apiserver-64c7867f4-fd5dm\" (UID: \"c384657e-2841-43d4-89d4-ff693e5014b6\") " pod="calico-system/calico-apiserver-64c7867f4-fd5dm"
Mar 7 01:16:39.918301 kubelet[3191]: I0307 01:16:39.918246 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxvl6\" (UniqueName: \"kubernetes.io/projected/c384657e-2841-43d4-89d4-ff693e5014b6-kube-api-access-wxvl6\") pod \"calico-apiserver-64c7867f4-fd5dm\" (UID: \"c384657e-2841-43d4-89d4-ff693e5014b6\") " pod="calico-system/calico-apiserver-64c7867f4-fd5dm"
Mar 7 01:16:39.918541 kubelet[3191]: I0307 01:16:39.918304 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9lj7\" (UniqueName: \"kubernetes.io/projected/f8ecbf7f-4830-437f-b292-9cd1d51ae57e-kube-api-access-g9lj7\") pod \"calico-kube-controllers-b98f9b6fc-npvst\" (UID: \"f8ecbf7f-4830-437f-b292-9cd1d51ae57e\") " pod="calico-system/calico-kube-controllers-b98f9b6fc-npvst"
Mar 7 01:16:39.918541 kubelet[3191]: I0307 01:16:39.918335 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8ecbf7f-4830-437f-b292-9cd1d51ae57e-tigera-ca-bundle\") pod \"calico-kube-controllers-b98f9b6fc-npvst\" (UID: \"f8ecbf7f-4830-437f-b292-9cd1d51ae57e\") " pod="calico-system/calico-kube-controllers-b98f9b6fc-npvst"
Mar 7 01:16:39.929489 systemd[1]: Created slice kubepods-besteffort-pod609357eb_6103_440d_a377_358699c55caf.slice - libcontainer container kubepods-besteffort-pod609357eb_6103_440d_a377_358699c55caf.slice.
Mar 7 01:16:39.940116 systemd[1]: Created slice kubepods-besteffort-podc384657e_2841_43d4_89d4_ff693e5014b6.slice - libcontainer container kubepods-besteffort-podc384657e_2841_43d4_89d4_ff693e5014b6.slice.
Mar 7 01:16:39.950209 systemd[1]: Created slice kubepods-besteffort-podd317c1e8_3726_4466_b652_a1bd0a0fc939.slice - libcontainer container kubepods-besteffort-podd317c1e8_3726_4466_b652_a1bd0a0fc939.slice.
Mar 7 01:16:39.961701 systemd[1]: Created slice kubepods-burstable-podf74be19c_3e5a_4ec1_971f_5534f5ca8d72.slice - libcontainer container kubepods-burstable-podf74be19c_3e5a_4ec1_971f_5534f5ca8d72.slice.
Mar 7 01:16:39.975180 systemd[1]: Created slice kubepods-besteffort-podc16a83f1_f0b9_4cb8_9fd4_5f908f7fd2c5.slice - libcontainer container kubepods-besteffort-podc16a83f1_f0b9_4cb8_9fd4_5f908f7fd2c5.slice.
Mar 7 01:16:39.984369 systemd[1]: Created slice kubepods-burstable-pode240c855_c8b1_420d_93db_8b8e45e00b2c.slice - libcontainer container kubepods-burstable-pode240c855_c8b1_420d_93db_8b8e45e00b2c.slice.
Mar 7 01:16:40.019008 kubelet[3191]: I0307 01:16:40.018649 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/609357eb-6103-440d-a377-358699c55caf-whisker-ca-bundle\") pod \"whisker-7547cf959d-mfpmd\" (UID: \"609357eb-6103-440d-a377-358699c55caf\") " pod="calico-system/whisker-7547cf959d-mfpmd"
Mar 7 01:16:40.019008 kubelet[3191]: I0307 01:16:40.018701 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d317c1e8-3726-4466-b652-a1bd0a0fc939-goldmane-key-pair\") pod \"goldmane-5b85766d88-9r4d8\" (UID: \"d317c1e8-3726-4466-b652-a1bd0a0fc939\") " pod="calico-system/goldmane-5b85766d88-9r4d8"
Mar 7 01:16:40.019008 kubelet[3191]: I0307 01:16:40.018797 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/609357eb-6103-440d-a377-358699c55caf-whisker-backend-key-pair\") pod \"whisker-7547cf959d-mfpmd\" (UID: \"609357eb-6103-440d-a377-358699c55caf\") " pod="calico-system/whisker-7547cf959d-mfpmd"
Mar 7 01:16:40.019008 kubelet[3191]: I0307 01:16:40.018843 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzrxb\" (UniqueName: \"kubernetes.io/projected/d317c1e8-3726-4466-b652-a1bd0a0fc939-kube-api-access-lzrxb\") pod \"goldmane-5b85766d88-9r4d8\" (UID: \"d317c1e8-3726-4466-b652-a1bd0a0fc939\") " pod="calico-system/goldmane-5b85766d88-9r4d8"
Mar 7 01:16:40.019008 kubelet[3191]: I0307 01:16:40.018870 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dd5f\" (UniqueName: \"kubernetes.io/projected/c16a83f1-f0b9-4cb8-9fd4-5f908f7fd2c5-kube-api-access-6dd5f\") pod \"calico-apiserver-64c7867f4-xrvts\" (UID: \"c16a83f1-f0b9-4cb8-9fd4-5f908f7fd2c5\") " pod="calico-system/calico-apiserver-64c7867f4-xrvts"
Mar 7 01:16:40.019363 kubelet[3191]: I0307 01:16:40.018892 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d317c1e8-3726-4466-b652-a1bd0a0fc939-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-9r4d8\" (UID: \"d317c1e8-3726-4466-b652-a1bd0a0fc939\") " pod="calico-system/goldmane-5b85766d88-9r4d8"
Mar 7 01:16:40.019363 kubelet[3191]: I0307 01:16:40.018918 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c16a83f1-f0b9-4cb8-9fd4-5f908f7fd2c5-calico-apiserver-certs\") pod \"calico-apiserver-64c7867f4-xrvts\" (UID: \"c16a83f1-f0b9-4cb8-9fd4-5f908f7fd2c5\") " pod="calico-system/calico-apiserver-64c7867f4-xrvts"
Mar 7 01:16:40.020997 kubelet[3191]: I0307 01:16:40.018967 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62xrm\" (UniqueName: \"kubernetes.io/projected/609357eb-6103-440d-a377-358699c55caf-kube-api-access-62xrm\") pod \"whisker-7547cf959d-mfpmd\" (UID: \"609357eb-6103-440d-a377-358699c55caf\") " pod="calico-system/whisker-7547cf959d-mfpmd"
Mar 7 01:16:40.020997 kubelet[3191]: I0307 01:16:40.020478 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d317c1e8-3726-4466-b652-a1bd0a0fc939-config\") pod \"goldmane-5b85766d88-9r4d8\" (UID: \"d317c1e8-3726-4466-b652-a1bd0a0fc939\") " pod="calico-system/goldmane-5b85766d88-9r4d8"
Mar 7 01:16:40.020997 kubelet[3191]: I0307 01:16:40.020558 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/609357eb-6103-440d-a377-358699c55caf-nginx-config\") pod \"whisker-7547cf959d-mfpmd\" (UID: \"609357eb-6103-440d-a377-358699c55caf\") " pod="calico-system/whisker-7547cf959d-mfpmd"
Mar 7 01:16:40.256097 containerd[1963]: time="2026-03-07T01:16:40.255848473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b98f9b6fc-npvst,Uid:f8ecbf7f-4830-437f-b292-9cd1d51ae57e,Namespace:calico-system,Attempt:0,}"
Mar 7 01:16:40.261362 containerd[1963]: time="2026-03-07T01:16:40.257436118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64c7867f4-fd5dm,Uid:c384657e-2841-43d4-89d4-ff693e5014b6,Namespace:calico-system,Attempt:0,}"
Mar 7 01:16:40.286033 containerd[1963]: time="2026-03-07T01:16:40.257439628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7547cf959d-mfpmd,Uid:609357eb-6103-440d-a377-358699c55caf,Namespace:calico-system,Attempt:0,}"
Mar 7 01:16:40.286033 containerd[1963]: time="2026-03-07T01:16:40.285170795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64c7867f4-xrvts,Uid:c16a83f1-f0b9-4cb8-9fd4-5f908f7fd2c5,Namespace:calico-system,Attempt:0,}"
Mar 7 01:16:40.293287 containerd[1963]: time="2026-03-07T01:16:40.293233212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lld45,Uid:e240c855-c8b1-420d-93db-8b8e45e00b2c,Namespace:kube-system,Attempt:0,}"
Mar 7 01:16:40.294583 containerd[1963]: time="2026-03-07T01:16:40.294509453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-9r4d8,Uid:d317c1e8-3726-4466-b652-a1bd0a0fc939,Namespace:calico-system,Attempt:0,}"
Mar 7 01:16:40.294949 containerd[1963]: time="2026-03-07T01:16:40.294891473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hx6bb,Uid:f74be19c-3e5a-4ec1-971f-5534f5ca8d72,Namespace:kube-system,Attempt:0,}"
Mar 7 01:16:40.496070 systemd[1]: Created slice kubepods-besteffort-pod9aed3645_1ca9_4273_9dbb_5a5fa746e5c3.slice - libcontainer container kubepods-besteffort-pod9aed3645_1ca9_4273_9dbb_5a5fa746e5c3.slice.
Mar 7 01:16:40.510696 containerd[1963]: time="2026-03-07T01:16:40.510565259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kdrcv,Uid:9aed3645-1ca9-4273-9dbb-5a5fa746e5c3,Namespace:calico-system,Attempt:0,}"
Mar 7 01:16:40.879489 containerd[1963]: time="2026-03-07T01:16:40.879261407Z" level=info msg="CreateContainer within sandbox \"999791f259ce18d4a5bb611f3f8d34130cae538eb17b55cd6762e190a8bcb593\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Mar 7 01:16:40.962635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3962176985.mount: Deactivated successfully.
Mar 7 01:16:40.974207 containerd[1963]: time="2026-03-07T01:16:40.974148610Z" level=info msg="CreateContainer within sandbox \"999791f259ce18d4a5bb611f3f8d34130cae538eb17b55cd6762e190a8bcb593\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a4199d562b530fe4cb1c762d9f840483b96521dcfed96d16f82467d1d22212bd\""
Mar 7 01:16:40.975970 containerd[1963]: time="2026-03-07T01:16:40.975864253Z" level=info msg="StartContainer for \"a4199d562b530fe4cb1c762d9f840483b96521dcfed96d16f82467d1d22212bd\""
Mar 7 01:16:41.119222 systemd[1]: Started cri-containerd-a4199d562b530fe4cb1c762d9f840483b96521dcfed96d16f82467d1d22212bd.scope - libcontainer container a4199d562b530fe4cb1c762d9f840483b96521dcfed96d16f82467d1d22212bd.
Mar 7 01:16:41.187662 containerd[1963]: time="2026-03-07T01:16:41.187541327Z" level=error msg="Failed to destroy network for sandbox \"dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:16:41.188501 containerd[1963]: time="2026-03-07T01:16:41.188219094Z" level=error msg="encountered an error cleaning up failed sandbox \"dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:16:41.188501 containerd[1963]: time="2026-03-07T01:16:41.188288142Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b98f9b6fc-npvst,Uid:f8ecbf7f-4830-437f-b292-9cd1d51ae57e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:16:41.193832 containerd[1963]: time="2026-03-07T01:16:41.193647783Z" level=error msg="Failed to destroy network for sandbox \"edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:16:41.195708 containerd[1963]: time="2026-03-07T01:16:41.195648384Z" level=error msg="encountered an error cleaning up failed sandbox \"edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:16:41.195850 containerd[1963]: time="2026-03-07T01:16:41.195744459Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hx6bb,Uid:f74be19c-3e5a-4ec1-971f-5534f5ca8d72,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:16:41.196869 kubelet[3191]: E0307 01:16:41.196813 3191 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:16:41.197335 kubelet[3191]: E0307 01:16:41.196972 3191 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:16:41.198333 kubelet[3191]: E0307 01:16:41.198295 3191 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hx6bb"
Mar 7 01:16:41.198414 kubelet[3191]: E0307 01:16:41.198353 3191 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hx6bb"
Mar 7 01:16:41.198549 kubelet[3191]: E0307 01:16:41.198432 3191 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-hx6bb_kube-system(f74be19c-3e5a-4ec1-971f-5534f5ca8d72)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-hx6bb_kube-system(f74be19c-3e5a-4ec1-971f-5534f5ca8d72)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-hx6bb" podUID="f74be19c-3e5a-4ec1-971f-5534f5ca8d72"
Mar 7 01:16:41.202072 kubelet[3191]: E0307 01:16:41.202023 3191 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b98f9b6fc-npvst"
Mar 7 01:16:41.202219 kubelet[3191]: E0307 01:16:41.202086 3191 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b98f9b6fc-npvst"
Mar 7 01:16:41.202219 kubelet[3191]: E0307 01:16:41.202149 3191 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-b98f9b6fc-npvst_calico-system(f8ecbf7f-4830-437f-b292-9cd1d51ae57e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-b98f9b6fc-npvst_calico-system(f8ecbf7f-4830-437f-b292-9cd1d51ae57e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b98f9b6fc-npvst" podUID="f8ecbf7f-4830-437f-b292-9cd1d51ae57e"
Mar 7 01:16:41.203804 containerd[1963]: time="2026-03-07T01:16:41.203487568Z" level=error msg="Failed to destroy network for sandbox \"0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:16:41.205775 containerd[1963]: time="2026-03-07T01:16:41.205577370Z" level=error msg="Failed to destroy network for sandbox \"00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:16:41.206999 containerd[1963]: time="2026-03-07T01:16:41.206552698Z" level=error msg="encountered an error cleaning up failed sandbox \"0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:16:41.207383 containerd[1963]: time="2026-03-07T01:16:41.207255624Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-9r4d8,Uid:d317c1e8-3726-4466-b652-a1bd0a0fc939,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:16:41.208025 kubelet[3191]: E0307 01:16:41.207500 3191 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:16:41.208025 kubelet[3191]: E0307 01:16:41.207555 3191 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-9r4d8"
Mar 7 01:16:41.208025 kubelet[3191]: E0307 01:16:41.207582 3191 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-9r4d8"
Mar 7 01:16:41.208230 kubelet[3191]: E0307 01:16:41.207640 3191 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-9r4d8_calico-system(d317c1e8-3726-4466-b652-a1bd0a0fc939)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-9r4d8_calico-system(d317c1e8-3726-4466-b652-a1bd0a0fc939)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-9r4d8" podUID="d317c1e8-3726-4466-b652-a1bd0a0fc939"
Mar 7 01:16:41.212960 containerd[1963]: time="2026-03-07T01:16:41.212908502Z" level=error msg="encountered an error cleaning up failed sandbox \"00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:16:41.213344 containerd[1963]: time="2026-03-07T01:16:41.213089496Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64c7867f4-fd5dm,Uid:c384657e-2841-43d4-89d4-ff693e5014b6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:16:41.214019 kubelet[3191]: E0307 01:16:41.213635 3191 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:16:41.214019 kubelet[3191]: E0307 01:16:41.213699 3191 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-64c7867f4-fd5dm"
Mar 7 01:16:41.214019 kubelet[3191]: E0307 01:16:41.213728 3191 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-64c7867f4-fd5dm"
Mar 7 01:16:41.214209 kubelet[3191]: E0307 01:16:41.213786 3191 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64c7867f4-fd5dm_calico-system(c384657e-2841-43d4-89d4-ff693e5014b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-64c7867f4-fd5dm_calico-system(c384657e-2841-43d4-89d4-ff693e5014b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-64c7867f4-fd5dm" podUID="c384657e-2841-43d4-89d4-ff693e5014b6"
Mar 7 01:16:41.235005 containerd[1963]: time="2026-03-07T01:16:41.234848906Z" level=info msg="StartContainer for \"a4199d562b530fe4cb1c762d9f840483b96521dcfed96d16f82467d1d22212bd\" returns successfully"
Mar 7 01:16:41.237882 containerd[1963]: time="2026-03-07T01:16:41.237763914Z" level=error msg="Failed to destroy network for sandbox \"7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:16:41.239102 containerd[1963]: time="2026-03-07T01:16:41.238886597Z" level=error msg="encountered an error cleaning up failed sandbox \"7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:16:41.239409 containerd[1963]: time="2026-03-07T01:16:41.239362971Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64c7867f4-xrvts,Uid:c16a83f1-f0b9-4cb8-9fd4-5f908f7fd2c5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:16:41.240258 kubelet[3191]: E0307 01:16:41.240029 3191 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:16:41.240258 kubelet[3191]: E0307 01:16:41.240092 3191 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-64c7867f4-xrvts"
Mar 7 01:16:41.240258 kubelet[3191]: E0307 01:16:41.240121 3191 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-64c7867f4-xrvts"
Mar 7 01:16:41.241132 kubelet[3191]: E0307 01:16:41.240202 3191 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64c7867f4-xrvts_calico-system(c16a83f1-f0b9-4cb8-9fd4-5f908f7fd2c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-64c7867f4-xrvts_calico-system(c16a83f1-f0b9-4cb8-9fd4-5f908f7fd2c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-64c7867f4-xrvts" podUID="c16a83f1-f0b9-4cb8-9fd4-5f908f7fd2c5"
Mar 7 01:16:41.243036 containerd[1963]: time="2026-03-07T01:16:41.242852541Z" level=error msg="Failed to destroy network for sandbox \"38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:16:41.243540 containerd[1963]: time="2026-03-07T01:16:41.243253431Z" level=error msg="encountered an error cleaning up failed sandbox \"38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:16:41.243540 containerd[1963]: time="2026-03-07T01:16:41.243326384Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7547cf959d-mfpmd,Uid:609357eb-6103-440d-a377-358699c55caf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:16:41.245051 containerd[1963]: time="2026-03-07T01:16:41.243778454Z" level=error msg="Failed to destroy network for sandbox \"3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:16:41.245051 containerd[1963]: time="2026-03-07T01:16:41.244284100Z" level=error msg="encountered an error cleaning up failed sandbox \"3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:16:41.245051 containerd[1963]: time="2026-03-07T01:16:41.244359354Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lld45,Uid:e240c855-c8b1-420d-93db-8b8e45e00b2c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:16:41.245187 kubelet[3191]: E0307 01:16:41.243956 3191 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:16:41.245187 kubelet[3191]: E0307 01:16:41.244040 3191 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7547cf959d-mfpmd"
Mar 7 01:16:41.245187 kubelet[3191]: E0307 01:16:41.244083 3191 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7547cf959d-mfpmd"
Mar 7 01:16:41.245358 kubelet[3191]: E0307 01:16:41.244196 3191 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7547cf959d-mfpmd_calico-system(609357eb-6103-440d-a377-358699c55caf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7547cf959d-mfpmd_calico-system(609357eb-6103-440d-a377-358699c55caf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7547cf959d-mfpmd" podUID="609357eb-6103-440d-a377-358699c55caf"
Mar 7 01:16:41.245358 kubelet[3191]: E0307 01:16:41.244611 3191 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:16:41.245358 kubelet[3191]: E0307 01:16:41.244673 3191 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-lld45"
Mar 7 01:16:41.245531 kubelet[3191]: E0307 01:16:41.244721 3191 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-lld45"
Mar 7 01:16:41.245531 kubelet[3191]: E0307 01:16:41.244797 3191 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-lld45_kube-system(e240c855-c8b1-420d-93db-8b8e45e00b2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-lld45_kube-system(e240c855-c8b1-420d-93db-8b8e45e00b2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-lld45" podUID="e240c855-c8b1-420d-93db-8b8e45e00b2c"
Mar 7 01:16:41.248653 containerd[1963]: time="2026-03-07T01:16:41.248603116Z" level=error msg="Failed to destroy network for sandbox \"c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:16:41.249061 containerd[1963]: time="2026-03-07T01:16:41.248964319Z" level=error msg="encountered an error cleaning up failed sandbox \"c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:16:41.249202 containerd[1963]: time="2026-03-07T01:16:41.249066206Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kdrcv,Uid:9aed3645-1ca9-4273-9dbb-5a5fa746e5c3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:16:41.250048 kubelet[3191]: E0307 01:16:41.249794 3191 log.go:32] "RunPodSandbox from runtime service failed"
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:41.250048 kubelet[3191]: E0307 01:16:41.249851 3191 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kdrcv" Mar 7 01:16:41.250048 kubelet[3191]: E0307 01:16:41.249900 3191 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kdrcv" Mar 7 01:16:41.250245 kubelet[3191]: E0307 01:16:41.249962 3191 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kdrcv_calico-system(9aed3645-1ca9-4273-9dbb-5a5fa746e5c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kdrcv_calico-system(9aed3645-1ca9-4273-9dbb-5a5fa746e5c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/csi-node-driver-kdrcv" podUID="9aed3645-1ca9-4273-9dbb-5a5fa746e5c3" Mar 7 01:16:41.633093 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2-shm.mount: Deactivated successfully. Mar 7 01:16:41.633304 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff-shm.mount: Deactivated successfully. Mar 7 01:16:41.633406 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f-shm.mount: Deactivated successfully. Mar 7 01:16:41.633498 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec-shm.mount: Deactivated successfully. Mar 7 01:16:41.633584 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc-shm.mount: Deactivated successfully. Mar 7 01:16:41.633674 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e-shm.mount: Deactivated successfully. Mar 7 01:16:41.633805 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9-shm.mount: Deactivated successfully. Mar 7 01:16:41.633888 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532-shm.mount: Deactivated successfully. 
Mar 7 01:16:41.769303 kubelet[3191]: I0307 01:16:41.768657 3191 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff" Mar 7 01:16:41.813709 kubelet[3191]: I0307 01:16:41.812184 3191 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2" Mar 7 01:16:41.830754 kubelet[3191]: I0307 01:16:41.829211 3191 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mnlx6" podStartSLOduration=4.581302453 podStartE2EDuration="27.829187684s" podCreationTimestamp="2026-03-07 01:16:14 +0000 UTC" firstStartedPulling="2026-03-07 01:16:15.349594511 +0000 UTC m=+25.050660836" lastFinishedPulling="2026-03-07 01:16:38.597479734 +0000 UTC m=+48.298546067" observedRunningTime="2026-03-07 01:16:41.82806103 +0000 UTC m=+51.529127381" watchObservedRunningTime="2026-03-07 01:16:41.829187684 +0000 UTC m=+51.530254034" Mar 7 01:16:41.881935 containerd[1963]: time="2026-03-07T01:16:41.881222933Z" level=info msg="StopPodSandbox for \"c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2\"" Mar 7 01:16:41.881935 containerd[1963]: time="2026-03-07T01:16:41.881488763Z" level=info msg="Ensure that sandbox c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2 in task-service has been cleanup successfully" Mar 7 01:16:41.883562 containerd[1963]: time="2026-03-07T01:16:41.883450949Z" level=info msg="StopPodSandbox for \"38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff\"" Mar 7 01:16:41.884599 containerd[1963]: time="2026-03-07T01:16:41.884165582Z" level=info msg="Ensure that sandbox 38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff in task-service has been cleanup successfully" Mar 7 01:16:41.908304 kubelet[3191]: I0307 01:16:41.905877 3191 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532" Mar 7 01:16:41.910553 containerd[1963]: time="2026-03-07T01:16:41.909091366Z" level=info msg="StopPodSandbox for \"3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532\"" Mar 7 01:16:41.910553 containerd[1963]: time="2026-03-07T01:16:41.909749676Z" level=info msg="Ensure that sandbox 3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532 in task-service has been cleanup successfully" Mar 7 01:16:41.921043 kubelet[3191]: I0307 01:16:41.918915 3191 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f" Mar 7 01:16:41.926924 containerd[1963]: time="2026-03-07T01:16:41.924761296Z" level=info msg="StopPodSandbox for \"edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f\"" Mar 7 01:16:41.926924 containerd[1963]: time="2026-03-07T01:16:41.925008574Z" level=info msg="Ensure that sandbox edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f in task-service has been cleanup successfully" Mar 7 01:16:41.938832 kubelet[3191]: I0307 01:16:41.938798 3191 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9" Mar 7 01:16:41.943183 containerd[1963]: time="2026-03-07T01:16:41.943141232Z" level=info msg="StopPodSandbox for \"00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9\"" Mar 7 01:16:41.943871 containerd[1963]: time="2026-03-07T01:16:41.943824117Z" level=info msg="Ensure that sandbox 00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9 in task-service has been cleanup successfully" Mar 7 01:16:41.973466 kubelet[3191]: I0307 01:16:41.973433 3191 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec" Mar 7 01:16:41.984449 containerd[1963]: 
time="2026-03-07T01:16:41.984399016Z" level=info msg="StopPodSandbox for \"0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec\"" Mar 7 01:16:41.985948 containerd[1963]: time="2026-03-07T01:16:41.985897414Z" level=info msg="Ensure that sandbox 0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec in task-service has been cleanup successfully" Mar 7 01:16:41.999910 kubelet[3191]: I0307 01:16:41.999879 3191 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc" Mar 7 01:16:42.001001 containerd[1963]: time="2026-03-07T01:16:42.000951613Z" level=info msg="StopPodSandbox for \"7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc\"" Mar 7 01:16:42.001212 containerd[1963]: time="2026-03-07T01:16:42.001180562Z" level=info msg="Ensure that sandbox 7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc in task-service has been cleanup successfully" Mar 7 01:16:42.002647 kubelet[3191]: I0307 01:16:42.002621 3191 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e" Mar 7 01:16:42.004267 containerd[1963]: time="2026-03-07T01:16:42.004223045Z" level=info msg="StopPodSandbox for \"dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e\"" Mar 7 01:16:42.004861 containerd[1963]: time="2026-03-07T01:16:42.004826615Z" level=info msg="Ensure that sandbox dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e in task-service has been cleanup successfully" Mar 7 01:16:42.060101 containerd[1963]: time="2026-03-07T01:16:42.060024832Z" level=error msg="StopPodSandbox for \"c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2\" failed" error="failed to destroy network for sandbox \"c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:42.061333 kubelet[3191]: E0307 01:16:42.061261 3191 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2" Mar 7 01:16:42.061479 kubelet[3191]: E0307 01:16:42.061385 3191 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2"} Mar 7 01:16:42.061531 kubelet[3191]: E0307 01:16:42.061491 3191 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9aed3645-1ca9-4273-9dbb-5a5fa746e5c3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 01:16:42.061630 kubelet[3191]: E0307 01:16:42.061560 3191 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9aed3645-1ca9-4273-9dbb-5a5fa746e5c3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-kdrcv" podUID="9aed3645-1ca9-4273-9dbb-5a5fa746e5c3" Mar 7 01:16:42.073672 containerd[1963]: time="2026-03-07T01:16:42.073618615Z" level=error msg="StopPodSandbox for \"38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff\" failed" error="failed to destroy network for sandbox \"38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:42.074102 kubelet[3191]: E0307 01:16:42.074056 3191 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff" Mar 7 01:16:42.074301 kubelet[3191]: E0307 01:16:42.074278 3191 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff"} Mar 7 01:16:42.074481 kubelet[3191]: E0307 01:16:42.074396 3191 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"609357eb-6103-440d-a377-358699c55caf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 01:16:42.074481 kubelet[3191]: E0307 01:16:42.074448 3191 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"KillPodSandbox\" for \"609357eb-6103-440d-a377-358699c55caf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7547cf959d-mfpmd" podUID="609357eb-6103-440d-a377-358699c55caf" Mar 7 01:16:42.118765 systemd[1]: run-containerd-runc-k8s.io-a4199d562b530fe4cb1c762d9f840483b96521dcfed96d16f82467d1d22212bd-runc.AZo85o.mount: Deactivated successfully. Mar 7 01:16:42.191890 containerd[1963]: time="2026-03-07T01:16:42.191742734Z" level=error msg="StopPodSandbox for \"edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f\" failed" error="failed to destroy network for sandbox \"edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:42.192430 kubelet[3191]: E0307 01:16:42.192386 3191 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f" Mar 7 01:16:42.192624 kubelet[3191]: E0307 01:16:42.192606 3191 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f"} Mar 7 01:16:42.192725 kubelet[3191]: E0307 
01:16:42.192710 3191 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f74be19c-3e5a-4ec1-971f-5534f5ca8d72\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 01:16:42.192887 kubelet[3191]: E0307 01:16:42.192861 3191 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f74be19c-3e5a-4ec1-971f-5534f5ca8d72\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-hx6bb" podUID="f74be19c-3e5a-4ec1-971f-5534f5ca8d72" Mar 7 01:16:42.200954 containerd[1963]: time="2026-03-07T01:16:42.200901223Z" level=error msg="StopPodSandbox for \"0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec\" failed" error="failed to destroy network for sandbox \"0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:42.201456 kubelet[3191]: E0307 01:16:42.201409 3191 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec" Mar 7 01:16:42.202022 kubelet[3191]: E0307 01:16:42.201972 3191 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec"} Mar 7 01:16:42.203257 kubelet[3191]: E0307 01:16:42.203162 3191 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d317c1e8-3726-4466-b652-a1bd0a0fc939\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 01:16:42.203257 kubelet[3191]: E0307 01:16:42.203209 3191 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d317c1e8-3726-4466-b652-a1bd0a0fc939\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-9r4d8" podUID="d317c1e8-3726-4466-b652-a1bd0a0fc939" Mar 7 01:16:42.218711 containerd[1963]: time="2026-03-07T01:16:42.218651248Z" level=error msg="StopPodSandbox for \"3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532\" failed" error="failed to destroy network for sandbox \"3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:42.219183 kubelet[3191]: E0307 01:16:42.219132 3191 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532" Mar 7 01:16:42.219358 kubelet[3191]: E0307 01:16:42.219337 3191 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532"} Mar 7 01:16:42.219478 kubelet[3191]: E0307 01:16:42.219460 3191 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e240c855-c8b1-420d-93db-8b8e45e00b2c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 01:16:42.219675 kubelet[3191]: E0307 01:16:42.219648 3191 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e240c855-c8b1-420d-93db-8b8e45e00b2c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-674b8bbfcf-lld45" podUID="e240c855-c8b1-420d-93db-8b8e45e00b2c" Mar 7 01:16:42.247395 containerd[1963]: time="2026-03-07T01:16:42.246461584Z" level=error msg="StopPodSandbox for \"00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9\" failed" error="failed to destroy network for sandbox \"00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:42.247878 kubelet[3191]: E0307 01:16:42.247832 3191 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9" Mar 7 01:16:42.248232 kubelet[3191]: E0307 01:16:42.248205 3191 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9"} Mar 7 01:16:42.248420 kubelet[3191]: E0307 01:16:42.248392 3191 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c384657e-2841-43d4-89d4-ff693e5014b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 01:16:42.248861 kubelet[3191]: E0307 01:16:42.248759 3191 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"KillPodSandbox\" for \"c384657e-2841-43d4-89d4-ff693e5014b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-64c7867f4-fd5dm" podUID="c384657e-2841-43d4-89d4-ff693e5014b6" Mar 7 01:16:42.254341 containerd[1963]: time="2026-03-07T01:16:42.254255918Z" level=error msg="StopPodSandbox for \"dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e\" failed" error="failed to destroy network for sandbox \"dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:42.258326 containerd[1963]: time="2026-03-07T01:16:42.257814214Z" level=error msg="StopPodSandbox for \"7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc\" failed" error="failed to destroy network for sandbox \"7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:42.261924 kubelet[3191]: E0307 01:16:42.261621 3191 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc" Mar 7 01:16:42.261924 kubelet[3191]: E0307 01:16:42.261687 3191 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc"} Mar 7 01:16:42.261924 kubelet[3191]: E0307 01:16:42.261729 3191 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c16a83f1-f0b9-4cb8-9fd4-5f908f7fd2c5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 01:16:42.261924 kubelet[3191]: E0307 01:16:42.261763 3191 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c16a83f1-f0b9-4cb8-9fd4-5f908f7fd2c5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-64c7867f4-xrvts" podUID="c16a83f1-f0b9-4cb8-9fd4-5f908f7fd2c5" Mar 7 01:16:42.262338 kubelet[3191]: E0307 01:16:42.258614 3191 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e" Mar 7 01:16:42.262338 kubelet[3191]: E0307 01:16:42.261839 3191 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e"} Mar 7 01:16:42.262338 kubelet[3191]: E0307 01:16:42.261868 3191 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f8ecbf7f-4830-437f-b292-9cd1d51ae57e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 01:16:42.262338 kubelet[3191]: E0307 01:16:42.261896 3191 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f8ecbf7f-4830-437f-b292-9cd1d51ae57e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b98f9b6fc-npvst" podUID="f8ecbf7f-4830-437f-b292-9cd1d51ae57e" Mar 7 01:16:43.006578 containerd[1963]: time="2026-03-07T01:16:43.006220626Z" level=info msg="StopPodSandbox for \"38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff\"" Mar 7 01:16:43.225486 containerd[1963]: 2026-03-07 01:16:43.141 [INFO][4537] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff" Mar 7 01:16:43.225486 containerd[1963]: 2026-03-07 01:16:43.142 [INFO][4537] 
cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff" iface="eth0" netns="/var/run/netns/cni-9a3938da-03a5-00a0-a686-5acf1da5b783" Mar 7 01:16:43.225486 containerd[1963]: 2026-03-07 01:16:43.143 [INFO][4537] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff" iface="eth0" netns="/var/run/netns/cni-9a3938da-03a5-00a0-a686-5acf1da5b783" Mar 7 01:16:43.225486 containerd[1963]: 2026-03-07 01:16:43.144 [INFO][4537] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff" iface="eth0" netns="/var/run/netns/cni-9a3938da-03a5-00a0-a686-5acf1da5b783" Mar 7 01:16:43.225486 containerd[1963]: 2026-03-07 01:16:43.145 [INFO][4537] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff" Mar 7 01:16:43.225486 containerd[1963]: 2026-03-07 01:16:43.145 [INFO][4537] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff" Mar 7 01:16:43.225486 containerd[1963]: 2026-03-07 01:16:43.207 [INFO][4560] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff" HandleID="k8s-pod-network.38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff" Workload="ip--172--31--20--242-k8s-whisker--7547cf959d--mfpmd-eth0" Mar 7 01:16:43.225486 containerd[1963]: 2026-03-07 01:16:43.208 [INFO][4560] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:43.225486 containerd[1963]: 2026-03-07 01:16:43.208 [INFO][4560] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:16:43.225486 containerd[1963]: 2026-03-07 01:16:43.217 [WARNING][4560] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff" HandleID="k8s-pod-network.38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff" Workload="ip--172--31--20--242-k8s-whisker--7547cf959d--mfpmd-eth0" Mar 7 01:16:43.225486 containerd[1963]: 2026-03-07 01:16:43.217 [INFO][4560] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff" HandleID="k8s-pod-network.38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff" Workload="ip--172--31--20--242-k8s-whisker--7547cf959d--mfpmd-eth0" Mar 7 01:16:43.225486 containerd[1963]: 2026-03-07 01:16:43.219 [INFO][4560] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:16:43.225486 containerd[1963]: 2026-03-07 01:16:43.223 [INFO][4537] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff" Mar 7 01:16:43.229279 containerd[1963]: time="2026-03-07T01:16:43.225617672Z" level=info msg="TearDown network for sandbox \"38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff\" successfully" Mar 7 01:16:43.229279 containerd[1963]: time="2026-03-07T01:16:43.225648438Z" level=info msg="StopPodSandbox for \"38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff\" returns successfully" Mar 7 01:16:43.230355 systemd[1]: run-netns-cni\x2d9a3938da\x2d03a5\x2d00a0\x2da686\x2d5acf1da5b783.mount: Deactivated successfully. 
Mar 7 01:16:43.291148 kubelet[3191]: I0307 01:16:43.290856 3191 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/609357eb-6103-440d-a377-358699c55caf-nginx-config\") pod \"609357eb-6103-440d-a377-358699c55caf\" (UID: \"609357eb-6103-440d-a377-358699c55caf\") " Mar 7 01:16:43.291148 kubelet[3191]: I0307 01:16:43.290938 3191 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/609357eb-6103-440d-a377-358699c55caf-whisker-backend-key-pair\") pod \"609357eb-6103-440d-a377-358699c55caf\" (UID: \"609357eb-6103-440d-a377-358699c55caf\") " Mar 7 01:16:43.293128 kubelet[3191]: I0307 01:16:43.292419 3191 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/609357eb-6103-440d-a377-358699c55caf-whisker-ca-bundle\") pod \"609357eb-6103-440d-a377-358699c55caf\" (UID: \"609357eb-6103-440d-a377-358699c55caf\") " Mar 7 01:16:43.293128 kubelet[3191]: I0307 01:16:43.292494 3191 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62xrm\" (UniqueName: \"kubernetes.io/projected/609357eb-6103-440d-a377-358699c55caf-kube-api-access-62xrm\") pod \"609357eb-6103-440d-a377-358699c55caf\" (UID: \"609357eb-6103-440d-a377-358699c55caf\") " Mar 7 01:16:43.308070 systemd[1]: var-lib-kubelet-pods-609357eb\x2d6103\x2d440d\x2da377\x2d358699c55caf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d62xrm.mount: Deactivated successfully. Mar 7 01:16:43.319274 systemd[1]: var-lib-kubelet-pods-609357eb\x2d6103\x2d440d\x2da377\x2d358699c55caf-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Mar 7 01:16:43.321206 kubelet[3191]: I0307 01:16:43.319083 3191 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/609357eb-6103-440d-a377-358699c55caf-kube-api-access-62xrm" (OuterVolumeSpecName: "kube-api-access-62xrm") pod "609357eb-6103-440d-a377-358699c55caf" (UID: "609357eb-6103-440d-a377-358699c55caf"). InnerVolumeSpecName "kube-api-access-62xrm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 7 01:16:43.321348 kubelet[3191]: I0307 01:16:43.321248 3191 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/609357eb-6103-440d-a377-358699c55caf-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "609357eb-6103-440d-a377-358699c55caf" (UID: "609357eb-6103-440d-a377-358699c55caf"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 01:16:43.321721 kubelet[3191]: I0307 01:16:43.321684 3191 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/609357eb-6103-440d-a377-358699c55caf-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "609357eb-6103-440d-a377-358699c55caf" (UID: "609357eb-6103-440d-a377-358699c55caf"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 01:16:43.322570 kubelet[3191]: I0307 01:16:43.317783 3191 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/609357eb-6103-440d-a377-358699c55caf-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "609357eb-6103-440d-a377-358699c55caf" (UID: "609357eb-6103-440d-a377-358699c55caf"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 7 01:16:43.392929 kubelet[3191]: I0307 01:16:43.392853 3191 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/609357eb-6103-440d-a377-358699c55caf-whisker-backend-key-pair\") on node \"ip-172-31-20-242\" DevicePath \"\"" Mar 7 01:16:43.392929 kubelet[3191]: I0307 01:16:43.392919 3191 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/609357eb-6103-440d-a377-358699c55caf-whisker-ca-bundle\") on node \"ip-172-31-20-242\" DevicePath \"\"" Mar 7 01:16:43.392929 kubelet[3191]: I0307 01:16:43.392934 3191 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-62xrm\" (UniqueName: \"kubernetes.io/projected/609357eb-6103-440d-a377-358699c55caf-kube-api-access-62xrm\") on node \"ip-172-31-20-242\" DevicePath \"\"" Mar 7 01:16:43.392929 kubelet[3191]: I0307 01:16:43.392949 3191 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/609357eb-6103-440d-a377-358699c55caf-nginx-config\") on node \"ip-172-31-20-242\" DevicePath \"\"" Mar 7 01:16:44.035991 systemd[1]: Removed slice kubepods-besteffort-pod609357eb_6103_440d_a377_358699c55caf.slice - libcontainer container kubepods-besteffort-pod609357eb_6103_440d_a377_358699c55caf.slice. Mar 7 01:16:44.231843 systemd[1]: Created slice kubepods-besteffort-pod9f212b6b_c70e_4acf_b6ef_3a0173cffd57.slice - libcontainer container kubepods-besteffort-pod9f212b6b_c70e_4acf_b6ef_3a0173cffd57.slice. 
Mar 7 01:16:44.301353 kubelet[3191]: I0307 01:16:44.301213 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh42p\" (UniqueName: \"kubernetes.io/projected/9f212b6b-c70e-4acf-b6ef-3a0173cffd57-kube-api-access-rh42p\") pod \"whisker-55b5cdb6f5-nz5mr\" (UID: \"9f212b6b-c70e-4acf-b6ef-3a0173cffd57\") " pod="calico-system/whisker-55b5cdb6f5-nz5mr" Mar 7 01:16:44.301353 kubelet[3191]: I0307 01:16:44.301270 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9f212b6b-c70e-4acf-b6ef-3a0173cffd57-whisker-backend-key-pair\") pod \"whisker-55b5cdb6f5-nz5mr\" (UID: \"9f212b6b-c70e-4acf-b6ef-3a0173cffd57\") " pod="calico-system/whisker-55b5cdb6f5-nz5mr" Mar 7 01:16:44.301353 kubelet[3191]: I0307 01:16:44.301302 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f212b6b-c70e-4acf-b6ef-3a0173cffd57-whisker-ca-bundle\") pod \"whisker-55b5cdb6f5-nz5mr\" (UID: \"9f212b6b-c70e-4acf-b6ef-3a0173cffd57\") " pod="calico-system/whisker-55b5cdb6f5-nz5mr" Mar 7 01:16:44.301353 kubelet[3191]: I0307 01:16:44.301329 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/9f212b6b-c70e-4acf-b6ef-3a0173cffd57-nginx-config\") pod \"whisker-55b5cdb6f5-nz5mr\" (UID: \"9f212b6b-c70e-4acf-b6ef-3a0173cffd57\") " pod="calico-system/whisker-55b5cdb6f5-nz5mr" Mar 7 01:16:44.480905 kubelet[3191]: I0307 01:16:44.480850 3191 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="609357eb-6103-440d-a377-358699c55caf" path="/var/lib/kubelet/pods/609357eb-6103-440d-a377-358699c55caf/volumes" Mar 7 01:16:44.546580 containerd[1963]: time="2026-03-07T01:16:44.546519080Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-55b5cdb6f5-nz5mr,Uid:9f212b6b-c70e-4acf-b6ef-3a0173cffd57,Namespace:calico-system,Attempt:0,}" Mar 7 01:16:44.738005 kernel: calico-node[4646]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 7 01:16:44.740576 kernel: hrtimer: interrupt took 1012645 ns Mar 7 01:16:45.927022 (udev-worker)[4759]: Network interface NamePolicy= disabled on kernel command line. Mar 7 01:16:45.927536 systemd-networkd[1862]: vxlan.calico: Link UP Mar 7 01:16:45.927542 systemd-networkd[1862]: vxlan.calico: Gained carrier Mar 7 01:16:46.011464 (udev-worker)[4776]: Network interface NamePolicy= disabled on kernel command line. Mar 7 01:16:46.026096 (udev-worker)[4770]: Network interface NamePolicy= disabled on kernel command line. Mar 7 01:16:46.036307 systemd-networkd[1862]: cali35f8e709e57: Link UP Mar 7 01:16:46.039497 systemd-networkd[1862]: cali35f8e709e57: Gained carrier Mar 7 01:16:46.077723 containerd[1963]: 2026-03-07 01:16:44.638 [INFO][4710] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--242-k8s-whisker--55b5cdb6f5--nz5mr-eth0 whisker-55b5cdb6f5- calico-system 9f212b6b-c70e-4acf-b6ef-3a0173cffd57 968 0 2026-03-07 01:16:44 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:55b5cdb6f5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-20-242 whisker-55b5cdb6f5-nz5mr eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali35f8e709e57 [] [] }} ContainerID="b7c033e6cc9cbce87f03540d959dccafae756c40c5a4f76201272b1fe83e2c47" Namespace="calico-system" Pod="whisker-55b5cdb6f5-nz5mr" WorkloadEndpoint="ip--172--31--20--242-k8s-whisker--55b5cdb6f5--nz5mr-" Mar 7 01:16:46.077723 containerd[1963]: 2026-03-07 01:16:44.639 [INFO][4710] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="b7c033e6cc9cbce87f03540d959dccafae756c40c5a4f76201272b1fe83e2c47" Namespace="calico-system" Pod="whisker-55b5cdb6f5-nz5mr" WorkloadEndpoint="ip--172--31--20--242-k8s-whisker--55b5cdb6f5--nz5mr-eth0" Mar 7 01:16:46.077723 containerd[1963]: 2026-03-07 01:16:44.701 [INFO][4729] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b7c033e6cc9cbce87f03540d959dccafae756c40c5a4f76201272b1fe83e2c47" HandleID="k8s-pod-network.b7c033e6cc9cbce87f03540d959dccafae756c40c5a4f76201272b1fe83e2c47" Workload="ip--172--31--20--242-k8s-whisker--55b5cdb6f5--nz5mr-eth0" Mar 7 01:16:46.077723 containerd[1963]: 2026-03-07 01:16:44.717 [INFO][4729] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b7c033e6cc9cbce87f03540d959dccafae756c40c5a4f76201272b1fe83e2c47" HandleID="k8s-pod-network.b7c033e6cc9cbce87f03540d959dccafae756c40c5a4f76201272b1fe83e2c47" Workload="ip--172--31--20--242-k8s-whisker--55b5cdb6f5--nz5mr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277d90), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-242", "pod":"whisker-55b5cdb6f5-nz5mr", "timestamp":"2026-03-07 01:16:44.701325272 +0000 UTC"}, Hostname:"ip-172-31-20-242", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000115080)} Mar 7 01:16:46.077723 containerd[1963]: 2026-03-07 01:16:44.717 [INFO][4729] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:46.077723 containerd[1963]: 2026-03-07 01:16:44.718 [INFO][4729] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:16:46.077723 containerd[1963]: 2026-03-07 01:16:44.718 [INFO][4729] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-242' Mar 7 01:16:46.077723 containerd[1963]: 2026-03-07 01:16:44.724 [INFO][4729] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b7c033e6cc9cbce87f03540d959dccafae756c40c5a4f76201272b1fe83e2c47" host="ip-172-31-20-242" Mar 7 01:16:46.077723 containerd[1963]: 2026-03-07 01:16:44.738 [INFO][4729] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-20-242" Mar 7 01:16:46.077723 containerd[1963]: 2026-03-07 01:16:44.750 [INFO][4729] ipam/ipam.go 526: Trying affinity for 192.168.81.192/26 host="ip-172-31-20-242" Mar 7 01:16:46.077723 containerd[1963]: 2026-03-07 01:16:44.753 [INFO][4729] ipam/ipam.go 160: Attempting to load block cidr=192.168.81.192/26 host="ip-172-31-20-242" Mar 7 01:16:46.077723 containerd[1963]: 2026-03-07 01:16:44.759 [INFO][4729] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.81.192/26 host="ip-172-31-20-242" Mar 7 01:16:46.077723 containerd[1963]: 2026-03-07 01:16:44.760 [INFO][4729] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.81.192/26 handle="k8s-pod-network.b7c033e6cc9cbce87f03540d959dccafae756c40c5a4f76201272b1fe83e2c47" host="ip-172-31-20-242" Mar 7 01:16:46.077723 containerd[1963]: 2026-03-07 01:16:45.911 [INFO][4729] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b7c033e6cc9cbce87f03540d959dccafae756c40c5a4f76201272b1fe83e2c47 Mar 7 01:16:46.077723 containerd[1963]: 2026-03-07 01:16:45.950 [INFO][4729] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.81.192/26 handle="k8s-pod-network.b7c033e6cc9cbce87f03540d959dccafae756c40c5a4f76201272b1fe83e2c47" host="ip-172-31-20-242" Mar 7 01:16:46.077723 containerd[1963]: 2026-03-07 01:16:45.970 [INFO][4729] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.81.193/26] block=192.168.81.192/26 
handle="k8s-pod-network.b7c033e6cc9cbce87f03540d959dccafae756c40c5a4f76201272b1fe83e2c47" host="ip-172-31-20-242" Mar 7 01:16:46.077723 containerd[1963]: 2026-03-07 01:16:45.970 [INFO][4729] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.81.193/26] handle="k8s-pod-network.b7c033e6cc9cbce87f03540d959dccafae756c40c5a4f76201272b1fe83e2c47" host="ip-172-31-20-242" Mar 7 01:16:46.077723 containerd[1963]: 2026-03-07 01:16:45.970 [INFO][4729] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:16:46.077723 containerd[1963]: 2026-03-07 01:16:45.970 [INFO][4729] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.81.193/26] IPv6=[] ContainerID="b7c033e6cc9cbce87f03540d959dccafae756c40c5a4f76201272b1fe83e2c47" HandleID="k8s-pod-network.b7c033e6cc9cbce87f03540d959dccafae756c40c5a4f76201272b1fe83e2c47" Workload="ip--172--31--20--242-k8s-whisker--55b5cdb6f5--nz5mr-eth0" Mar 7 01:16:46.079931 containerd[1963]: 2026-03-07 01:16:45.985 [INFO][4710] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b7c033e6cc9cbce87f03540d959dccafae756c40c5a4f76201272b1fe83e2c47" Namespace="calico-system" Pod="whisker-55b5cdb6f5-nz5mr" WorkloadEndpoint="ip--172--31--20--242-k8s-whisker--55b5cdb6f5--nz5mr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--242-k8s-whisker--55b5cdb6f5--nz5mr-eth0", GenerateName:"whisker-55b5cdb6f5-", Namespace:"calico-system", SelfLink:"", UID:"9f212b6b-c70e-4acf-b6ef-3a0173cffd57", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"55b5cdb6f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-242", ContainerID:"", Pod:"whisker-55b5cdb6f5-nz5mr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.81.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali35f8e709e57", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:16:46.079931 containerd[1963]: 2026-03-07 01:16:45.985 [INFO][4710] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.193/32] ContainerID="b7c033e6cc9cbce87f03540d959dccafae756c40c5a4f76201272b1fe83e2c47" Namespace="calico-system" Pod="whisker-55b5cdb6f5-nz5mr" WorkloadEndpoint="ip--172--31--20--242-k8s-whisker--55b5cdb6f5--nz5mr-eth0" Mar 7 01:16:46.079931 containerd[1963]: 2026-03-07 01:16:45.985 [INFO][4710] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali35f8e709e57 ContainerID="b7c033e6cc9cbce87f03540d959dccafae756c40c5a4f76201272b1fe83e2c47" Namespace="calico-system" Pod="whisker-55b5cdb6f5-nz5mr" WorkloadEndpoint="ip--172--31--20--242-k8s-whisker--55b5cdb6f5--nz5mr-eth0" Mar 7 01:16:46.079931 containerd[1963]: 2026-03-07 01:16:46.040 [INFO][4710] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b7c033e6cc9cbce87f03540d959dccafae756c40c5a4f76201272b1fe83e2c47" Namespace="calico-system" Pod="whisker-55b5cdb6f5-nz5mr" WorkloadEndpoint="ip--172--31--20--242-k8s-whisker--55b5cdb6f5--nz5mr-eth0" Mar 7 01:16:46.079931 containerd[1963]: 2026-03-07 01:16:46.041 [INFO][4710] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b7c033e6cc9cbce87f03540d959dccafae756c40c5a4f76201272b1fe83e2c47" 
Namespace="calico-system" Pod="whisker-55b5cdb6f5-nz5mr" WorkloadEndpoint="ip--172--31--20--242-k8s-whisker--55b5cdb6f5--nz5mr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--242-k8s-whisker--55b5cdb6f5--nz5mr-eth0", GenerateName:"whisker-55b5cdb6f5-", Namespace:"calico-system", SelfLink:"", UID:"9f212b6b-c70e-4acf-b6ef-3a0173cffd57", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"55b5cdb6f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-242", ContainerID:"b7c033e6cc9cbce87f03540d959dccafae756c40c5a4f76201272b1fe83e2c47", Pod:"whisker-55b5cdb6f5-nz5mr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.81.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali35f8e709e57", MAC:"22:16:f5:1a:c9:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:16:46.079931 containerd[1963]: 2026-03-07 01:16:46.068 [INFO][4710] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b7c033e6cc9cbce87f03540d959dccafae756c40c5a4f76201272b1fe83e2c47" Namespace="calico-system" Pod="whisker-55b5cdb6f5-nz5mr" WorkloadEndpoint="ip--172--31--20--242-k8s-whisker--55b5cdb6f5--nz5mr-eth0" Mar 7 01:16:46.623926 
containerd[1963]: time="2026-03-07T01:16:46.623453047Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:16:46.623926 containerd[1963]: time="2026-03-07T01:16:46.623535863Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:16:46.623926 containerd[1963]: time="2026-03-07T01:16:46.623570806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:46.625794 containerd[1963]: time="2026-03-07T01:16:46.625622531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:46.760531 systemd[1]: Started cri-containerd-b7c033e6cc9cbce87f03540d959dccafae756c40c5a4f76201272b1fe83e2c47.scope - libcontainer container b7c033e6cc9cbce87f03540d959dccafae756c40c5a4f76201272b1fe83e2c47. Mar 7 01:16:46.866056 containerd[1963]: time="2026-03-07T01:16:46.863598928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55b5cdb6f5-nz5mr,Uid:9f212b6b-c70e-4acf-b6ef-3a0173cffd57,Namespace:calico-system,Attempt:0,} returns sandbox id \"b7c033e6cc9cbce87f03540d959dccafae756c40c5a4f76201272b1fe83e2c47\"" Mar 7 01:16:46.933477 containerd[1963]: time="2026-03-07T01:16:46.933420181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 7 01:16:47.791170 systemd-networkd[1862]: cali35f8e709e57: Gained IPv6LL Mar 7 01:16:47.855593 systemd-networkd[1862]: vxlan.calico: Gained IPv6LL Mar 7 01:16:48.605721 systemd[1]: Started sshd@7-172.31.20.242:22-68.220.241.50:38516.service - OpenSSH per-connection server daemon (68.220.241.50:38516). 
Mar 7 01:16:48.659578 containerd[1963]: time="2026-03-07T01:16:48.659511190Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 7 01:16:48.663728 containerd[1963]: time="2026-03-07T01:16:48.663238418Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:48.672516 containerd[1963]: time="2026-03-07T01:16:48.672469773Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:48.673622 containerd[1963]: time="2026-03-07T01:16:48.673468721Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.739974451s" Mar 7 01:16:48.673622 containerd[1963]: time="2026-03-07T01:16:48.673512167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 7 01:16:48.674255 containerd[1963]: time="2026-03-07T01:16:48.674154803Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:48.682213 containerd[1963]: time="2026-03-07T01:16:48.682151822Z" level=info msg="CreateContainer within sandbox \"b7c033e6cc9cbce87f03540d959dccafae756c40c5a4f76201272b1fe83e2c47\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 7 01:16:48.725253 containerd[1963]: time="2026-03-07T01:16:48.725190514Z" level=info 
msg="CreateContainer within sandbox \"b7c033e6cc9cbce87f03540d959dccafae756c40c5a4f76201272b1fe83e2c47\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"77141b768cc758661982b52625b27ff3247579999eb6a42310720737ed0bd42c\"" Mar 7 01:16:48.726632 containerd[1963]: time="2026-03-07T01:16:48.726426458Z" level=info msg="StartContainer for \"77141b768cc758661982b52625b27ff3247579999eb6a42310720737ed0bd42c\"" Mar 7 01:16:48.783227 systemd[1]: Started cri-containerd-77141b768cc758661982b52625b27ff3247579999eb6a42310720737ed0bd42c.scope - libcontainer container 77141b768cc758661982b52625b27ff3247579999eb6a42310720737ed0bd42c. Mar 7 01:16:48.882679 containerd[1963]: time="2026-03-07T01:16:48.882635808Z" level=info msg="StartContainer for \"77141b768cc758661982b52625b27ff3247579999eb6a42310720737ed0bd42c\" returns successfully" Mar 7 01:16:48.886410 containerd[1963]: time="2026-03-07T01:16:48.886082032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 7 01:16:49.152906 sshd[4905]: Accepted publickey for core from 68.220.241.50 port 38516 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:16:49.156801 sshd[4905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:16:49.164297 systemd-logind[1957]: New session 8 of user core. Mar 7 01:16:49.171201 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 7 01:16:50.306860 sshd[4905]: pam_unix(sshd:session): session closed for user core Mar 7 01:16:50.322152 systemd[1]: sshd@7-172.31.20.242:22-68.220.241.50:38516.service: Deactivated successfully. Mar 7 01:16:50.326662 systemd[1]: session-8.scope: Deactivated successfully. Mar 7 01:16:50.328872 systemd-logind[1957]: Session 8 logged out. Waiting for processes to exit. 
Mar 7 01:16:50.329871 ntpd[1950]: Listen normally on 7 vxlan.calico 192.168.81.192:123 Mar 7 01:16:50.331282 ntpd[1950]: 7 Mar 01:16:50 ntpd[1950]: Listen normally on 7 vxlan.calico 192.168.81.192:123 Mar 7 01:16:50.331282 ntpd[1950]: 7 Mar 01:16:50 ntpd[1950]: Listen normally on 8 vxlan.calico [fe80::64a5:f7ff:fe3a:1c55%4]:123 Mar 7 01:16:50.331282 ntpd[1950]: 7 Mar 01:16:50 ntpd[1950]: Listen normally on 9 cali35f8e709e57 [fe80::ecee:eeff:feee:eeee%7]:123 Mar 7 01:16:50.329957 ntpd[1950]: Listen normally on 8 vxlan.calico [fe80::64a5:f7ff:fe3a:1c55%4]:123 Mar 7 01:16:50.330916 ntpd[1950]: Listen normally on 9 cali35f8e709e57 [fe80::ecee:eeff:feee:eeee%7]:123 Mar 7 01:16:50.332657 systemd-logind[1957]: Removed session 8. Mar 7 01:16:50.750716 containerd[1963]: time="2026-03-07T01:16:50.749706290Z" level=info msg="StopPodSandbox for \"38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff\"" Mar 7 01:16:51.001014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2682113528.mount: Deactivated successfully. Mar 7 01:16:51.020810 containerd[1963]: 2026-03-07 01:16:50.893 [WARNING][4969] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff" WorkloadEndpoint="ip--172--31--20--242-k8s-whisker--7547cf959d--mfpmd-eth0" Mar 7 01:16:51.020810 containerd[1963]: 2026-03-07 01:16:50.893 [INFO][4969] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff" Mar 7 01:16:51.020810 containerd[1963]: 2026-03-07 01:16:50.893 [INFO][4969] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff" iface="eth0" netns="" Mar 7 01:16:51.020810 containerd[1963]: 2026-03-07 01:16:50.893 [INFO][4969] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff" Mar 7 01:16:51.020810 containerd[1963]: 2026-03-07 01:16:50.893 [INFO][4969] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff" Mar 7 01:16:51.020810 containerd[1963]: 2026-03-07 01:16:50.991 [INFO][4976] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff" HandleID="k8s-pod-network.38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff" Workload="ip--172--31--20--242-k8s-whisker--7547cf959d--mfpmd-eth0" Mar 7 01:16:51.020810 containerd[1963]: 2026-03-07 01:16:50.991 [INFO][4976] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:51.020810 containerd[1963]: 2026-03-07 01:16:50.991 [INFO][4976] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:16:51.020810 containerd[1963]: 2026-03-07 01:16:51.007 [WARNING][4976] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff" HandleID="k8s-pod-network.38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff" Workload="ip--172--31--20--242-k8s-whisker--7547cf959d--mfpmd-eth0" Mar 7 01:16:51.020810 containerd[1963]: 2026-03-07 01:16:51.007 [INFO][4976] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff" HandleID="k8s-pod-network.38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff" Workload="ip--172--31--20--242-k8s-whisker--7547cf959d--mfpmd-eth0" Mar 7 01:16:51.020810 containerd[1963]: 2026-03-07 01:16:51.013 [INFO][4976] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:16:51.020810 containerd[1963]: 2026-03-07 01:16:51.017 [INFO][4969] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff" Mar 7 01:16:51.021420 containerd[1963]: time="2026-03-07T01:16:51.021364129Z" level=info msg="TearDown network for sandbox \"38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff\" successfully" Mar 7 01:16:51.021420 containerd[1963]: time="2026-03-07T01:16:51.021418485Z" level=info msg="StopPodSandbox for \"38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff\" returns successfully" Mar 7 01:16:51.022212 containerd[1963]: time="2026-03-07T01:16:51.022172881Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:51.023396 containerd[1963]: time="2026-03-07T01:16:51.023320085Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 7 01:16:51.025039 containerd[1963]: time="2026-03-07T01:16:51.024833078Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:51.033828 containerd[1963]: time="2026-03-07T01:16:51.033758649Z" level=info msg="RemovePodSandbox for \"38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff\"" Mar 7 01:16:51.033828 containerd[1963]: time="2026-03-07T01:16:51.033810518Z" level=info msg="Forcibly stopping sandbox \"38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff\"" Mar 7 01:16:51.035046 containerd[1963]: time="2026-03-07T01:16:51.034937186Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:51.036507 containerd[1963]: time="2026-03-07T01:16:51.036456989Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.150333926s" Mar 7 01:16:51.036626 containerd[1963]: time="2026-03-07T01:16:51.036511265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 7 01:16:51.042899 containerd[1963]: time="2026-03-07T01:16:51.042859086Z" level=info msg="CreateContainer within sandbox \"b7c033e6cc9cbce87f03540d959dccafae756c40c5a4f76201272b1fe83e2c47\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 7 01:16:51.080564 containerd[1963]: time="2026-03-07T01:16:51.080473547Z" level=info msg="CreateContainer within sandbox \"b7c033e6cc9cbce87f03540d959dccafae756c40c5a4f76201272b1fe83e2c47\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id 
\"810655f293af0af7779879ffa70f471a8e9a293081a842850d5fde66a7accc73\"" Mar 7 01:16:51.083290 containerd[1963]: time="2026-03-07T01:16:51.083228479Z" level=info msg="StartContainer for \"810655f293af0af7779879ffa70f471a8e9a293081a842850d5fde66a7accc73\"" Mar 7 01:16:51.158246 systemd[1]: Started cri-containerd-810655f293af0af7779879ffa70f471a8e9a293081a842850d5fde66a7accc73.scope - libcontainer container 810655f293af0af7779879ffa70f471a8e9a293081a842850d5fde66a7accc73. Mar 7 01:16:51.197108 containerd[1963]: 2026-03-07 01:16:51.102 [WARNING][4994] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff" WorkloadEndpoint="ip--172--31--20--242-k8s-whisker--7547cf959d--mfpmd-eth0" Mar 7 01:16:51.197108 containerd[1963]: 2026-03-07 01:16:51.102 [INFO][4994] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff" Mar 7 01:16:51.197108 containerd[1963]: 2026-03-07 01:16:51.102 [INFO][4994] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff" iface="eth0" netns="" Mar 7 01:16:51.197108 containerd[1963]: 2026-03-07 01:16:51.102 [INFO][4994] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff" Mar 7 01:16:51.197108 containerd[1963]: 2026-03-07 01:16:51.102 [INFO][4994] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff" Mar 7 01:16:51.197108 containerd[1963]: 2026-03-07 01:16:51.174 [INFO][5004] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff" HandleID="k8s-pod-network.38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff" Workload="ip--172--31--20--242-k8s-whisker--7547cf959d--mfpmd-eth0" Mar 7 01:16:51.197108 containerd[1963]: 2026-03-07 01:16:51.174 [INFO][5004] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:51.197108 containerd[1963]: 2026-03-07 01:16:51.174 [INFO][5004] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:16:51.197108 containerd[1963]: 2026-03-07 01:16:51.186 [WARNING][5004] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff" HandleID="k8s-pod-network.38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff" Workload="ip--172--31--20--242-k8s-whisker--7547cf959d--mfpmd-eth0" Mar 7 01:16:51.197108 containerd[1963]: 2026-03-07 01:16:51.186 [INFO][5004] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff" HandleID="k8s-pod-network.38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff" Workload="ip--172--31--20--242-k8s-whisker--7547cf959d--mfpmd-eth0" Mar 7 01:16:51.197108 containerd[1963]: 2026-03-07 01:16:51.188 [INFO][5004] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:16:51.197108 containerd[1963]: 2026-03-07 01:16:51.193 [INFO][4994] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff" Mar 7 01:16:51.197108 containerd[1963]: time="2026-03-07T01:16:51.196544096Z" level=info msg="TearDown network for sandbox \"38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff\" successfully" Mar 7 01:16:51.211965 containerd[1963]: time="2026-03-07T01:16:51.211728410Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:16:51.211965 containerd[1963]: time="2026-03-07T01:16:51.211854189Z" level=info msg="RemovePodSandbox \"38c75b6233e9fd4b1217b76a9c2fe432e23131442bb64865d96c14ebdeb38cff\" returns successfully" Mar 7 01:16:51.240627 containerd[1963]: time="2026-03-07T01:16:51.240569592Z" level=info msg="StartContainer for \"810655f293af0af7779879ffa70f471a8e9a293081a842850d5fde66a7accc73\" returns successfully" Mar 7 01:16:53.483714 containerd[1963]: time="2026-03-07T01:16:53.482946491Z" level=info msg="StopPodSandbox for \"0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec\"" Mar 7 01:16:53.483714 containerd[1963]: time="2026-03-07T01:16:53.483276075Z" level=info msg="StopPodSandbox for \"dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e\"" Mar 7 01:16:53.575416 kubelet[3191]: I0307 01:16:53.575318 3191 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-55b5cdb6f5-nz5mr" podStartSLOduration=5.428250549 podStartE2EDuration="9.573929441s" podCreationTimestamp="2026-03-07 01:16:44 +0000 UTC" firstStartedPulling="2026-03-07 01:16:46.892001896 +0000 UTC m=+56.593068244" lastFinishedPulling="2026-03-07 01:16:51.037680799 +0000 UTC m=+60.738747136" observedRunningTime="2026-03-07 01:16:52.160718979 +0000 UTC m=+61.861785326" watchObservedRunningTime="2026-03-07 01:16:53.573929441 +0000 UTC m=+63.274995790" Mar 7 01:16:53.639297 containerd[1963]: 2026-03-07 01:16:53.573 [INFO][5082] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e" Mar 7 01:16:53.639297 containerd[1963]: 2026-03-07 01:16:53.574 [INFO][5082] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e" iface="eth0" netns="/var/run/netns/cni-b3363d40-748a-5bf0-e050-240c907ed88a" Mar 7 01:16:53.639297 containerd[1963]: 2026-03-07 01:16:53.575 [INFO][5082] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e" iface="eth0" netns="/var/run/netns/cni-b3363d40-748a-5bf0-e050-240c907ed88a" Mar 7 01:16:53.639297 containerd[1963]: 2026-03-07 01:16:53.576 [INFO][5082] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e" iface="eth0" netns="/var/run/netns/cni-b3363d40-748a-5bf0-e050-240c907ed88a" Mar 7 01:16:53.639297 containerd[1963]: 2026-03-07 01:16:53.576 [INFO][5082] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e" Mar 7 01:16:53.639297 containerd[1963]: 2026-03-07 01:16:53.576 [INFO][5082] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e" Mar 7 01:16:53.639297 containerd[1963]: 2026-03-07 01:16:53.619 [INFO][5095] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e" HandleID="k8s-pod-network.dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e" Workload="ip--172--31--20--242-k8s-calico--kube--controllers--b98f9b6fc--npvst-eth0" Mar 7 01:16:53.639297 containerd[1963]: 2026-03-07 01:16:53.619 [INFO][5095] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:53.639297 containerd[1963]: 2026-03-07 01:16:53.619 [INFO][5095] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:16:53.639297 containerd[1963]: 2026-03-07 01:16:53.630 [WARNING][5095] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e" HandleID="k8s-pod-network.dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e" Workload="ip--172--31--20--242-k8s-calico--kube--controllers--b98f9b6fc--npvst-eth0" Mar 7 01:16:53.639297 containerd[1963]: 2026-03-07 01:16:53.630 [INFO][5095] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e" HandleID="k8s-pod-network.dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e" Workload="ip--172--31--20--242-k8s-calico--kube--controllers--b98f9b6fc--npvst-eth0" Mar 7 01:16:53.639297 containerd[1963]: 2026-03-07 01:16:53.632 [INFO][5095] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:16:53.639297 containerd[1963]: 2026-03-07 01:16:53.635 [INFO][5082] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e" Mar 7 01:16:53.640680 containerd[1963]: time="2026-03-07T01:16:53.640109089Z" level=info msg="TearDown network for sandbox \"dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e\" successfully" Mar 7 01:16:53.640680 containerd[1963]: time="2026-03-07T01:16:53.640144913Z" level=info msg="StopPodSandbox for \"dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e\" returns successfully" Mar 7 01:16:53.643832 containerd[1963]: time="2026-03-07T01:16:53.643279656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b98f9b6fc-npvst,Uid:f8ecbf7f-4830-437f-b292-9cd1d51ae57e,Namespace:calico-system,Attempt:1,}" Mar 7 01:16:53.647930 systemd[1]: run-netns-cni\x2db3363d40\x2d748a\x2d5bf0\x2de050\x2d240c907ed88a.mount: Deactivated successfully. 
Mar 7 01:16:53.654503 containerd[1963]: 2026-03-07 01:16:53.580 [INFO][5083] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec" Mar 7 01:16:53.654503 containerd[1963]: 2026-03-07 01:16:53.580 [INFO][5083] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec" iface="eth0" netns="/var/run/netns/cni-aa9acf14-6199-ed52-ccaa-8cc4ba600827" Mar 7 01:16:53.654503 containerd[1963]: 2026-03-07 01:16:53.582 [INFO][5083] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec" iface="eth0" netns="/var/run/netns/cni-aa9acf14-6199-ed52-ccaa-8cc4ba600827" Mar 7 01:16:53.654503 containerd[1963]: 2026-03-07 01:16:53.583 [INFO][5083] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec" iface="eth0" netns="/var/run/netns/cni-aa9acf14-6199-ed52-ccaa-8cc4ba600827" Mar 7 01:16:53.654503 containerd[1963]: 2026-03-07 01:16:53.583 [INFO][5083] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec" Mar 7 01:16:53.654503 containerd[1963]: 2026-03-07 01:16:53.583 [INFO][5083] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec" Mar 7 01:16:53.654503 containerd[1963]: 2026-03-07 01:16:53.627 [INFO][5100] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec" HandleID="k8s-pod-network.0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec" Workload="ip--172--31--20--242-k8s-goldmane--5b85766d88--9r4d8-eth0" Mar 7 01:16:53.654503 containerd[1963]: 2026-03-07 01:16:53.627 [INFO][5100] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:53.654503 containerd[1963]: 2026-03-07 01:16:53.632 [INFO][5100] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:16:53.654503 containerd[1963]: 2026-03-07 01:16:53.645 [WARNING][5100] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec" HandleID="k8s-pod-network.0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec" Workload="ip--172--31--20--242-k8s-goldmane--5b85766d88--9r4d8-eth0" Mar 7 01:16:53.654503 containerd[1963]: 2026-03-07 01:16:53.645 [INFO][5100] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec" HandleID="k8s-pod-network.0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec" Workload="ip--172--31--20--242-k8s-goldmane--5b85766d88--9r4d8-eth0" Mar 7 01:16:53.654503 containerd[1963]: 2026-03-07 01:16:53.650 [INFO][5100] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:16:53.654503 containerd[1963]: 2026-03-07 01:16:53.652 [INFO][5083] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec" Mar 7 01:16:53.657193 containerd[1963]: time="2026-03-07T01:16:53.655619905Z" level=info msg="TearDown network for sandbox \"0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec\" successfully" Mar 7 01:16:53.657193 containerd[1963]: time="2026-03-07T01:16:53.655659497Z" level=info msg="StopPodSandbox for \"0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec\" returns successfully" Mar 7 01:16:53.659559 containerd[1963]: time="2026-03-07T01:16:53.658218748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-9r4d8,Uid:d317c1e8-3726-4466-b652-a1bd0a0fc939,Namespace:calico-system,Attempt:1,}" Mar 7 01:16:53.660897 systemd[1]: run-netns-cni\x2daa9acf14\x2d6199\x2ded52\x2dccaa\x2d8cc4ba600827.mount: Deactivated successfully. Mar 7 01:16:53.866781 systemd-networkd[1862]: cali3b4c16be479: Link UP Mar 7 01:16:53.867094 systemd-networkd[1862]: cali3b4c16be479: Gained carrier Mar 7 01:16:53.872684 (udev-worker)[5146]: Network interface NamePolicy= disabled on kernel command line. 
Mar 7 01:16:53.893820 containerd[1963]: 2026-03-07 01:16:53.757 [INFO][5108] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--242-k8s-calico--kube--controllers--b98f9b6fc--npvst-eth0 calico-kube-controllers-b98f9b6fc- calico-system f8ecbf7f-4830-437f-b292-9cd1d51ae57e 1064 0 2026-03-07 01:16:15 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:b98f9b6fc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-20-242 calico-kube-controllers-b98f9b6fc-npvst eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3b4c16be479 [] [] }} ContainerID="86d6c77346cc857c8437d1808f1c13ce232857bc98e288a50cad35a7a5c1b81e" Namespace="calico-system" Pod="calico-kube-controllers-b98f9b6fc-npvst" WorkloadEndpoint="ip--172--31--20--242-k8s-calico--kube--controllers--b98f9b6fc--npvst-" Mar 7 01:16:53.893820 containerd[1963]: 2026-03-07 01:16:53.757 [INFO][5108] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="86d6c77346cc857c8437d1808f1c13ce232857bc98e288a50cad35a7a5c1b81e" Namespace="calico-system" Pod="calico-kube-controllers-b98f9b6fc-npvst" WorkloadEndpoint="ip--172--31--20--242-k8s-calico--kube--controllers--b98f9b6fc--npvst-eth0" Mar 7 01:16:53.893820 containerd[1963]: 2026-03-07 01:16:53.801 [INFO][5132] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="86d6c77346cc857c8437d1808f1c13ce232857bc98e288a50cad35a7a5c1b81e" HandleID="k8s-pod-network.86d6c77346cc857c8437d1808f1c13ce232857bc98e288a50cad35a7a5c1b81e" Workload="ip--172--31--20--242-k8s-calico--kube--controllers--b98f9b6fc--npvst-eth0" Mar 7 01:16:53.893820 containerd[1963]: 2026-03-07 01:16:53.818 [INFO][5132] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="86d6c77346cc857c8437d1808f1c13ce232857bc98e288a50cad35a7a5c1b81e" HandleID="k8s-pod-network.86d6c77346cc857c8437d1808f1c13ce232857bc98e288a50cad35a7a5c1b81e" Workload="ip--172--31--20--242-k8s-calico--kube--controllers--b98f9b6fc--npvst-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fb870), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-242", "pod":"calico-kube-controllers-b98f9b6fc-npvst", "timestamp":"2026-03-07 01:16:53.80130295 +0000 UTC"}, Hostname:"ip-172-31-20-242", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001882c0)} Mar 7 01:16:53.893820 containerd[1963]: 2026-03-07 01:16:53.818 [INFO][5132] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:53.893820 containerd[1963]: 2026-03-07 01:16:53.818 [INFO][5132] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:16:53.893820 containerd[1963]: 2026-03-07 01:16:53.819 [INFO][5132] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-242' Mar 7 01:16:53.893820 containerd[1963]: 2026-03-07 01:16:53.821 [INFO][5132] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.86d6c77346cc857c8437d1808f1c13ce232857bc98e288a50cad35a7a5c1b81e" host="ip-172-31-20-242" Mar 7 01:16:53.893820 containerd[1963]: 2026-03-07 01:16:53.829 [INFO][5132] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-20-242" Mar 7 01:16:53.893820 containerd[1963]: 2026-03-07 01:16:53.835 [INFO][5132] ipam/ipam.go 526: Trying affinity for 192.168.81.192/26 host="ip-172-31-20-242" Mar 7 01:16:53.893820 containerd[1963]: 2026-03-07 01:16:53.837 [INFO][5132] ipam/ipam.go 160: Attempting to load block cidr=192.168.81.192/26 host="ip-172-31-20-242" Mar 7 01:16:53.893820 containerd[1963]: 2026-03-07 01:16:53.840 [INFO][5132] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.81.192/26 host="ip-172-31-20-242" Mar 7 01:16:53.893820 containerd[1963]: 2026-03-07 01:16:53.840 [INFO][5132] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.81.192/26 handle="k8s-pod-network.86d6c77346cc857c8437d1808f1c13ce232857bc98e288a50cad35a7a5c1b81e" host="ip-172-31-20-242" Mar 7 01:16:53.893820 containerd[1963]: 2026-03-07 01:16:53.842 [INFO][5132] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.86d6c77346cc857c8437d1808f1c13ce232857bc98e288a50cad35a7a5c1b81e Mar 7 01:16:53.893820 containerd[1963]: 2026-03-07 01:16:53.848 [INFO][5132] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.81.192/26 handle="k8s-pod-network.86d6c77346cc857c8437d1808f1c13ce232857bc98e288a50cad35a7a5c1b81e" host="ip-172-31-20-242" Mar 7 01:16:53.893820 containerd[1963]: 2026-03-07 01:16:53.859 [INFO][5132] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.81.194/26] block=192.168.81.192/26 
handle="k8s-pod-network.86d6c77346cc857c8437d1808f1c13ce232857bc98e288a50cad35a7a5c1b81e" host="ip-172-31-20-242" Mar 7 01:16:53.893820 containerd[1963]: 2026-03-07 01:16:53.859 [INFO][5132] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.81.194/26] handle="k8s-pod-network.86d6c77346cc857c8437d1808f1c13ce232857bc98e288a50cad35a7a5c1b81e" host="ip-172-31-20-242" Mar 7 01:16:53.893820 containerd[1963]: 2026-03-07 01:16:53.859 [INFO][5132] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:16:53.893820 containerd[1963]: 2026-03-07 01:16:53.859 [INFO][5132] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.81.194/26] IPv6=[] ContainerID="86d6c77346cc857c8437d1808f1c13ce232857bc98e288a50cad35a7a5c1b81e" HandleID="k8s-pod-network.86d6c77346cc857c8437d1808f1c13ce232857bc98e288a50cad35a7a5c1b81e" Workload="ip--172--31--20--242-k8s-calico--kube--controllers--b98f9b6fc--npvst-eth0" Mar 7 01:16:53.894907 containerd[1963]: 2026-03-07 01:16:53.862 [INFO][5108] cni-plugin/k8s.go 418: Populated endpoint ContainerID="86d6c77346cc857c8437d1808f1c13ce232857bc98e288a50cad35a7a5c1b81e" Namespace="calico-system" Pod="calico-kube-controllers-b98f9b6fc-npvst" WorkloadEndpoint="ip--172--31--20--242-k8s-calico--kube--controllers--b98f9b6fc--npvst-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--242-k8s-calico--kube--controllers--b98f9b6fc--npvst-eth0", GenerateName:"calico-kube-controllers-b98f9b6fc-", Namespace:"calico-system", SelfLink:"", UID:"f8ecbf7f-4830-437f-b292-9cd1d51ae57e", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b98f9b6fc", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-242", ContainerID:"", Pod:"calico-kube-controllers-b98f9b6fc-npvst", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.81.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3b4c16be479", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:16:53.894907 containerd[1963]: 2026-03-07 01:16:53.862 [INFO][5108] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.194/32] ContainerID="86d6c77346cc857c8437d1808f1c13ce232857bc98e288a50cad35a7a5c1b81e" Namespace="calico-system" Pod="calico-kube-controllers-b98f9b6fc-npvst" WorkloadEndpoint="ip--172--31--20--242-k8s-calico--kube--controllers--b98f9b6fc--npvst-eth0" Mar 7 01:16:53.894907 containerd[1963]: 2026-03-07 01:16:53.862 [INFO][5108] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3b4c16be479 ContainerID="86d6c77346cc857c8437d1808f1c13ce232857bc98e288a50cad35a7a5c1b81e" Namespace="calico-system" Pod="calico-kube-controllers-b98f9b6fc-npvst" WorkloadEndpoint="ip--172--31--20--242-k8s-calico--kube--controllers--b98f9b6fc--npvst-eth0" Mar 7 01:16:53.894907 containerd[1963]: 2026-03-07 01:16:53.866 [INFO][5108] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="86d6c77346cc857c8437d1808f1c13ce232857bc98e288a50cad35a7a5c1b81e" Namespace="calico-system" Pod="calico-kube-controllers-b98f9b6fc-npvst" 
WorkloadEndpoint="ip--172--31--20--242-k8s-calico--kube--controllers--b98f9b6fc--npvst-eth0" Mar 7 01:16:53.894907 containerd[1963]: 2026-03-07 01:16:53.870 [INFO][5108] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="86d6c77346cc857c8437d1808f1c13ce232857bc98e288a50cad35a7a5c1b81e" Namespace="calico-system" Pod="calico-kube-controllers-b98f9b6fc-npvst" WorkloadEndpoint="ip--172--31--20--242-k8s-calico--kube--controllers--b98f9b6fc--npvst-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--242-k8s-calico--kube--controllers--b98f9b6fc--npvst-eth0", GenerateName:"calico-kube-controllers-b98f9b6fc-", Namespace:"calico-system", SelfLink:"", UID:"f8ecbf7f-4830-437f-b292-9cd1d51ae57e", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b98f9b6fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-242", ContainerID:"86d6c77346cc857c8437d1808f1c13ce232857bc98e288a50cad35a7a5c1b81e", Pod:"calico-kube-controllers-b98f9b6fc-npvst", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.81.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3b4c16be479", MAC:"3e:5b:e8:a2:91:16", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:16:53.894907 containerd[1963]: 2026-03-07 01:16:53.889 [INFO][5108] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="86d6c77346cc857c8437d1808f1c13ce232857bc98e288a50cad35a7a5c1b81e" Namespace="calico-system" Pod="calico-kube-controllers-b98f9b6fc-npvst" WorkloadEndpoint="ip--172--31--20--242-k8s-calico--kube--controllers--b98f9b6fc--npvst-eth0" Mar 7 01:16:53.943626 containerd[1963]: time="2026-03-07T01:16:53.942267598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:16:53.943626 containerd[1963]: time="2026-03-07T01:16:53.943532800Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:16:53.943626 containerd[1963]: time="2026-03-07T01:16:53.943556591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:53.944655 containerd[1963]: time="2026-03-07T01:16:53.943970531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:53.988213 systemd[1]: Started cri-containerd-86d6c77346cc857c8437d1808f1c13ce232857bc98e288a50cad35a7a5c1b81e.scope - libcontainer container 86d6c77346cc857c8437d1808f1c13ce232857bc98e288a50cad35a7a5c1b81e. 
Mar 7 01:16:54.011192 systemd-networkd[1862]: cali7b9fe9696ae: Link UP Mar 7 01:16:54.012846 systemd-networkd[1862]: cali7b9fe9696ae: Gained carrier Mar 7 01:16:54.045287 containerd[1963]: 2026-03-07 01:16:53.760 [INFO][5118] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--242-k8s-goldmane--5b85766d88--9r4d8-eth0 goldmane-5b85766d88- calico-system d317c1e8-3726-4466-b652-a1bd0a0fc939 1065 0 2026-03-07 01:16:14 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-20-242 goldmane-5b85766d88-9r4d8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7b9fe9696ae [] [] }} ContainerID="09b646c205deab23ebff6a051a1dbb0ef7701dd3ee8e58e2e621d08384142b4e" Namespace="calico-system" Pod="goldmane-5b85766d88-9r4d8" WorkloadEndpoint="ip--172--31--20--242-k8s-goldmane--5b85766d88--9r4d8-" Mar 7 01:16:54.045287 containerd[1963]: 2026-03-07 01:16:53.760 [INFO][5118] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="09b646c205deab23ebff6a051a1dbb0ef7701dd3ee8e58e2e621d08384142b4e" Namespace="calico-system" Pod="goldmane-5b85766d88-9r4d8" WorkloadEndpoint="ip--172--31--20--242-k8s-goldmane--5b85766d88--9r4d8-eth0" Mar 7 01:16:54.045287 containerd[1963]: 2026-03-07 01:16:53.808 [INFO][5135] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="09b646c205deab23ebff6a051a1dbb0ef7701dd3ee8e58e2e621d08384142b4e" HandleID="k8s-pod-network.09b646c205deab23ebff6a051a1dbb0ef7701dd3ee8e58e2e621d08384142b4e" Workload="ip--172--31--20--242-k8s-goldmane--5b85766d88--9r4d8-eth0" Mar 7 01:16:54.045287 containerd[1963]: 2026-03-07 01:16:53.820 [INFO][5135] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="09b646c205deab23ebff6a051a1dbb0ef7701dd3ee8e58e2e621d08384142b4e" 
HandleID="k8s-pod-network.09b646c205deab23ebff6a051a1dbb0ef7701dd3ee8e58e2e621d08384142b4e" Workload="ip--172--31--20--242-k8s-goldmane--5b85766d88--9r4d8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fddd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-242", "pod":"goldmane-5b85766d88-9r4d8", "timestamp":"2026-03-07 01:16:53.808773435 +0000 UTC"}, Hostname:"ip-172-31-20-242", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002db600)} Mar 7 01:16:54.045287 containerd[1963]: 2026-03-07 01:16:53.820 [INFO][5135] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:54.045287 containerd[1963]: 2026-03-07 01:16:53.859 [INFO][5135] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:16:54.045287 containerd[1963]: 2026-03-07 01:16:53.859 [INFO][5135] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-242' Mar 7 01:16:54.045287 containerd[1963]: 2026-03-07 01:16:53.923 [INFO][5135] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.09b646c205deab23ebff6a051a1dbb0ef7701dd3ee8e58e2e621d08384142b4e" host="ip-172-31-20-242" Mar 7 01:16:54.045287 containerd[1963]: 2026-03-07 01:16:53.936 [INFO][5135] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-20-242" Mar 7 01:16:54.045287 containerd[1963]: 2026-03-07 01:16:53.945 [INFO][5135] ipam/ipam.go 526: Trying affinity for 192.168.81.192/26 host="ip-172-31-20-242" Mar 7 01:16:54.045287 containerd[1963]: 2026-03-07 01:16:53.949 [INFO][5135] ipam/ipam.go 160: Attempting to load block cidr=192.168.81.192/26 host="ip-172-31-20-242" Mar 7 01:16:54.045287 containerd[1963]: 2026-03-07 01:16:53.954 [INFO][5135] ipam/ipam.go 237: Affinity is confirmed and block has been loaded 
cidr=192.168.81.192/26 host="ip-172-31-20-242" Mar 7 01:16:54.045287 containerd[1963]: 2026-03-07 01:16:53.955 [INFO][5135] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.81.192/26 handle="k8s-pod-network.09b646c205deab23ebff6a051a1dbb0ef7701dd3ee8e58e2e621d08384142b4e" host="ip-172-31-20-242" Mar 7 01:16:54.045287 containerd[1963]: 2026-03-07 01:16:53.958 [INFO][5135] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.09b646c205deab23ebff6a051a1dbb0ef7701dd3ee8e58e2e621d08384142b4e Mar 7 01:16:54.045287 containerd[1963]: 2026-03-07 01:16:53.973 [INFO][5135] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.81.192/26 handle="k8s-pod-network.09b646c205deab23ebff6a051a1dbb0ef7701dd3ee8e58e2e621d08384142b4e" host="ip-172-31-20-242" Mar 7 01:16:54.045287 containerd[1963]: 2026-03-07 01:16:53.996 [INFO][5135] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.81.195/26] block=192.168.81.192/26 handle="k8s-pod-network.09b646c205deab23ebff6a051a1dbb0ef7701dd3ee8e58e2e621d08384142b4e" host="ip-172-31-20-242" Mar 7 01:16:54.045287 containerd[1963]: 2026-03-07 01:16:53.996 [INFO][5135] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.81.195/26] handle="k8s-pod-network.09b646c205deab23ebff6a051a1dbb0ef7701dd3ee8e58e2e621d08384142b4e" host="ip-172-31-20-242" Mar 7 01:16:54.045287 containerd[1963]: 2026-03-07 01:16:53.996 [INFO][5135] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 01:16:54.045287 containerd[1963]: 2026-03-07 01:16:53.997 [INFO][5135] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.81.195/26] IPv6=[] ContainerID="09b646c205deab23ebff6a051a1dbb0ef7701dd3ee8e58e2e621d08384142b4e" HandleID="k8s-pod-network.09b646c205deab23ebff6a051a1dbb0ef7701dd3ee8e58e2e621d08384142b4e" Workload="ip--172--31--20--242-k8s-goldmane--5b85766d88--9r4d8-eth0" Mar 7 01:16:54.047684 containerd[1963]: 2026-03-07 01:16:54.003 [INFO][5118] cni-plugin/k8s.go 418: Populated endpoint ContainerID="09b646c205deab23ebff6a051a1dbb0ef7701dd3ee8e58e2e621d08384142b4e" Namespace="calico-system" Pod="goldmane-5b85766d88-9r4d8" WorkloadEndpoint="ip--172--31--20--242-k8s-goldmane--5b85766d88--9r4d8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--242-k8s-goldmane--5b85766d88--9r4d8-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"d317c1e8-3726-4466-b652-a1bd0a0fc939", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-242", ContainerID:"", Pod:"goldmane-5b85766d88-9r4d8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.81.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali7b9fe9696ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:16:54.047684 containerd[1963]: 2026-03-07 01:16:54.003 [INFO][5118] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.195/32] ContainerID="09b646c205deab23ebff6a051a1dbb0ef7701dd3ee8e58e2e621d08384142b4e" Namespace="calico-system" Pod="goldmane-5b85766d88-9r4d8" WorkloadEndpoint="ip--172--31--20--242-k8s-goldmane--5b85766d88--9r4d8-eth0" Mar 7 01:16:54.047684 containerd[1963]: 2026-03-07 01:16:54.003 [INFO][5118] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7b9fe9696ae ContainerID="09b646c205deab23ebff6a051a1dbb0ef7701dd3ee8e58e2e621d08384142b4e" Namespace="calico-system" Pod="goldmane-5b85766d88-9r4d8" WorkloadEndpoint="ip--172--31--20--242-k8s-goldmane--5b85766d88--9r4d8-eth0" Mar 7 01:16:54.047684 containerd[1963]: 2026-03-07 01:16:54.014 [INFO][5118] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="09b646c205deab23ebff6a051a1dbb0ef7701dd3ee8e58e2e621d08384142b4e" Namespace="calico-system" Pod="goldmane-5b85766d88-9r4d8" WorkloadEndpoint="ip--172--31--20--242-k8s-goldmane--5b85766d88--9r4d8-eth0" Mar 7 01:16:54.047684 containerd[1963]: 2026-03-07 01:16:54.016 [INFO][5118] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="09b646c205deab23ebff6a051a1dbb0ef7701dd3ee8e58e2e621d08384142b4e" Namespace="calico-system" Pod="goldmane-5b85766d88-9r4d8" WorkloadEndpoint="ip--172--31--20--242-k8s-goldmane--5b85766d88--9r4d8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--242-k8s-goldmane--5b85766d88--9r4d8-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"d317c1e8-3726-4466-b652-a1bd0a0fc939", ResourceVersion:"1065", Generation:0, 
CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-242", ContainerID:"09b646c205deab23ebff6a051a1dbb0ef7701dd3ee8e58e2e621d08384142b4e", Pod:"goldmane-5b85766d88-9r4d8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.81.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7b9fe9696ae", MAC:"ee:43:54:d5:1d:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:16:54.047684 containerd[1963]: 2026-03-07 01:16:54.037 [INFO][5118] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="09b646c205deab23ebff6a051a1dbb0ef7701dd3ee8e58e2e621d08384142b4e" Namespace="calico-system" Pod="goldmane-5b85766d88-9r4d8" WorkloadEndpoint="ip--172--31--20--242-k8s-goldmane--5b85766d88--9r4d8-eth0" Mar 7 01:16:54.096029 containerd[1963]: time="2026-03-07T01:16:54.094893289Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:16:54.096264 containerd[1963]: time="2026-03-07T01:16:54.096225246Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:16:54.096402 containerd[1963]: time="2026-03-07T01:16:54.096373229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:54.096763 containerd[1963]: time="2026-03-07T01:16:54.096685063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:54.149204 systemd[1]: Started cri-containerd-09b646c205deab23ebff6a051a1dbb0ef7701dd3ee8e58e2e621d08384142b4e.scope - libcontainer container 09b646c205deab23ebff6a051a1dbb0ef7701dd3ee8e58e2e621d08384142b4e. Mar 7 01:16:54.154444 containerd[1963]: time="2026-03-07T01:16:54.154333994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b98f9b6fc-npvst,Uid:f8ecbf7f-4830-437f-b292-9cd1d51ae57e,Namespace:calico-system,Attempt:1,} returns sandbox id \"86d6c77346cc857c8437d1808f1c13ce232857bc98e288a50cad35a7a5c1b81e\"" Mar 7 01:16:54.158660 containerd[1963]: time="2026-03-07T01:16:54.158621742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 7 01:16:54.230209 containerd[1963]: time="2026-03-07T01:16:54.230152467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-9r4d8,Uid:d317c1e8-3726-4466-b652-a1bd0a0fc939,Namespace:calico-system,Attempt:1,} returns sandbox id \"09b646c205deab23ebff6a051a1dbb0ef7701dd3ee8e58e2e621d08384142b4e\"" Mar 7 01:16:54.483953 containerd[1963]: time="2026-03-07T01:16:54.483743320Z" level=info msg="StopPodSandbox for \"7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc\"" Mar 7 01:16:54.594016 containerd[1963]: 2026-03-07 01:16:54.544 [INFO][5271] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc" Mar 7 01:16:54.594016 containerd[1963]: 2026-03-07 01:16:54.544 [INFO][5271] 
cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc" iface="eth0" netns="/var/run/netns/cni-9bcc7b8a-8736-ecaf-7e02-79c219ae13c0" Mar 7 01:16:54.594016 containerd[1963]: 2026-03-07 01:16:54.544 [INFO][5271] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc" iface="eth0" netns="/var/run/netns/cni-9bcc7b8a-8736-ecaf-7e02-79c219ae13c0" Mar 7 01:16:54.594016 containerd[1963]: 2026-03-07 01:16:54.548 [INFO][5271] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc" iface="eth0" netns="/var/run/netns/cni-9bcc7b8a-8736-ecaf-7e02-79c219ae13c0" Mar 7 01:16:54.594016 containerd[1963]: 2026-03-07 01:16:54.548 [INFO][5271] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc" Mar 7 01:16:54.594016 containerd[1963]: 2026-03-07 01:16:54.548 [INFO][5271] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc" Mar 7 01:16:54.594016 containerd[1963]: 2026-03-07 01:16:54.579 [INFO][5279] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc" HandleID="k8s-pod-network.7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc" Workload="ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--xrvts-eth0" Mar 7 01:16:54.594016 containerd[1963]: 2026-03-07 01:16:54.579 [INFO][5279] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:54.594016 containerd[1963]: 2026-03-07 01:16:54.579 [INFO][5279] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:16:54.594016 containerd[1963]: 2026-03-07 01:16:54.586 [WARNING][5279] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc" HandleID="k8s-pod-network.7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc" Workload="ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--xrvts-eth0" Mar 7 01:16:54.594016 containerd[1963]: 2026-03-07 01:16:54.586 [INFO][5279] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc" HandleID="k8s-pod-network.7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc" Workload="ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--xrvts-eth0" Mar 7 01:16:54.594016 containerd[1963]: 2026-03-07 01:16:54.588 [INFO][5279] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:16:54.594016 containerd[1963]: 2026-03-07 01:16:54.590 [INFO][5271] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc" Mar 7 01:16:54.594016 containerd[1963]: time="2026-03-07T01:16:54.593635370Z" level=info msg="TearDown network for sandbox \"7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc\" successfully" Mar 7 01:16:54.594016 containerd[1963]: time="2026-03-07T01:16:54.593667526Z" level=info msg="StopPodSandbox for \"7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc\" returns successfully" Mar 7 01:16:54.596115 containerd[1963]: time="2026-03-07T01:16:54.594572277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64c7867f4-xrvts,Uid:c16a83f1-f0b9-4cb8-9fd4-5f908f7fd2c5,Namespace:calico-system,Attempt:1,}" Mar 7 01:16:54.656873 systemd[1]: run-netns-cni\x2d9bcc7b8a\x2d8736\x2decaf\x2d7e02\x2d79c219ae13c0.mount: Deactivated successfully. 
Mar 7 01:16:54.759209 systemd-networkd[1862]: cali309b23fb9a5: Link UP Mar 7 01:16:54.761013 systemd-networkd[1862]: cali309b23fb9a5: Gained carrier Mar 7 01:16:54.789109 containerd[1963]: 2026-03-07 01:16:54.664 [INFO][5287] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--xrvts-eth0 calico-apiserver-64c7867f4- calico-system c16a83f1-f0b9-4cb8-9fd4-5f908f7fd2c5 1078 0 2026-03-07 01:16:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64c7867f4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-20-242 calico-apiserver-64c7867f4-xrvts eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali309b23fb9a5 [] [] }} ContainerID="7daf763ddb0af10ef196aada98f31d1e2882a7be667f08bbe6c24a24b6941921" Namespace="calico-system" Pod="calico-apiserver-64c7867f4-xrvts" WorkloadEndpoint="ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--xrvts-" Mar 7 01:16:54.789109 containerd[1963]: 2026-03-07 01:16:54.664 [INFO][5287] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7daf763ddb0af10ef196aada98f31d1e2882a7be667f08bbe6c24a24b6941921" Namespace="calico-system" Pod="calico-apiserver-64c7867f4-xrvts" WorkloadEndpoint="ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--xrvts-eth0" Mar 7 01:16:54.789109 containerd[1963]: 2026-03-07 01:16:54.702 [INFO][5298] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7daf763ddb0af10ef196aada98f31d1e2882a7be667f08bbe6c24a24b6941921" HandleID="k8s-pod-network.7daf763ddb0af10ef196aada98f31d1e2882a7be667f08bbe6c24a24b6941921" Workload="ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--xrvts-eth0" Mar 7 01:16:54.789109 containerd[1963]: 2026-03-07 01:16:54.710 [INFO][5298] 
ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7daf763ddb0af10ef196aada98f31d1e2882a7be667f08bbe6c24a24b6941921" HandleID="k8s-pod-network.7daf763ddb0af10ef196aada98f31d1e2882a7be667f08bbe6c24a24b6941921" Workload="ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--xrvts-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef710), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-242", "pod":"calico-apiserver-64c7867f4-xrvts", "timestamp":"2026-03-07 01:16:54.702061598 +0000 UTC"}, Hostname:"ip-172-31-20-242", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003b7080)} Mar 7 01:16:54.789109 containerd[1963]: 2026-03-07 01:16:54.710 [INFO][5298] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:54.789109 containerd[1963]: 2026-03-07 01:16:54.710 [INFO][5298] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:16:54.789109 containerd[1963]: 2026-03-07 01:16:54.710 [INFO][5298] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-242' Mar 7 01:16:54.789109 containerd[1963]: 2026-03-07 01:16:54.713 [INFO][5298] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7daf763ddb0af10ef196aada98f31d1e2882a7be667f08bbe6c24a24b6941921" host="ip-172-31-20-242" Mar 7 01:16:54.789109 containerd[1963]: 2026-03-07 01:16:54.718 [INFO][5298] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-20-242" Mar 7 01:16:54.789109 containerd[1963]: 2026-03-07 01:16:54.728 [INFO][5298] ipam/ipam.go 526: Trying affinity for 192.168.81.192/26 host="ip-172-31-20-242" Mar 7 01:16:54.789109 containerd[1963]: 2026-03-07 01:16:54.731 [INFO][5298] ipam/ipam.go 160: Attempting to load block cidr=192.168.81.192/26 host="ip-172-31-20-242" Mar 7 01:16:54.789109 containerd[1963]: 2026-03-07 01:16:54.733 [INFO][5298] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.81.192/26 host="ip-172-31-20-242" Mar 7 01:16:54.789109 containerd[1963]: 2026-03-07 01:16:54.733 [INFO][5298] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.81.192/26 handle="k8s-pod-network.7daf763ddb0af10ef196aada98f31d1e2882a7be667f08bbe6c24a24b6941921" host="ip-172-31-20-242" Mar 7 01:16:54.789109 containerd[1963]: 2026-03-07 01:16:54.736 [INFO][5298] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7daf763ddb0af10ef196aada98f31d1e2882a7be667f08bbe6c24a24b6941921 Mar 7 01:16:54.789109 containerd[1963]: 2026-03-07 01:16:54.742 [INFO][5298] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.81.192/26 handle="k8s-pod-network.7daf763ddb0af10ef196aada98f31d1e2882a7be667f08bbe6c24a24b6941921" host="ip-172-31-20-242" Mar 7 01:16:54.789109 containerd[1963]: 2026-03-07 01:16:54.751 [INFO][5298] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.81.196/26] block=192.168.81.192/26 
handle="k8s-pod-network.7daf763ddb0af10ef196aada98f31d1e2882a7be667f08bbe6c24a24b6941921" host="ip-172-31-20-242" Mar 7 01:16:54.789109 containerd[1963]: 2026-03-07 01:16:54.752 [INFO][5298] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.81.196/26] handle="k8s-pod-network.7daf763ddb0af10ef196aada98f31d1e2882a7be667f08bbe6c24a24b6941921" host="ip-172-31-20-242" Mar 7 01:16:54.789109 containerd[1963]: 2026-03-07 01:16:54.752 [INFO][5298] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:16:54.789109 containerd[1963]: 2026-03-07 01:16:54.752 [INFO][5298] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.81.196/26] IPv6=[] ContainerID="7daf763ddb0af10ef196aada98f31d1e2882a7be667f08bbe6c24a24b6941921" HandleID="k8s-pod-network.7daf763ddb0af10ef196aada98f31d1e2882a7be667f08bbe6c24a24b6941921" Workload="ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--xrvts-eth0" Mar 7 01:16:54.790087 containerd[1963]: 2026-03-07 01:16:54.755 [INFO][5287] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7daf763ddb0af10ef196aada98f31d1e2882a7be667f08bbe6c24a24b6941921" Namespace="calico-system" Pod="calico-apiserver-64c7867f4-xrvts" WorkloadEndpoint="ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--xrvts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--xrvts-eth0", GenerateName:"calico-apiserver-64c7867f4-", Namespace:"calico-system", SelfLink:"", UID:"c16a83f1-f0b9-4cb8-9fd4-5f908f7fd2c5", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64c7867f4", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-242", ContainerID:"", Pod:"calico-apiserver-64c7867f4-xrvts", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali309b23fb9a5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:16:54.790087 containerd[1963]: 2026-03-07 01:16:54.755 [INFO][5287] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.196/32] ContainerID="7daf763ddb0af10ef196aada98f31d1e2882a7be667f08bbe6c24a24b6941921" Namespace="calico-system" Pod="calico-apiserver-64c7867f4-xrvts" WorkloadEndpoint="ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--xrvts-eth0" Mar 7 01:16:54.790087 containerd[1963]: 2026-03-07 01:16:54.755 [INFO][5287] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali309b23fb9a5 ContainerID="7daf763ddb0af10ef196aada98f31d1e2882a7be667f08bbe6c24a24b6941921" Namespace="calico-system" Pod="calico-apiserver-64c7867f4-xrvts" WorkloadEndpoint="ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--xrvts-eth0" Mar 7 01:16:54.790087 containerd[1963]: 2026-03-07 01:16:54.759 [INFO][5287] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7daf763ddb0af10ef196aada98f31d1e2882a7be667f08bbe6c24a24b6941921" Namespace="calico-system" Pod="calico-apiserver-64c7867f4-xrvts" WorkloadEndpoint="ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--xrvts-eth0" Mar 7 01:16:54.790087 containerd[1963]: 2026-03-07 01:16:54.764 [INFO][5287] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7daf763ddb0af10ef196aada98f31d1e2882a7be667f08bbe6c24a24b6941921" Namespace="calico-system" Pod="calico-apiserver-64c7867f4-xrvts" WorkloadEndpoint="ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--xrvts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--xrvts-eth0", GenerateName:"calico-apiserver-64c7867f4-", Namespace:"calico-system", SelfLink:"", UID:"c16a83f1-f0b9-4cb8-9fd4-5f908f7fd2c5", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64c7867f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-242", ContainerID:"7daf763ddb0af10ef196aada98f31d1e2882a7be667f08bbe6c24a24b6941921", Pod:"calico-apiserver-64c7867f4-xrvts", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali309b23fb9a5", MAC:"e6:fb:e0:fe:de:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:16:54.790087 containerd[1963]: 2026-03-07 01:16:54.785 [INFO][5287] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="7daf763ddb0af10ef196aada98f31d1e2882a7be667f08bbe6c24a24b6941921" Namespace="calico-system" Pod="calico-apiserver-64c7867f4-xrvts" WorkloadEndpoint="ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--xrvts-eth0" Mar 7 01:16:54.838493 containerd[1963]: time="2026-03-07T01:16:54.838360907Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:16:54.838705 containerd[1963]: time="2026-03-07T01:16:54.838456923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:16:54.838705 containerd[1963]: time="2026-03-07T01:16:54.838483079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:54.838705 containerd[1963]: time="2026-03-07T01:16:54.838587704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:54.890220 systemd[1]: Started cri-containerd-7daf763ddb0af10ef196aada98f31d1e2882a7be667f08bbe6c24a24b6941921.scope - libcontainer container 7daf763ddb0af10ef196aada98f31d1e2882a7be667f08bbe6c24a24b6941921. Mar 7 01:16:54.973603 containerd[1963]: time="2026-03-07T01:16:54.973549957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64c7867f4-xrvts,Uid:c16a83f1-f0b9-4cb8-9fd4-5f908f7fd2c5,Namespace:calico-system,Attempt:1,} returns sandbox id \"7daf763ddb0af10ef196aada98f31d1e2882a7be667f08bbe6c24a24b6941921\"" Mar 7 01:16:55.279394 systemd-networkd[1862]: cali7b9fe9696ae: Gained IPv6LL Mar 7 01:16:55.404269 systemd[1]: Started sshd@8-172.31.20.242:22-68.220.241.50:59944.service - OpenSSH per-connection server daemon (68.220.241.50:59944). 
Mar 7 01:16:55.480228 containerd[1963]: time="2026-03-07T01:16:55.480096427Z" level=info msg="StopPodSandbox for \"3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532\"" Mar 7 01:16:55.641135 containerd[1963]: 2026-03-07 01:16:55.569 [INFO][5382] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532" Mar 7 01:16:55.641135 containerd[1963]: 2026-03-07 01:16:55.571 [INFO][5382] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532" iface="eth0" netns="/var/run/netns/cni-003f65b0-7719-4835-6c26-7896412a4eef" Mar 7 01:16:55.641135 containerd[1963]: 2026-03-07 01:16:55.573 [INFO][5382] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532" iface="eth0" netns="/var/run/netns/cni-003f65b0-7719-4835-6c26-7896412a4eef" Mar 7 01:16:55.641135 containerd[1963]: 2026-03-07 01:16:55.574 [INFO][5382] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532" iface="eth0" netns="/var/run/netns/cni-003f65b0-7719-4835-6c26-7896412a4eef" Mar 7 01:16:55.641135 containerd[1963]: 2026-03-07 01:16:55.574 [INFO][5382] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532" Mar 7 01:16:55.641135 containerd[1963]: 2026-03-07 01:16:55.574 [INFO][5382] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532" Mar 7 01:16:55.641135 containerd[1963]: 2026-03-07 01:16:55.621 [INFO][5390] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532" HandleID="k8s-pod-network.3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532" Workload="ip--172--31--20--242-k8s-coredns--674b8bbfcf--lld45-eth0" Mar 7 01:16:55.641135 containerd[1963]: 2026-03-07 01:16:55.622 [INFO][5390] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:55.641135 containerd[1963]: 2026-03-07 01:16:55.622 [INFO][5390] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:16:55.641135 containerd[1963]: 2026-03-07 01:16:55.630 [WARNING][5390] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532" HandleID="k8s-pod-network.3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532" Workload="ip--172--31--20--242-k8s-coredns--674b8bbfcf--lld45-eth0" Mar 7 01:16:55.641135 containerd[1963]: 2026-03-07 01:16:55.630 [INFO][5390] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532" HandleID="k8s-pod-network.3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532" Workload="ip--172--31--20--242-k8s-coredns--674b8bbfcf--lld45-eth0" Mar 7 01:16:55.641135 containerd[1963]: 2026-03-07 01:16:55.632 [INFO][5390] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:16:55.641135 containerd[1963]: 2026-03-07 01:16:55.637 [INFO][5382] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532" Mar 7 01:16:55.643590 containerd[1963]: time="2026-03-07T01:16:55.643326455Z" level=info msg="TearDown network for sandbox \"3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532\" successfully" Mar 7 01:16:55.643590 containerd[1963]: time="2026-03-07T01:16:55.643367141Z" level=info msg="StopPodSandbox for \"3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532\" returns successfully" Mar 7 01:16:55.644845 containerd[1963]: time="2026-03-07T01:16:55.644711881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lld45,Uid:e240c855-c8b1-420d-93db-8b8e45e00b2c,Namespace:kube-system,Attempt:1,}" Mar 7 01:16:55.648109 systemd[1]: run-netns-cni\x2d003f65b0\x2d7719\x2d4835\x2d6c26\x2d7896412a4eef.mount: Deactivated successfully. 
Mar 7 01:16:55.855569 systemd-networkd[1862]: cali3b4c16be479: Gained IPv6LL Mar 7 01:16:55.861123 systemd-networkd[1862]: cali1c06797789c: Link UP Mar 7 01:16:55.861630 systemd-networkd[1862]: cali1c06797789c: Gained carrier Mar 7 01:16:55.904325 containerd[1963]: 2026-03-07 01:16:55.741 [INFO][5397] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--242-k8s-coredns--674b8bbfcf--lld45-eth0 coredns-674b8bbfcf- kube-system e240c855-c8b1-420d-93db-8b8e45e00b2c 1087 0 2026-03-07 01:15:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-20-242 coredns-674b8bbfcf-lld45 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1c06797789c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="71b1c1fe7598a84ed4fc147155a51ee7afb9384f791175db7c0269032aea638b" Namespace="kube-system" Pod="coredns-674b8bbfcf-lld45" WorkloadEndpoint="ip--172--31--20--242-k8s-coredns--674b8bbfcf--lld45-" Mar 7 01:16:55.904325 containerd[1963]: 2026-03-07 01:16:55.741 [INFO][5397] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="71b1c1fe7598a84ed4fc147155a51ee7afb9384f791175db7c0269032aea638b" Namespace="kube-system" Pod="coredns-674b8bbfcf-lld45" WorkloadEndpoint="ip--172--31--20--242-k8s-coredns--674b8bbfcf--lld45-eth0" Mar 7 01:16:55.904325 containerd[1963]: 2026-03-07 01:16:55.783 [INFO][5408] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="71b1c1fe7598a84ed4fc147155a51ee7afb9384f791175db7c0269032aea638b" HandleID="k8s-pod-network.71b1c1fe7598a84ed4fc147155a51ee7afb9384f791175db7c0269032aea638b" Workload="ip--172--31--20--242-k8s-coredns--674b8bbfcf--lld45-eth0" Mar 7 01:16:55.904325 containerd[1963]: 2026-03-07 01:16:55.792 [INFO][5408] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="71b1c1fe7598a84ed4fc147155a51ee7afb9384f791175db7c0269032aea638b" HandleID="k8s-pod-network.71b1c1fe7598a84ed4fc147155a51ee7afb9384f791175db7c0269032aea638b" Workload="ip--172--31--20--242-k8s-coredns--674b8bbfcf--lld45-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fdaf0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-20-242", "pod":"coredns-674b8bbfcf-lld45", "timestamp":"2026-03-07 01:16:55.783891572 +0000 UTC"}, Hostname:"ip-172-31-20-242", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002db600)} Mar 7 01:16:55.904325 containerd[1963]: 2026-03-07 01:16:55.792 [INFO][5408] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:55.904325 containerd[1963]: 2026-03-07 01:16:55.793 [INFO][5408] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:16:55.904325 containerd[1963]: 2026-03-07 01:16:55.793 [INFO][5408] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-242' Mar 7 01:16:55.904325 containerd[1963]: 2026-03-07 01:16:55.798 [INFO][5408] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.71b1c1fe7598a84ed4fc147155a51ee7afb9384f791175db7c0269032aea638b" host="ip-172-31-20-242" Mar 7 01:16:55.904325 containerd[1963]: 2026-03-07 01:16:55.808 [INFO][5408] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-20-242" Mar 7 01:16:55.904325 containerd[1963]: 2026-03-07 01:16:55.817 [INFO][5408] ipam/ipam.go 526: Trying affinity for 192.168.81.192/26 host="ip-172-31-20-242" Mar 7 01:16:55.904325 containerd[1963]: 2026-03-07 01:16:55.820 [INFO][5408] ipam/ipam.go 160: Attempting to load block cidr=192.168.81.192/26 host="ip-172-31-20-242" Mar 7 01:16:55.904325 containerd[1963]: 2026-03-07 01:16:55.825 [INFO][5408] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.81.192/26 host="ip-172-31-20-242" Mar 7 01:16:55.904325 containerd[1963]: 2026-03-07 01:16:55.825 [INFO][5408] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.81.192/26 handle="k8s-pod-network.71b1c1fe7598a84ed4fc147155a51ee7afb9384f791175db7c0269032aea638b" host="ip-172-31-20-242" Mar 7 01:16:55.904325 containerd[1963]: 2026-03-07 01:16:55.828 [INFO][5408] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.71b1c1fe7598a84ed4fc147155a51ee7afb9384f791175db7c0269032aea638b Mar 7 01:16:55.904325 containerd[1963]: 2026-03-07 01:16:55.837 [INFO][5408] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.81.192/26 handle="k8s-pod-network.71b1c1fe7598a84ed4fc147155a51ee7afb9384f791175db7c0269032aea638b" host="ip-172-31-20-242" Mar 7 01:16:55.904325 containerd[1963]: 2026-03-07 01:16:55.849 [INFO][5408] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.81.197/26] block=192.168.81.192/26 
handle="k8s-pod-network.71b1c1fe7598a84ed4fc147155a51ee7afb9384f791175db7c0269032aea638b" host="ip-172-31-20-242" Mar 7 01:16:55.904325 containerd[1963]: 2026-03-07 01:16:55.849 [INFO][5408] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.81.197/26] handle="k8s-pod-network.71b1c1fe7598a84ed4fc147155a51ee7afb9384f791175db7c0269032aea638b" host="ip-172-31-20-242" Mar 7 01:16:55.904325 containerd[1963]: 2026-03-07 01:16:55.849 [INFO][5408] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:16:55.904325 containerd[1963]: 2026-03-07 01:16:55.849 [INFO][5408] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.81.197/26] IPv6=[] ContainerID="71b1c1fe7598a84ed4fc147155a51ee7afb9384f791175db7c0269032aea638b" HandleID="k8s-pod-network.71b1c1fe7598a84ed4fc147155a51ee7afb9384f791175db7c0269032aea638b" Workload="ip--172--31--20--242-k8s-coredns--674b8bbfcf--lld45-eth0" Mar 7 01:16:55.908605 containerd[1963]: 2026-03-07 01:16:55.852 [INFO][5397] cni-plugin/k8s.go 418: Populated endpoint ContainerID="71b1c1fe7598a84ed4fc147155a51ee7afb9384f791175db7c0269032aea638b" Namespace="kube-system" Pod="coredns-674b8bbfcf-lld45" WorkloadEndpoint="ip--172--31--20--242-k8s-coredns--674b8bbfcf--lld45-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--242-k8s-coredns--674b8bbfcf--lld45-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e240c855-c8b1-420d-93db-8b8e45e00b2c", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 15, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-242", ContainerID:"", Pod:"coredns-674b8bbfcf-lld45", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1c06797789c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:16:55.908605 containerd[1963]: 2026-03-07 01:16:55.853 [INFO][5397] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.197/32] ContainerID="71b1c1fe7598a84ed4fc147155a51ee7afb9384f791175db7c0269032aea638b" Namespace="kube-system" Pod="coredns-674b8bbfcf-lld45" WorkloadEndpoint="ip--172--31--20--242-k8s-coredns--674b8bbfcf--lld45-eth0" Mar 7 01:16:55.908605 containerd[1963]: 2026-03-07 01:16:55.853 [INFO][5397] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1c06797789c ContainerID="71b1c1fe7598a84ed4fc147155a51ee7afb9384f791175db7c0269032aea638b" Namespace="kube-system" Pod="coredns-674b8bbfcf-lld45" WorkloadEndpoint="ip--172--31--20--242-k8s-coredns--674b8bbfcf--lld45-eth0" Mar 7 01:16:55.908605 containerd[1963]: 2026-03-07 01:16:55.866 [INFO][5397] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="71b1c1fe7598a84ed4fc147155a51ee7afb9384f791175db7c0269032aea638b" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-lld45" WorkloadEndpoint="ip--172--31--20--242-k8s-coredns--674b8bbfcf--lld45-eth0" Mar 7 01:16:55.908605 containerd[1963]: 2026-03-07 01:16:55.866 [INFO][5397] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="71b1c1fe7598a84ed4fc147155a51ee7afb9384f791175db7c0269032aea638b" Namespace="kube-system" Pod="coredns-674b8bbfcf-lld45" WorkloadEndpoint="ip--172--31--20--242-k8s-coredns--674b8bbfcf--lld45-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--242-k8s-coredns--674b8bbfcf--lld45-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e240c855-c8b1-420d-93db-8b8e45e00b2c", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 15, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-242", ContainerID:"71b1c1fe7598a84ed4fc147155a51ee7afb9384f791175db7c0269032aea638b", Pod:"coredns-674b8bbfcf-lld45", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1c06797789c", MAC:"ce:6b:db:0a:06:07", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:16:55.908605 containerd[1963]: 2026-03-07 01:16:55.896 [INFO][5397] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="71b1c1fe7598a84ed4fc147155a51ee7afb9384f791175db7c0269032aea638b" Namespace="kube-system" Pod="coredns-674b8bbfcf-lld45" WorkloadEndpoint="ip--172--31--20--242-k8s-coredns--674b8bbfcf--lld45-eth0" Mar 7 01:16:55.910972 sshd[5371]: Accepted publickey for core from 68.220.241.50 port 59944 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:16:55.914677 sshd[5371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:16:55.928447 systemd-logind[1957]: New session 9 of user core. Mar 7 01:16:55.936444 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 7 01:16:55.989143 containerd[1963]: time="2026-03-07T01:16:55.987791791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:16:55.989143 containerd[1963]: time="2026-03-07T01:16:55.988386145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:16:55.989143 containerd[1963]: time="2026-03-07T01:16:55.988408451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:55.989143 containerd[1963]: time="2026-03-07T01:16:55.988536340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:56.042227 systemd[1]: Started cri-containerd-71b1c1fe7598a84ed4fc147155a51ee7afb9384f791175db7c0269032aea638b.scope - libcontainer container 71b1c1fe7598a84ed4fc147155a51ee7afb9384f791175db7c0269032aea638b. Mar 7 01:16:56.111164 systemd-networkd[1862]: cali309b23fb9a5: Gained IPv6LL Mar 7 01:16:56.139373 containerd[1963]: time="2026-03-07T01:16:56.139313384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lld45,Uid:e240c855-c8b1-420d-93db-8b8e45e00b2c,Namespace:kube-system,Attempt:1,} returns sandbox id \"71b1c1fe7598a84ed4fc147155a51ee7afb9384f791175db7c0269032aea638b\"" Mar 7 01:16:56.152423 containerd[1963]: time="2026-03-07T01:16:56.152285483Z" level=info msg="CreateContainer within sandbox \"71b1c1fe7598a84ed4fc147155a51ee7afb9384f791175db7c0269032aea638b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:16:56.264940 containerd[1963]: time="2026-03-07T01:16:56.260883402Z" level=info msg="CreateContainer within sandbox \"71b1c1fe7598a84ed4fc147155a51ee7afb9384f791175db7c0269032aea638b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"df68affa6f85092d08bc432ed90c7e7f2f99bc94f13966fe461c1a849d5021ae\"" Mar 7 01:16:56.264940 containerd[1963]: time="2026-03-07T01:16:56.262046660Z" level=info msg="StartContainer for \"df68affa6f85092d08bc432ed90c7e7f2f99bc94f13966fe461c1a849d5021ae\"" Mar 7 01:16:56.345962 systemd[1]: Started cri-containerd-df68affa6f85092d08bc432ed90c7e7f2f99bc94f13966fe461c1a849d5021ae.scope - libcontainer container df68affa6f85092d08bc432ed90c7e7f2f99bc94f13966fe461c1a849d5021ae. Mar 7 01:16:56.398209 sshd[5371]: pam_unix(sshd:session): session closed for user core Mar 7 01:16:56.407471 systemd[1]: sshd@8-172.31.20.242:22-68.220.241.50:59944.service: Deactivated successfully. Mar 7 01:16:56.412815 systemd[1]: session-9.scope: Deactivated successfully. 
Mar 7 01:16:56.418506 systemd-logind[1957]: Session 9 logged out. Waiting for processes to exit. Mar 7 01:16:56.425463 systemd-logind[1957]: Removed session 9. Mar 7 01:16:56.442276 containerd[1963]: time="2026-03-07T01:16:56.442231424Z" level=info msg="StartContainer for \"df68affa6f85092d08bc432ed90c7e7f2f99bc94f13966fe461c1a849d5021ae\" returns successfully" Mar 7 01:16:56.481183 containerd[1963]: time="2026-03-07T01:16:56.480969711Z" level=info msg="StopPodSandbox for \"edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f\"" Mar 7 01:16:56.746910 containerd[1963]: 2026-03-07 01:16:56.605 [INFO][5531] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f" Mar 7 01:16:56.746910 containerd[1963]: 2026-03-07 01:16:56.605 [INFO][5531] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f" iface="eth0" netns="/var/run/netns/cni-0cec39f4-ca5b-601c-3214-f2019f44604d" Mar 7 01:16:56.746910 containerd[1963]: 2026-03-07 01:16:56.609 [INFO][5531] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f" iface="eth0" netns="/var/run/netns/cni-0cec39f4-ca5b-601c-3214-f2019f44604d" Mar 7 01:16:56.746910 containerd[1963]: 2026-03-07 01:16:56.610 [INFO][5531] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f" iface="eth0" netns="/var/run/netns/cni-0cec39f4-ca5b-601c-3214-f2019f44604d" Mar 7 01:16:56.746910 containerd[1963]: 2026-03-07 01:16:56.610 [INFO][5531] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f" Mar 7 01:16:56.746910 containerd[1963]: 2026-03-07 01:16:56.610 [INFO][5531] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f" Mar 7 01:16:56.746910 containerd[1963]: 2026-03-07 01:16:56.706 [INFO][5542] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f" HandleID="k8s-pod-network.edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f" Workload="ip--172--31--20--242-k8s-coredns--674b8bbfcf--hx6bb-eth0" Mar 7 01:16:56.746910 containerd[1963]: 2026-03-07 01:16:56.707 [INFO][5542] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:56.746910 containerd[1963]: 2026-03-07 01:16:56.707 [INFO][5542] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:16:56.746910 containerd[1963]: 2026-03-07 01:16:56.728 [WARNING][5542] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f" HandleID="k8s-pod-network.edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f" Workload="ip--172--31--20--242-k8s-coredns--674b8bbfcf--hx6bb-eth0" Mar 7 01:16:56.746910 containerd[1963]: 2026-03-07 01:16:56.728 [INFO][5542] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f" HandleID="k8s-pod-network.edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f" Workload="ip--172--31--20--242-k8s-coredns--674b8bbfcf--hx6bb-eth0" Mar 7 01:16:56.746910 containerd[1963]: 2026-03-07 01:16:56.733 [INFO][5542] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:16:56.746910 containerd[1963]: 2026-03-07 01:16:56.740 [INFO][5531] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f" Mar 7 01:16:56.751226 containerd[1963]: time="2026-03-07T01:16:56.748045084Z" level=info msg="TearDown network for sandbox \"edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f\" successfully" Mar 7 01:16:56.751226 containerd[1963]: time="2026-03-07T01:16:56.748080787Z" level=info msg="StopPodSandbox for \"edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f\" returns successfully" Mar 7 01:16:56.751226 containerd[1963]: time="2026-03-07T01:16:56.749954638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hx6bb,Uid:f74be19c-3e5a-4ec1-971f-5534f5ca8d72,Namespace:kube-system,Attempt:1,}" Mar 7 01:16:56.757582 systemd[1]: run-netns-cni\x2d0cec39f4\x2dca5b\x2d601c\x2d3214\x2df2019f44604d.mount: Deactivated successfully. 
Mar 7 01:16:57.091318 systemd-networkd[1862]: calid39b80d6526: Link UP Mar 7 01:16:57.091903 systemd-networkd[1862]: calid39b80d6526: Gained carrier Mar 7 01:16:57.154169 containerd[1963]: 2026-03-07 01:16:56.907 [INFO][5557] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--242-k8s-coredns--674b8bbfcf--hx6bb-eth0 coredns-674b8bbfcf- kube-system f74be19c-3e5a-4ec1-971f-5534f5ca8d72 1096 0 2026-03-07 01:15:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-20-242 coredns-674b8bbfcf-hx6bb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid39b80d6526 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9227ea925ad2aa51b2c7d8dd57bf9e228100c0babcca0725af326c50d8b0491f" Namespace="kube-system" Pod="coredns-674b8bbfcf-hx6bb" WorkloadEndpoint="ip--172--31--20--242-k8s-coredns--674b8bbfcf--hx6bb-" Mar 7 01:16:57.154169 containerd[1963]: 2026-03-07 01:16:56.907 [INFO][5557] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9227ea925ad2aa51b2c7d8dd57bf9e228100c0babcca0725af326c50d8b0491f" Namespace="kube-system" Pod="coredns-674b8bbfcf-hx6bb" WorkloadEndpoint="ip--172--31--20--242-k8s-coredns--674b8bbfcf--hx6bb-eth0" Mar 7 01:16:57.154169 containerd[1963]: 2026-03-07 01:16:56.983 [INFO][5573] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9227ea925ad2aa51b2c7d8dd57bf9e228100c0babcca0725af326c50d8b0491f" HandleID="k8s-pod-network.9227ea925ad2aa51b2c7d8dd57bf9e228100c0babcca0725af326c50d8b0491f" Workload="ip--172--31--20--242-k8s-coredns--674b8bbfcf--hx6bb-eth0" Mar 7 01:16:57.154169 containerd[1963]: 2026-03-07 01:16:56.999 [INFO][5573] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="9227ea925ad2aa51b2c7d8dd57bf9e228100c0babcca0725af326c50d8b0491f" HandleID="k8s-pod-network.9227ea925ad2aa51b2c7d8dd57bf9e228100c0babcca0725af326c50d8b0491f" Workload="ip--172--31--20--242-k8s-coredns--674b8bbfcf--hx6bb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000367ec0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-20-242", "pod":"coredns-674b8bbfcf-hx6bb", "timestamp":"2026-03-07 01:16:56.983278357 +0000 UTC"}, Hostname:"ip-172-31-20-242", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000e86e0)} Mar 7 01:16:57.154169 containerd[1963]: 2026-03-07 01:16:56.999 [INFO][5573] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:57.154169 containerd[1963]: 2026-03-07 01:16:56.999 [INFO][5573] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:16:57.154169 containerd[1963]: 2026-03-07 01:16:56.999 [INFO][5573] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-242' Mar 7 01:16:57.154169 containerd[1963]: 2026-03-07 01:16:57.005 [INFO][5573] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9227ea925ad2aa51b2c7d8dd57bf9e228100c0babcca0725af326c50d8b0491f" host="ip-172-31-20-242" Mar 7 01:16:57.154169 containerd[1963]: 2026-03-07 01:16:57.015 [INFO][5573] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-20-242" Mar 7 01:16:57.154169 containerd[1963]: 2026-03-07 01:16:57.030 [INFO][5573] ipam/ipam.go 526: Trying affinity for 192.168.81.192/26 host="ip-172-31-20-242" Mar 7 01:16:57.154169 containerd[1963]: 2026-03-07 01:16:57.036 [INFO][5573] ipam/ipam.go 160: Attempting to load block cidr=192.168.81.192/26 host="ip-172-31-20-242" Mar 7 01:16:57.154169 containerd[1963]: 2026-03-07 01:16:57.040 [INFO][5573] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.81.192/26 host="ip-172-31-20-242" Mar 7 01:16:57.154169 containerd[1963]: 2026-03-07 01:16:57.040 [INFO][5573] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.81.192/26 handle="k8s-pod-network.9227ea925ad2aa51b2c7d8dd57bf9e228100c0babcca0725af326c50d8b0491f" host="ip-172-31-20-242" Mar 7 01:16:57.154169 containerd[1963]: 2026-03-07 01:16:57.044 [INFO][5573] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9227ea925ad2aa51b2c7d8dd57bf9e228100c0babcca0725af326c50d8b0491f Mar 7 01:16:57.154169 containerd[1963]: 2026-03-07 01:16:57.054 [INFO][5573] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.81.192/26 handle="k8s-pod-network.9227ea925ad2aa51b2c7d8dd57bf9e228100c0babcca0725af326c50d8b0491f" host="ip-172-31-20-242" Mar 7 01:16:57.154169 containerd[1963]: 2026-03-07 01:16:57.075 [INFO][5573] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.81.198/26] block=192.168.81.192/26 
handle="k8s-pod-network.9227ea925ad2aa51b2c7d8dd57bf9e228100c0babcca0725af326c50d8b0491f" host="ip-172-31-20-242" Mar 7 01:16:57.154169 containerd[1963]: 2026-03-07 01:16:57.075 [INFO][5573] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.81.198/26] handle="k8s-pod-network.9227ea925ad2aa51b2c7d8dd57bf9e228100c0babcca0725af326c50d8b0491f" host="ip-172-31-20-242" Mar 7 01:16:57.154169 containerd[1963]: 2026-03-07 01:16:57.075 [INFO][5573] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:16:57.154169 containerd[1963]: 2026-03-07 01:16:57.075 [INFO][5573] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.81.198/26] IPv6=[] ContainerID="9227ea925ad2aa51b2c7d8dd57bf9e228100c0babcca0725af326c50d8b0491f" HandleID="k8s-pod-network.9227ea925ad2aa51b2c7d8dd57bf9e228100c0babcca0725af326c50d8b0491f" Workload="ip--172--31--20--242-k8s-coredns--674b8bbfcf--hx6bb-eth0" Mar 7 01:16:57.155822 containerd[1963]: 2026-03-07 01:16:57.083 [INFO][5557] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9227ea925ad2aa51b2c7d8dd57bf9e228100c0babcca0725af326c50d8b0491f" Namespace="kube-system" Pod="coredns-674b8bbfcf-hx6bb" WorkloadEndpoint="ip--172--31--20--242-k8s-coredns--674b8bbfcf--hx6bb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--242-k8s-coredns--674b8bbfcf--hx6bb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f74be19c-3e5a-4ec1-971f-5534f5ca8d72", ResourceVersion:"1096", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 15, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-242", ContainerID:"", Pod:"coredns-674b8bbfcf-hx6bb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid39b80d6526", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:16:57.155822 containerd[1963]: 2026-03-07 01:16:57.086 [INFO][5557] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.198/32] ContainerID="9227ea925ad2aa51b2c7d8dd57bf9e228100c0babcca0725af326c50d8b0491f" Namespace="kube-system" Pod="coredns-674b8bbfcf-hx6bb" WorkloadEndpoint="ip--172--31--20--242-k8s-coredns--674b8bbfcf--hx6bb-eth0" Mar 7 01:16:57.155822 containerd[1963]: 2026-03-07 01:16:57.086 [INFO][5557] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid39b80d6526 ContainerID="9227ea925ad2aa51b2c7d8dd57bf9e228100c0babcca0725af326c50d8b0491f" Namespace="kube-system" Pod="coredns-674b8bbfcf-hx6bb" WorkloadEndpoint="ip--172--31--20--242-k8s-coredns--674b8bbfcf--hx6bb-eth0" Mar 7 01:16:57.155822 containerd[1963]: 2026-03-07 01:16:57.092 [INFO][5557] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9227ea925ad2aa51b2c7d8dd57bf9e228100c0babcca0725af326c50d8b0491f" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-hx6bb" WorkloadEndpoint="ip--172--31--20--242-k8s-coredns--674b8bbfcf--hx6bb-eth0" Mar 7 01:16:57.155822 containerd[1963]: 2026-03-07 01:16:57.093 [INFO][5557] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9227ea925ad2aa51b2c7d8dd57bf9e228100c0babcca0725af326c50d8b0491f" Namespace="kube-system" Pod="coredns-674b8bbfcf-hx6bb" WorkloadEndpoint="ip--172--31--20--242-k8s-coredns--674b8bbfcf--hx6bb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--242-k8s-coredns--674b8bbfcf--hx6bb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f74be19c-3e5a-4ec1-971f-5534f5ca8d72", ResourceVersion:"1096", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 15, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-242", ContainerID:"9227ea925ad2aa51b2c7d8dd57bf9e228100c0babcca0725af326c50d8b0491f", Pod:"coredns-674b8bbfcf-hx6bb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid39b80d6526", MAC:"f6:8f:4d:10:00:18", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:16:57.155822 containerd[1963]: 2026-03-07 01:16:57.142 [INFO][5557] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9227ea925ad2aa51b2c7d8dd57bf9e228100c0babcca0725af326c50d8b0491f" Namespace="kube-system" Pod="coredns-674b8bbfcf-hx6bb" WorkloadEndpoint="ip--172--31--20--242-k8s-coredns--674b8bbfcf--hx6bb-eth0" Mar 7 01:16:57.377005 kubelet[3191]: I0307 01:16:57.375898 3191 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-lld45" podStartSLOduration=60.375865818 podStartE2EDuration="1m0.375865818s" podCreationTimestamp="2026-03-07 01:15:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:16:57.309537797 +0000 UTC m=+67.010604150" watchObservedRunningTime="2026-03-07 01:16:57.375865818 +0000 UTC m=+67.076932165" Mar 7 01:16:57.455780 systemd-networkd[1862]: cali1c06797789c: Gained IPv6LL Mar 7 01:16:57.467309 containerd[1963]: time="2026-03-07T01:16:57.462910193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:16:57.467309 containerd[1963]: time="2026-03-07T01:16:57.463075910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:16:57.467309 containerd[1963]: time="2026-03-07T01:16:57.463107683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:57.467309 containerd[1963]: time="2026-03-07T01:16:57.463252396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:57.490915 containerd[1963]: time="2026-03-07T01:16:57.490784033Z" level=info msg="StopPodSandbox for \"00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9\"" Mar 7 01:16:57.491844 containerd[1963]: time="2026-03-07T01:16:57.491240285Z" level=info msg="StopPodSandbox for \"c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2\"" Mar 7 01:16:57.649221 systemd[1]: Started cri-containerd-9227ea925ad2aa51b2c7d8dd57bf9e228100c0babcca0725af326c50d8b0491f.scope - libcontainer container 9227ea925ad2aa51b2c7d8dd57bf9e228100c0babcca0725af326c50d8b0491f. Mar 7 01:16:57.798628 containerd[1963]: time="2026-03-07T01:16:57.798583496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hx6bb,Uid:f74be19c-3e5a-4ec1-971f-5534f5ca8d72,Namespace:kube-system,Attempt:1,} returns sandbox id \"9227ea925ad2aa51b2c7d8dd57bf9e228100c0babcca0725af326c50d8b0491f\"" Mar 7 01:16:57.836454 containerd[1963]: time="2026-03-07T01:16:57.836407882Z" level=info msg="CreateContainer within sandbox \"9227ea925ad2aa51b2c7d8dd57bf9e228100c0babcca0725af326c50d8b0491f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:16:57.889574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2309326610.mount: Deactivated successfully. 
Mar 7 01:16:57.894255 containerd[1963]: time="2026-03-07T01:16:57.894211980Z" level=info msg="CreateContainer within sandbox \"9227ea925ad2aa51b2c7d8dd57bf9e228100c0babcca0725af326c50d8b0491f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ad465999d3d42568263166be13cc0ba3597332c00130768f68636c75d167078b\"" Mar 7 01:16:57.897367 containerd[1963]: time="2026-03-07T01:16:57.897324142Z" level=info msg="StartContainer for \"ad465999d3d42568263166be13cc0ba3597332c00130768f68636c75d167078b\"" Mar 7 01:16:57.970125 containerd[1963]: 2026-03-07 01:16:57.747 [INFO][5639] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2" Mar 7 01:16:57.970125 containerd[1963]: 2026-03-07 01:16:57.752 [INFO][5639] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2" iface="eth0" netns="/var/run/netns/cni-87980d72-e952-7704-db15-b66567e72737" Mar 7 01:16:57.970125 containerd[1963]: 2026-03-07 01:16:57.752 [INFO][5639] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2" iface="eth0" netns="/var/run/netns/cni-87980d72-e952-7704-db15-b66567e72737" Mar 7 01:16:57.970125 containerd[1963]: 2026-03-07 01:16:57.752 [INFO][5639] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2" iface="eth0" netns="/var/run/netns/cni-87980d72-e952-7704-db15-b66567e72737" Mar 7 01:16:57.970125 containerd[1963]: 2026-03-07 01:16:57.752 [INFO][5639] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2" Mar 7 01:16:57.970125 containerd[1963]: 2026-03-07 01:16:57.752 [INFO][5639] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2" Mar 7 01:16:57.970125 containerd[1963]: 2026-03-07 01:16:57.899 [INFO][5670] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2" HandleID="k8s-pod-network.c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2" Workload="ip--172--31--20--242-k8s-csi--node--driver--kdrcv-eth0" Mar 7 01:16:57.970125 containerd[1963]: 2026-03-07 01:16:57.899 [INFO][5670] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:57.970125 containerd[1963]: 2026-03-07 01:16:57.899 [INFO][5670] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:16:57.970125 containerd[1963]: 2026-03-07 01:16:57.920 [WARNING][5670] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2" HandleID="k8s-pod-network.c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2" Workload="ip--172--31--20--242-k8s-csi--node--driver--kdrcv-eth0" Mar 7 01:16:57.970125 containerd[1963]: 2026-03-07 01:16:57.920 [INFO][5670] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2" HandleID="k8s-pod-network.c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2" Workload="ip--172--31--20--242-k8s-csi--node--driver--kdrcv-eth0" Mar 7 01:16:57.970125 containerd[1963]: 2026-03-07 01:16:57.931 [INFO][5670] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:16:57.970125 containerd[1963]: 2026-03-07 01:16:57.947 [INFO][5639] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2" Mar 7 01:16:57.973036 containerd[1963]: time="2026-03-07T01:16:57.972051341Z" level=info msg="TearDown network for sandbox \"c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2\" successfully" Mar 7 01:16:57.973036 containerd[1963]: time="2026-03-07T01:16:57.972090082Z" level=info msg="StopPodSandbox for \"c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2\" returns successfully" Mar 7 01:16:57.975491 containerd[1963]: time="2026-03-07T01:16:57.975435980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kdrcv,Uid:9aed3645-1ca9-4273-9dbb-5a5fa746e5c3,Namespace:calico-system,Attempt:1,}" Mar 7 01:16:57.981443 systemd[1]: run-netns-cni\x2d87980d72\x2de952\x2d7704\x2ddb15\x2db66567e72737.mount: Deactivated successfully. 
Mar 7 01:16:57.990482 containerd[1963]: 2026-03-07 01:16:57.794 [INFO][5638] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9" Mar 7 01:16:57.990482 containerd[1963]: 2026-03-07 01:16:57.795 [INFO][5638] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9" iface="eth0" netns="/var/run/netns/cni-28f224c8-fc7c-25fd-4bd8-38604f9007ac" Mar 7 01:16:57.990482 containerd[1963]: 2026-03-07 01:16:57.796 [INFO][5638] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9" iface="eth0" netns="/var/run/netns/cni-28f224c8-fc7c-25fd-4bd8-38604f9007ac" Mar 7 01:16:57.990482 containerd[1963]: 2026-03-07 01:16:57.801 [INFO][5638] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9" iface="eth0" netns="/var/run/netns/cni-28f224c8-fc7c-25fd-4bd8-38604f9007ac" Mar 7 01:16:57.990482 containerd[1963]: 2026-03-07 01:16:57.801 [INFO][5638] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9" Mar 7 01:16:57.990482 containerd[1963]: 2026-03-07 01:16:57.801 [INFO][5638] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9" Mar 7 01:16:57.990482 containerd[1963]: 2026-03-07 01:16:57.907 [INFO][5682] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9" HandleID="k8s-pod-network.00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9" Workload="ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--fd5dm-eth0" Mar 7 01:16:57.990482 containerd[1963]: 2026-03-07 01:16:57.908 [INFO][5682] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:57.990482 containerd[1963]: 2026-03-07 01:16:57.935 [INFO][5682] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:16:57.990482 containerd[1963]: 2026-03-07 01:16:57.958 [WARNING][5682] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9" HandleID="k8s-pod-network.00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9" Workload="ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--fd5dm-eth0" Mar 7 01:16:57.990482 containerd[1963]: 2026-03-07 01:16:57.958 [INFO][5682] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9" HandleID="k8s-pod-network.00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9" Workload="ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--fd5dm-eth0" Mar 7 01:16:57.990482 containerd[1963]: 2026-03-07 01:16:57.963 [INFO][5682] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:16:57.990482 containerd[1963]: 2026-03-07 01:16:57.968 [INFO][5638] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9" Mar 7 01:16:57.996589 containerd[1963]: time="2026-03-07T01:16:57.996041631Z" level=info msg="TearDown network for sandbox \"00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9\" successfully" Mar 7 01:16:57.996589 containerd[1963]: time="2026-03-07T01:16:57.996087954Z" level=info msg="StopPodSandbox for \"00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9\" returns successfully" Mar 7 01:16:57.998435 containerd[1963]: time="2026-03-07T01:16:57.997350496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64c7867f4-fd5dm,Uid:c384657e-2841-43d4-89d4-ff693e5014b6,Namespace:calico-system,Attempt:1,}" Mar 7 01:16:57.996774 systemd[1]: run-netns-cni\x2d28f224c8\x2dfc7c\x2d25fd\x2d4bd8\x2d38604f9007ac.mount: Deactivated successfully. Mar 7 01:16:58.027224 systemd[1]: Started cri-containerd-ad465999d3d42568263166be13cc0ba3597332c00130768f68636c75d167078b.scope - libcontainer container ad465999d3d42568263166be13cc0ba3597332c00130768f68636c75d167078b. 
Mar 7 01:16:58.101293 containerd[1963]: time="2026-03-07T01:16:58.100514135Z" level=info msg="StartContainer for \"ad465999d3d42568263166be13cc0ba3597332c00130768f68636c75d167078b\" returns successfully" Mar 7 01:16:58.329509 kubelet[3191]: I0307 01:16:58.328852 3191 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-hx6bb" podStartSLOduration=61.328826026 podStartE2EDuration="1m1.328826026s" podCreationTimestamp="2026-03-07 01:15:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:16:58.323534076 +0000 UTC m=+68.024600425" watchObservedRunningTime="2026-03-07 01:16:58.328826026 +0000 UTC m=+68.029892373" Mar 7 01:16:58.469835 systemd-networkd[1862]: cali03104dcc0e6: Link UP Mar 7 01:16:58.473272 systemd-networkd[1862]: cali03104dcc0e6: Gained carrier Mar 7 01:16:58.525005 containerd[1963]: 2026-03-07 01:16:58.152 [INFO][5727] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--fd5dm-eth0 calico-apiserver-64c7867f4- calico-system c384657e-2841-43d4-89d4-ff693e5014b6 1114 0 2026-03-07 01:16:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64c7867f4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-20-242 calico-apiserver-64c7867f4-fd5dm eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali03104dcc0e6 [] [] }} ContainerID="6b1861f11e99fd46f4b64b8430e5c5d4756517301af2f8812a66c79840e63bad" Namespace="calico-system" Pod="calico-apiserver-64c7867f4-fd5dm" WorkloadEndpoint="ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--fd5dm-" Mar 7 01:16:58.525005 containerd[1963]: 2026-03-07 01:16:58.154 
[INFO][5727] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6b1861f11e99fd46f4b64b8430e5c5d4756517301af2f8812a66c79840e63bad" Namespace="calico-system" Pod="calico-apiserver-64c7867f4-fd5dm" WorkloadEndpoint="ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--fd5dm-eth0" Mar 7 01:16:58.525005 containerd[1963]: 2026-03-07 01:16:58.302 [INFO][5751] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6b1861f11e99fd46f4b64b8430e5c5d4756517301af2f8812a66c79840e63bad" HandleID="k8s-pod-network.6b1861f11e99fd46f4b64b8430e5c5d4756517301af2f8812a66c79840e63bad" Workload="ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--fd5dm-eth0" Mar 7 01:16:58.525005 containerd[1963]: 2026-03-07 01:16:58.342 [INFO][5751] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6b1861f11e99fd46f4b64b8430e5c5d4756517301af2f8812a66c79840e63bad" HandleID="k8s-pod-network.6b1861f11e99fd46f4b64b8430e5c5d4756517301af2f8812a66c79840e63bad" Workload="ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--fd5dm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000102930), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-242", "pod":"calico-apiserver-64c7867f4-fd5dm", "timestamp":"2026-03-07 01:16:58.302948451 +0000 UTC"}, Hostname:"ip-172-31-20-242", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001926e0)} Mar 7 01:16:58.525005 containerd[1963]: 2026-03-07 01:16:58.342 [INFO][5751] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:58.525005 containerd[1963]: 2026-03-07 01:16:58.342 [INFO][5751] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:16:58.525005 containerd[1963]: 2026-03-07 01:16:58.342 [INFO][5751] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-242' Mar 7 01:16:58.525005 containerd[1963]: 2026-03-07 01:16:58.353 [INFO][5751] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6b1861f11e99fd46f4b64b8430e5c5d4756517301af2f8812a66c79840e63bad" host="ip-172-31-20-242" Mar 7 01:16:58.525005 containerd[1963]: 2026-03-07 01:16:58.372 [INFO][5751] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-20-242" Mar 7 01:16:58.525005 containerd[1963]: 2026-03-07 01:16:58.389 [INFO][5751] ipam/ipam.go 526: Trying affinity for 192.168.81.192/26 host="ip-172-31-20-242" Mar 7 01:16:58.525005 containerd[1963]: 2026-03-07 01:16:58.395 [INFO][5751] ipam/ipam.go 160: Attempting to load block cidr=192.168.81.192/26 host="ip-172-31-20-242" Mar 7 01:16:58.525005 containerd[1963]: 2026-03-07 01:16:58.405 [INFO][5751] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.81.192/26 host="ip-172-31-20-242" Mar 7 01:16:58.525005 containerd[1963]: 2026-03-07 01:16:58.407 [INFO][5751] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.81.192/26 handle="k8s-pod-network.6b1861f11e99fd46f4b64b8430e5c5d4756517301af2f8812a66c79840e63bad" host="ip-172-31-20-242" Mar 7 01:16:58.525005 containerd[1963]: 2026-03-07 01:16:58.411 [INFO][5751] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6b1861f11e99fd46f4b64b8430e5c5d4756517301af2f8812a66c79840e63bad Mar 7 01:16:58.525005 containerd[1963]: 2026-03-07 01:16:58.422 [INFO][5751] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.81.192/26 handle="k8s-pod-network.6b1861f11e99fd46f4b64b8430e5c5d4756517301af2f8812a66c79840e63bad" host="ip-172-31-20-242" Mar 7 01:16:58.525005 containerd[1963]: 2026-03-07 01:16:58.447 [INFO][5751] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.81.199/26] block=192.168.81.192/26 
handle="k8s-pod-network.6b1861f11e99fd46f4b64b8430e5c5d4756517301af2f8812a66c79840e63bad" host="ip-172-31-20-242" Mar 7 01:16:58.525005 containerd[1963]: 2026-03-07 01:16:58.447 [INFO][5751] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.81.199/26] handle="k8s-pod-network.6b1861f11e99fd46f4b64b8430e5c5d4756517301af2f8812a66c79840e63bad" host="ip-172-31-20-242" Mar 7 01:16:58.525005 containerd[1963]: 2026-03-07 01:16:58.447 [INFO][5751] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:16:58.525005 containerd[1963]: 2026-03-07 01:16:58.447 [INFO][5751] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.81.199/26] IPv6=[] ContainerID="6b1861f11e99fd46f4b64b8430e5c5d4756517301af2f8812a66c79840e63bad" HandleID="k8s-pod-network.6b1861f11e99fd46f4b64b8430e5c5d4756517301af2f8812a66c79840e63bad" Workload="ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--fd5dm-eth0" Mar 7 01:16:58.526542 containerd[1963]: 2026-03-07 01:16:58.451 [INFO][5727] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6b1861f11e99fd46f4b64b8430e5c5d4756517301af2f8812a66c79840e63bad" Namespace="calico-system" Pod="calico-apiserver-64c7867f4-fd5dm" WorkloadEndpoint="ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--fd5dm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--fd5dm-eth0", GenerateName:"calico-apiserver-64c7867f4-", Namespace:"calico-system", SelfLink:"", UID:"c384657e-2841-43d4-89d4-ff693e5014b6", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64c7867f4", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-242", ContainerID:"", Pod:"calico-apiserver-64c7867f4-fd5dm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali03104dcc0e6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:16:58.526542 containerd[1963]: 2026-03-07 01:16:58.452 [INFO][5727] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.199/32] ContainerID="6b1861f11e99fd46f4b64b8430e5c5d4756517301af2f8812a66c79840e63bad" Namespace="calico-system" Pod="calico-apiserver-64c7867f4-fd5dm" WorkloadEndpoint="ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--fd5dm-eth0" Mar 7 01:16:58.526542 containerd[1963]: 2026-03-07 01:16:58.452 [INFO][5727] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali03104dcc0e6 ContainerID="6b1861f11e99fd46f4b64b8430e5c5d4756517301af2f8812a66c79840e63bad" Namespace="calico-system" Pod="calico-apiserver-64c7867f4-fd5dm" WorkloadEndpoint="ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--fd5dm-eth0" Mar 7 01:16:58.526542 containerd[1963]: 2026-03-07 01:16:58.473 [INFO][5727] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6b1861f11e99fd46f4b64b8430e5c5d4756517301af2f8812a66c79840e63bad" Namespace="calico-system" Pod="calico-apiserver-64c7867f4-fd5dm" WorkloadEndpoint="ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--fd5dm-eth0" Mar 7 01:16:58.526542 containerd[1963]: 2026-03-07 01:16:58.474 [INFO][5727] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6b1861f11e99fd46f4b64b8430e5c5d4756517301af2f8812a66c79840e63bad" Namespace="calico-system" Pod="calico-apiserver-64c7867f4-fd5dm" WorkloadEndpoint="ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--fd5dm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--fd5dm-eth0", GenerateName:"calico-apiserver-64c7867f4-", Namespace:"calico-system", SelfLink:"", UID:"c384657e-2841-43d4-89d4-ff693e5014b6", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64c7867f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-242", ContainerID:"6b1861f11e99fd46f4b64b8430e5c5d4756517301af2f8812a66c79840e63bad", Pod:"calico-apiserver-64c7867f4-fd5dm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali03104dcc0e6", MAC:"ce:7e:94:6d:0d:9a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:16:58.526542 containerd[1963]: 2026-03-07 01:16:58.510 [INFO][5727] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="6b1861f11e99fd46f4b64b8430e5c5d4756517301af2f8812a66c79840e63bad" Namespace="calico-system" Pod="calico-apiserver-64c7867f4-fd5dm" WorkloadEndpoint="ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--fd5dm-eth0" Mar 7 01:16:58.546458 systemd-networkd[1862]: calid39b80d6526: Gained IPv6LL Mar 7 01:16:58.687104 systemd-networkd[1862]: calid98f3a30368: Link UP Mar 7 01:16:58.702322 containerd[1963]: time="2026-03-07T01:16:58.701714566Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:16:58.702322 containerd[1963]: time="2026-03-07T01:16:58.701792729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:16:58.702322 containerd[1963]: time="2026-03-07T01:16:58.701820466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:58.705440 containerd[1963]: time="2026-03-07T01:16:58.703769356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:58.704361 systemd-networkd[1862]: calid98f3a30368: Gained carrier Mar 7 01:16:58.773735 containerd[1963]: 2026-03-07 01:16:58.208 [INFO][5710] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--242-k8s-csi--node--driver--kdrcv-eth0 csi-node-driver- calico-system 9aed3645-1ca9-4273-9dbb-5a5fa746e5c3 1113 0 2026-03-07 01:16:14 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-20-242 csi-node-driver-kdrcv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid98f3a30368 [] [] }} ContainerID="c3385c99c44febf3ffc426fd1ea7878299c3b5c3cab2f682cdf1da8809456837" Namespace="calico-system" Pod="csi-node-driver-kdrcv" WorkloadEndpoint="ip--172--31--20--242-k8s-csi--node--driver--kdrcv-" Mar 7 01:16:58.773735 containerd[1963]: 2026-03-07 01:16:58.208 [INFO][5710] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c3385c99c44febf3ffc426fd1ea7878299c3b5c3cab2f682cdf1da8809456837" Namespace="calico-system" Pod="csi-node-driver-kdrcv" WorkloadEndpoint="ip--172--31--20--242-k8s-csi--node--driver--kdrcv-eth0" Mar 7 01:16:58.773735 containerd[1963]: 2026-03-07 01:16:58.437 [INFO][5761] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c3385c99c44febf3ffc426fd1ea7878299c3b5c3cab2f682cdf1da8809456837" HandleID="k8s-pod-network.c3385c99c44febf3ffc426fd1ea7878299c3b5c3cab2f682cdf1da8809456837" Workload="ip--172--31--20--242-k8s-csi--node--driver--kdrcv-eth0" Mar 7 01:16:58.773735 containerd[1963]: 2026-03-07 01:16:58.480 [INFO][5761] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="c3385c99c44febf3ffc426fd1ea7878299c3b5c3cab2f682cdf1da8809456837" HandleID="k8s-pod-network.c3385c99c44febf3ffc426fd1ea7878299c3b5c3cab2f682cdf1da8809456837" Workload="ip--172--31--20--242-k8s-csi--node--driver--kdrcv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000367520), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-242", "pod":"csi-node-driver-kdrcv", "timestamp":"2026-03-07 01:16:58.437023701 +0000 UTC"}, Hostname:"ip-172-31-20-242", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00035cb00)} Mar 7 01:16:58.773735 containerd[1963]: 2026-03-07 01:16:58.480 [INFO][5761] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:58.773735 containerd[1963]: 2026-03-07 01:16:58.480 [INFO][5761] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:16:58.773735 containerd[1963]: 2026-03-07 01:16:58.480 [INFO][5761] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-242' Mar 7 01:16:58.773735 containerd[1963]: 2026-03-07 01:16:58.496 [INFO][5761] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c3385c99c44febf3ffc426fd1ea7878299c3b5c3cab2f682cdf1da8809456837" host="ip-172-31-20-242" Mar 7 01:16:58.773735 containerd[1963]: 2026-03-07 01:16:58.521 [INFO][5761] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-20-242" Mar 7 01:16:58.773735 containerd[1963]: 2026-03-07 01:16:58.555 [INFO][5761] ipam/ipam.go 526: Trying affinity for 192.168.81.192/26 host="ip-172-31-20-242" Mar 7 01:16:58.773735 containerd[1963]: 2026-03-07 01:16:58.562 [INFO][5761] ipam/ipam.go 160: Attempting to load block cidr=192.168.81.192/26 host="ip-172-31-20-242" Mar 7 01:16:58.773735 containerd[1963]: 2026-03-07 01:16:58.570 [INFO][5761] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.81.192/26 host="ip-172-31-20-242" Mar 7 01:16:58.773735 containerd[1963]: 2026-03-07 01:16:58.570 [INFO][5761] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.81.192/26 handle="k8s-pod-network.c3385c99c44febf3ffc426fd1ea7878299c3b5c3cab2f682cdf1da8809456837" host="ip-172-31-20-242" Mar 7 01:16:58.773735 containerd[1963]: 2026-03-07 01:16:58.576 [INFO][5761] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c3385c99c44febf3ffc426fd1ea7878299c3b5c3cab2f682cdf1da8809456837 Mar 7 01:16:58.773735 containerd[1963]: 2026-03-07 01:16:58.594 [INFO][5761] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.81.192/26 handle="k8s-pod-network.c3385c99c44febf3ffc426fd1ea7878299c3b5c3cab2f682cdf1da8809456837" host="ip-172-31-20-242" Mar 7 01:16:58.773735 containerd[1963]: 2026-03-07 01:16:58.627 [INFO][5761] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.81.200/26] block=192.168.81.192/26 
handle="k8s-pod-network.c3385c99c44febf3ffc426fd1ea7878299c3b5c3cab2f682cdf1da8809456837" host="ip-172-31-20-242" Mar 7 01:16:58.773735 containerd[1963]: 2026-03-07 01:16:58.627 [INFO][5761] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.81.200/26] handle="k8s-pod-network.c3385c99c44febf3ffc426fd1ea7878299c3b5c3cab2f682cdf1da8809456837" host="ip-172-31-20-242" Mar 7 01:16:58.773735 containerd[1963]: 2026-03-07 01:16:58.627 [INFO][5761] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:16:58.773735 containerd[1963]: 2026-03-07 01:16:58.627 [INFO][5761] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.81.200/26] IPv6=[] ContainerID="c3385c99c44febf3ffc426fd1ea7878299c3b5c3cab2f682cdf1da8809456837" HandleID="k8s-pod-network.c3385c99c44febf3ffc426fd1ea7878299c3b5c3cab2f682cdf1da8809456837" Workload="ip--172--31--20--242-k8s-csi--node--driver--kdrcv-eth0" Mar 7 01:16:58.774798 containerd[1963]: 2026-03-07 01:16:58.640 [INFO][5710] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c3385c99c44febf3ffc426fd1ea7878299c3b5c3cab2f682cdf1da8809456837" Namespace="calico-system" Pod="csi-node-driver-kdrcv" WorkloadEndpoint="ip--172--31--20--242-k8s-csi--node--driver--kdrcv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--242-k8s-csi--node--driver--kdrcv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9aed3645-1ca9-4273-9dbb-5a5fa746e5c3", ResourceVersion:"1113", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-242", ContainerID:"", Pod:"csi-node-driver-kdrcv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.81.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid98f3a30368", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:16:58.774798 containerd[1963]: 2026-03-07 01:16:58.641 [INFO][5710] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.200/32] ContainerID="c3385c99c44febf3ffc426fd1ea7878299c3b5c3cab2f682cdf1da8809456837" Namespace="calico-system" Pod="csi-node-driver-kdrcv" WorkloadEndpoint="ip--172--31--20--242-k8s-csi--node--driver--kdrcv-eth0" Mar 7 01:16:58.774798 containerd[1963]: 2026-03-07 01:16:58.641 [INFO][5710] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid98f3a30368 ContainerID="c3385c99c44febf3ffc426fd1ea7878299c3b5c3cab2f682cdf1da8809456837" Namespace="calico-system" Pod="csi-node-driver-kdrcv" WorkloadEndpoint="ip--172--31--20--242-k8s-csi--node--driver--kdrcv-eth0" Mar 7 01:16:58.774798 containerd[1963]: 2026-03-07 01:16:58.701 [INFO][5710] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c3385c99c44febf3ffc426fd1ea7878299c3b5c3cab2f682cdf1da8809456837" Namespace="calico-system" Pod="csi-node-driver-kdrcv" WorkloadEndpoint="ip--172--31--20--242-k8s-csi--node--driver--kdrcv-eth0" Mar 7 01:16:58.774798 containerd[1963]: 2026-03-07 01:16:58.716 [INFO][5710] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c3385c99c44febf3ffc426fd1ea7878299c3b5c3cab2f682cdf1da8809456837" Namespace="calico-system" Pod="csi-node-driver-kdrcv" WorkloadEndpoint="ip--172--31--20--242-k8s-csi--node--driver--kdrcv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--242-k8s-csi--node--driver--kdrcv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9aed3645-1ca9-4273-9dbb-5a5fa746e5c3", ResourceVersion:"1113", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-242", ContainerID:"c3385c99c44febf3ffc426fd1ea7878299c3b5c3cab2f682cdf1da8809456837", Pod:"csi-node-driver-kdrcv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.81.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid98f3a30368", MAC:"d2:d8:a0:c1:05:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:16:58.774798 containerd[1963]: 2026-03-07 01:16:58.749 [INFO][5710] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="c3385c99c44febf3ffc426fd1ea7878299c3b5c3cab2f682cdf1da8809456837" Namespace="calico-system" Pod="csi-node-driver-kdrcv" WorkloadEndpoint="ip--172--31--20--242-k8s-csi--node--driver--kdrcv-eth0" Mar 7 01:16:58.808993 systemd[1]: Started cri-containerd-6b1861f11e99fd46f4b64b8430e5c5d4756517301af2f8812a66c79840e63bad.scope - libcontainer container 6b1861f11e99fd46f4b64b8430e5c5d4756517301af2f8812a66c79840e63bad. Mar 7 01:16:58.872246 containerd[1963]: time="2026-03-07T01:16:58.871734037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:16:58.872246 containerd[1963]: time="2026-03-07T01:16:58.871839592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:16:58.872246 containerd[1963]: time="2026-03-07T01:16:58.871860364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:58.872246 containerd[1963]: time="2026-03-07T01:16:58.872001576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:58.976162 systemd[1]: Started cri-containerd-c3385c99c44febf3ffc426fd1ea7878299c3b5c3cab2f682cdf1da8809456837.scope - libcontainer container c3385c99c44febf3ffc426fd1ea7878299c3b5c3cab2f682cdf1da8809456837. 
Mar 7 01:16:59.041007 containerd[1963]: time="2026-03-07T01:16:59.040920929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64c7867f4-fd5dm,Uid:c384657e-2841-43d4-89d4-ff693e5014b6,Namespace:calico-system,Attempt:1,} returns sandbox id \"6b1861f11e99fd46f4b64b8430e5c5d4756517301af2f8812a66c79840e63bad\"" Mar 7 01:16:59.122056 containerd[1963]: time="2026-03-07T01:16:59.121967954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kdrcv,Uid:9aed3645-1ca9-4273-9dbb-5a5fa746e5c3,Namespace:calico-system,Attempt:1,} returns sandbox id \"c3385c99c44febf3ffc426fd1ea7878299c3b5c3cab2f682cdf1da8809456837\"" Mar 7 01:16:59.680570 containerd[1963]: time="2026-03-07T01:16:59.680509116Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:59.682734 containerd[1963]: time="2026-03-07T01:16:59.682677231Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 7 01:16:59.685309 containerd[1963]: time="2026-03-07T01:16:59.684898808Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:59.689951 containerd[1963]: time="2026-03-07T01:16:59.689897593Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:59.691371 containerd[1963]: time="2026-03-07T01:16:59.691021462Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 5.532310771s" Mar 7 01:16:59.691371 containerd[1963]: time="2026-03-07T01:16:59.691073569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 7 01:16:59.717584 containerd[1963]: time="2026-03-07T01:16:59.716649375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 7 01:16:59.827196 containerd[1963]: time="2026-03-07T01:16:59.827130033Z" level=info msg="CreateContainer within sandbox \"86d6c77346cc857c8437d1808f1c13ce232857bc98e288a50cad35a7a5c1b81e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 7 01:16:59.861885 containerd[1963]: time="2026-03-07T01:16:59.861822178Z" level=info msg="CreateContainer within sandbox \"86d6c77346cc857c8437d1808f1c13ce232857bc98e288a50cad35a7a5c1b81e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"89e405486d2c75225242e66da92b9195f76a9379e970f7af1a40aa78ab704e21\"" Mar 7 01:16:59.864051 containerd[1963]: time="2026-03-07T01:16:59.862736054Z" level=info msg="StartContainer for \"89e405486d2c75225242e66da92b9195f76a9379e970f7af1a40aa78ab704e21\"" Mar 7 01:16:59.907634 systemd[1]: run-containerd-runc-k8s.io-89e405486d2c75225242e66da92b9195f76a9379e970f7af1a40aa78ab704e21-runc.u7gJWJ.mount: Deactivated successfully. Mar 7 01:16:59.923496 systemd[1]: Started cri-containerd-89e405486d2c75225242e66da92b9195f76a9379e970f7af1a40aa78ab704e21.scope - libcontainer container 89e405486d2c75225242e66da92b9195f76a9379e970f7af1a40aa78ab704e21. 
Mar 7 01:16:59.951239 systemd-networkd[1862]: cali03104dcc0e6: Gained IPv6LL Mar 7 01:17:00.011895 containerd[1963]: time="2026-03-07T01:17:00.011729750Z" level=info msg="StartContainer for \"89e405486d2c75225242e66da92b9195f76a9379e970f7af1a40aa78ab704e21\" returns successfully" Mar 7 01:17:00.147748 systemd-networkd[1862]: calid98f3a30368: Gained IPv6LL Mar 7 01:17:00.689221 kubelet[3191]: I0307 01:17:00.689133 3191 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-b98f9b6fc-npvst" podStartSLOduration=40.126614141 podStartE2EDuration="45.674333006s" podCreationTimestamp="2026-03-07 01:16:15 +0000 UTC" firstStartedPulling="2026-03-07 01:16:54.158008448 +0000 UTC m=+63.859074783" lastFinishedPulling="2026-03-07 01:16:59.705727298 +0000 UTC m=+69.406793648" observedRunningTime="2026-03-07 01:17:00.401867961 +0000 UTC m=+70.102934308" watchObservedRunningTime="2026-03-07 01:17:00.674333006 +0000 UTC m=+70.375399355" Mar 7 01:17:00.877447 systemd[1]: run-containerd-runc-k8s.io-89e405486d2c75225242e66da92b9195f76a9379e970f7af1a40aa78ab704e21-runc.uEMAWb.mount: Deactivated successfully. Mar 7 01:17:01.600562 systemd[1]: Started sshd@9-172.31.20.242:22-68.220.241.50:59946.service - OpenSSH per-connection server daemon (68.220.241.50:59946). 
Mar 7 01:17:02.336084 ntpd[1950]: Listen normally on 10 cali3b4c16be479 [fe80::ecee:eeff:feee:eeee%8]:123 Mar 7 01:17:02.336387 ntpd[1950]: Listen normally on 11 cali7b9fe9696ae [fe80::ecee:eeff:feee:eeee%9]:123 Mar 7 01:17:02.336827 ntpd[1950]: 7 Mar 01:17:02 ntpd[1950]: Listen normally on 10 cali3b4c16be479 [fe80::ecee:eeff:feee:eeee%8]:123 Mar 7 01:17:02.336827 ntpd[1950]: 7 Mar 01:17:02 ntpd[1950]: Listen normally on 11 cali7b9fe9696ae [fe80::ecee:eeff:feee:eeee%9]:123 Mar 7 01:17:02.336827 ntpd[1950]: 7 Mar 01:17:02 ntpd[1950]: Listen normally on 12 cali309b23fb9a5 [fe80::ecee:eeff:feee:eeee%10]:123 Mar 7 01:17:02.336827 ntpd[1950]: 7 Mar 01:17:02 ntpd[1950]: Listen normally on 13 cali1c06797789c [fe80::ecee:eeff:feee:eeee%11]:123 Mar 7 01:17:02.336827 ntpd[1950]: 7 Mar 01:17:02 ntpd[1950]: Listen normally on 14 calid39b80d6526 [fe80::ecee:eeff:feee:eeee%12]:123 Mar 7 01:17:02.336827 ntpd[1950]: 7 Mar 01:17:02 ntpd[1950]: Listen normally on 15 cali03104dcc0e6 [fe80::ecee:eeff:feee:eeee%13]:123 Mar 7 01:17:02.336827 ntpd[1950]: 7 Mar 01:17:02 ntpd[1950]: Listen normally on 16 calid98f3a30368 [fe80::ecee:eeff:feee:eeee%14]:123 Mar 7 01:17:02.336452 ntpd[1950]: Listen normally on 12 cali309b23fb9a5 [fe80::ecee:eeff:feee:eeee%10]:123 Mar 7 01:17:02.336499 ntpd[1950]: Listen normally on 13 cali1c06797789c [fe80::ecee:eeff:feee:eeee%11]:123 Mar 7 01:17:02.336585 ntpd[1950]: Listen normally on 14 calid39b80d6526 [fe80::ecee:eeff:feee:eeee%12]:123 Mar 7 01:17:02.336627 ntpd[1950]: Listen normally on 15 cali03104dcc0e6 [fe80::ecee:eeff:feee:eeee%13]:123 Mar 7 01:17:02.336665 ntpd[1950]: Listen normally on 16 calid98f3a30368 [fe80::ecee:eeff:feee:eeee%14]:123 Mar 7 01:17:02.537601 sshd[5970]: Accepted publickey for core from 68.220.241.50 port 59946 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:17:02.544116 sshd[5970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:17:02.557886 systemd-logind[1957]: New 
session 10 of user core. Mar 7 01:17:02.568349 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 7 01:17:03.474236 sshd[5970]: pam_unix(sshd:session): session closed for user core Mar 7 01:17:03.478897 systemd-logind[1957]: Session 10 logged out. Waiting for processes to exit. Mar 7 01:17:03.480186 systemd[1]: sshd@9-172.31.20.242:22-68.220.241.50:59946.service: Deactivated successfully. Mar 7 01:17:03.482571 systemd[1]: session-10.scope: Deactivated successfully. Mar 7 01:17:03.486372 systemd-logind[1957]: Removed session 10. Mar 7 01:17:04.716714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3022239973.mount: Deactivated successfully. Mar 7 01:17:05.482414 containerd[1963]: time="2026-03-07T01:17:05.482355614Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:05.484377 containerd[1963]: time="2026-03-07T01:17:05.484274167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 7 01:17:05.486659 containerd[1963]: time="2026-03-07T01:17:05.486585121Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:05.490787 containerd[1963]: time="2026-03-07T01:17:05.490731772Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:05.528838 containerd[1963]: time="2026-03-07T01:17:05.528662678Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 5.811962974s" Mar 7 01:17:05.528838 containerd[1963]: time="2026-03-07T01:17:05.528719193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 7 01:17:05.569486 containerd[1963]: time="2026-03-07T01:17:05.569232083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 01:17:05.608889 containerd[1963]: time="2026-03-07T01:17:05.608788335Z" level=info msg="CreateContainer within sandbox \"09b646c205deab23ebff6a051a1dbb0ef7701dd3ee8e58e2e621d08384142b4e\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 7 01:17:05.760283 containerd[1963]: time="2026-03-07T01:17:05.759221458Z" level=info msg="CreateContainer within sandbox \"09b646c205deab23ebff6a051a1dbb0ef7701dd3ee8e58e2e621d08384142b4e\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"006b6d650d0a339326050cb67bfcb838b86ab598f7c487ae8fc9ce9cc8a07284\"" Mar 7 01:17:05.763280 containerd[1963]: time="2026-03-07T01:17:05.763239705Z" level=info msg="StartContainer for \"006b6d650d0a339326050cb67bfcb838b86ab598f7c487ae8fc9ce9cc8a07284\"" Mar 7 01:17:05.845262 systemd[1]: Started cri-containerd-006b6d650d0a339326050cb67bfcb838b86ab598f7c487ae8fc9ce9cc8a07284.scope - libcontainer container 006b6d650d0a339326050cb67bfcb838b86ab598f7c487ae8fc9ce9cc8a07284. 
Mar 7 01:17:05.947413 containerd[1963]: time="2026-03-07T01:17:05.946965765Z" level=info msg="StartContainer for \"006b6d650d0a339326050cb67bfcb838b86ab598f7c487ae8fc9ce9cc8a07284\" returns successfully" Mar 7 01:17:06.976448 kubelet[3191]: I0307 01:17:06.946863 3191 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-9r4d8" podStartSLOduration=41.603587011 podStartE2EDuration="52.919048672s" podCreationTimestamp="2026-03-07 01:16:14 +0000 UTC" firstStartedPulling="2026-03-07 01:16:54.231693171 +0000 UTC m=+63.932759505" lastFinishedPulling="2026-03-07 01:17:05.547154825 +0000 UTC m=+75.248221166" observedRunningTime="2026-03-07 01:17:06.787308166 +0000 UTC m=+76.488374516" watchObservedRunningTime="2026-03-07 01:17:06.919048672 +0000 UTC m=+76.620115020" Mar 7 01:17:07.731501 systemd[1]: run-containerd-runc-k8s.io-006b6d650d0a339326050cb67bfcb838b86ab598f7c487ae8fc9ce9cc8a07284-runc.PZcS9s.mount: Deactivated successfully. Mar 7 01:17:08.572861 systemd[1]: Started sshd@10-172.31.20.242:22-68.220.241.50:43876.service - OpenSSH per-connection server daemon (68.220.241.50:43876). 
Mar 7 01:17:09.184757 sshd[6115]: Accepted publickey for core from 68.220.241.50 port 43876 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:17:09.186217 containerd[1963]: time="2026-03-07T01:17:09.185118564Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:09.188935 containerd[1963]: time="2026-03-07T01:17:09.188039657Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 7 01:17:09.188387 sshd[6115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:17:09.190423 containerd[1963]: time="2026-03-07T01:17:09.190364264Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:09.197881 containerd[1963]: time="2026-03-07T01:17:09.197449665Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:09.201078 containerd[1963]: time="2026-03-07T01:17:09.201031784Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.631736033s" Mar 7 01:17:09.201876 containerd[1963]: time="2026-03-07T01:17:09.201195225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 7 01:17:09.206549 systemd-logind[1957]: New session 11 of user core. 
Mar 7 01:17:09.210194 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 7 01:17:09.223097 containerd[1963]: time="2026-03-07T01:17:09.222881248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 01:17:09.320344 containerd[1963]: time="2026-03-07T01:17:09.320293066Z" level=info msg="CreateContainer within sandbox \"7daf763ddb0af10ef196aada98f31d1e2882a7be667f08bbe6c24a24b6941921\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 01:17:09.361548 containerd[1963]: time="2026-03-07T01:17:09.361487785Z" level=info msg="CreateContainer within sandbox \"7daf763ddb0af10ef196aada98f31d1e2882a7be667f08bbe6c24a24b6941921\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"269c688fd8aa46ff189671d04101163e166a34ba21ee931c925d36d999caa3b3\"" Mar 7 01:17:09.366134 containerd[1963]: time="2026-03-07T01:17:09.365950772Z" level=info msg="StartContainer for \"269c688fd8aa46ff189671d04101163e166a34ba21ee931c925d36d999caa3b3\"" Mar 7 01:17:09.535942 systemd[1]: Started cri-containerd-269c688fd8aa46ff189671d04101163e166a34ba21ee931c925d36d999caa3b3.scope - libcontainer container 269c688fd8aa46ff189671d04101163e166a34ba21ee931c925d36d999caa3b3. 
Mar 7 01:17:09.575549 containerd[1963]: time="2026-03-07T01:17:09.575050169Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:09.579010 containerd[1963]: time="2026-03-07T01:17:09.578538839Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 7 01:17:09.583068 containerd[1963]: time="2026-03-07T01:17:09.583024382Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 360.048162ms" Mar 7 01:17:09.583272 containerd[1963]: time="2026-03-07T01:17:09.583248819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 7 01:17:09.651195 containerd[1963]: time="2026-03-07T01:17:09.651146849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 7 01:17:09.672135 containerd[1963]: time="2026-03-07T01:17:09.672091800Z" level=info msg="StartContainer for \"269c688fd8aa46ff189671d04101163e166a34ba21ee931c925d36d999caa3b3\" returns successfully" Mar 7 01:17:09.674620 containerd[1963]: time="2026-03-07T01:17:09.674395391Z" level=info msg="CreateContainer within sandbox \"6b1861f11e99fd46f4b64b8430e5c5d4756517301af2f8812a66c79840e63bad\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 01:17:09.715350 containerd[1963]: time="2026-03-07T01:17:09.715193478Z" level=info msg="CreateContainer within sandbox \"6b1861f11e99fd46f4b64b8430e5c5d4756517301af2f8812a66c79840e63bad\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"72629e03bcb703581ce3168f002060832270129451247bdc7c79dd997c481fdd\"" Mar 7 01:17:09.738340 containerd[1963]: time="2026-03-07T01:17:09.735043988Z" level=info msg="StartContainer for \"72629e03bcb703581ce3168f002060832270129451247bdc7c79dd997c481fdd\"" Mar 7 01:17:09.782275 kubelet[3191]: I0307 01:17:09.771569 3191 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-64c7867f4-xrvts" podStartSLOduration=42.512494501 podStartE2EDuration="56.771539081s" podCreationTimestamp="2026-03-07 01:16:13 +0000 UTC" firstStartedPulling="2026-03-07 01:16:54.975357594 +0000 UTC m=+64.676423919" lastFinishedPulling="2026-03-07 01:17:09.234402136 +0000 UTC m=+78.935468499" observedRunningTime="2026-03-07 01:17:09.769499271 +0000 UTC m=+79.470565618" watchObservedRunningTime="2026-03-07 01:17:09.771539081 +0000 UTC m=+79.472605429" Mar 7 01:17:09.829052 systemd[1]: Started cri-containerd-72629e03bcb703581ce3168f002060832270129451247bdc7c79dd997c481fdd.scope - libcontainer container 72629e03bcb703581ce3168f002060832270129451247bdc7c79dd997c481fdd. Mar 7 01:17:09.977222 containerd[1963]: time="2026-03-07T01:17:09.976393686Z" level=info msg="StartContainer for \"72629e03bcb703581ce3168f002060832270129451247bdc7c79dd997c481fdd\" returns successfully" Mar 7 01:17:10.813552 kubelet[3191]: I0307 01:17:10.773836 3191 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:17:10.853800 sshd[6115]: pam_unix(sshd:session): session closed for user core Mar 7 01:17:10.865933 systemd[1]: sshd@10-172.31.20.242:22-68.220.241.50:43876.service: Deactivated successfully. Mar 7 01:17:10.869015 systemd-logind[1957]: Session 11 logged out. Waiting for processes to exit. Mar 7 01:17:10.878955 systemd[1]: session-11.scope: Deactivated successfully. Mar 7 01:17:10.883011 systemd-logind[1957]: Removed session 11. 
Mar 7 01:17:10.950404 systemd[1]: Started sshd@11-172.31.20.242:22-68.220.241.50:43878.service - OpenSSH per-connection server daemon (68.220.241.50:43878). Mar 7 01:17:11.519113 sshd[6222]: Accepted publickey for core from 68.220.241.50 port 43878 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:17:11.522621 sshd[6222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:17:11.533571 systemd-logind[1957]: New session 12 of user core. Mar 7 01:17:11.539228 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 7 01:17:11.664155 containerd[1963]: time="2026-03-07T01:17:11.664103146Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:11.666052 containerd[1963]: time="2026-03-07T01:17:11.665999231Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 7 01:17:11.670286 containerd[1963]: time="2026-03-07T01:17:11.670242316Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:11.676270 containerd[1963]: time="2026-03-07T01:17:11.676221191Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:11.678497 containerd[1963]: time="2026-03-07T01:17:11.678426695Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 2.027218731s" Mar 7 01:17:11.678497 containerd[1963]: 
time="2026-03-07T01:17:11.678478201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 7 01:17:11.691289 containerd[1963]: time="2026-03-07T01:17:11.691242954Z" level=info msg="CreateContainer within sandbox \"c3385c99c44febf3ffc426fd1ea7878299c3b5c3cab2f682cdf1da8809456837\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 7 01:17:11.751801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2725465452.mount: Deactivated successfully. Mar 7 01:17:11.777863 containerd[1963]: time="2026-03-07T01:17:11.777103598Z" level=info msg="CreateContainer within sandbox \"c3385c99c44febf3ffc426fd1ea7878299c3b5c3cab2f682cdf1da8809456837\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"744ca1a6201fbbfb8b49d4dbf98926b9cd78ae220bebd1d3f57e764ad5e7ad94\"" Mar 7 01:17:11.779823 containerd[1963]: time="2026-03-07T01:17:11.778036064Z" level=info msg="StartContainer for \"744ca1a6201fbbfb8b49d4dbf98926b9cd78ae220bebd1d3f57e764ad5e7ad94\"" Mar 7 01:17:11.886536 systemd[1]: Started cri-containerd-744ca1a6201fbbfb8b49d4dbf98926b9cd78ae220bebd1d3f57e764ad5e7ad94.scope - libcontainer container 744ca1a6201fbbfb8b49d4dbf98926b9cd78ae220bebd1d3f57e764ad5e7ad94. 
Mar 7 01:17:12.004646 containerd[1963]: time="2026-03-07T01:17:12.004580570Z" level=info msg="StartContainer for \"744ca1a6201fbbfb8b49d4dbf98926b9cd78ae220bebd1d3f57e764ad5e7ad94\" returns successfully" Mar 7 01:17:12.037425 containerd[1963]: time="2026-03-07T01:17:12.037303151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 7 01:17:12.281102 kubelet[3191]: I0307 01:17:12.279536 3191 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-64c7867f4-fd5dm" podStartSLOduration=48.718084472 podStartE2EDuration="59.279507671s" podCreationTimestamp="2026-03-07 01:16:13 +0000 UTC" firstStartedPulling="2026-03-07 01:16:59.078731627 +0000 UTC m=+68.779797970" lastFinishedPulling="2026-03-07 01:17:09.640154829 +0000 UTC m=+79.341221169" observedRunningTime="2026-03-07 01:17:10.886656301 +0000 UTC m=+80.587722649" watchObservedRunningTime="2026-03-07 01:17:12.279507671 +0000 UTC m=+81.980574021" Mar 7 01:17:12.670879 sshd[6222]: pam_unix(sshd:session): session closed for user core Mar 7 01:17:12.693585 systemd[1]: sshd@11-172.31.20.242:22-68.220.241.50:43878.service: Deactivated successfully. Mar 7 01:17:12.695419 systemd-logind[1957]: Session 12 logged out. Waiting for processes to exit. Mar 7 01:17:12.703408 systemd[1]: session-12.scope: Deactivated successfully. Mar 7 01:17:12.706966 systemd-logind[1957]: Removed session 12. Mar 7 01:17:12.762179 systemd[1]: Started sshd@12-172.31.20.242:22-68.220.241.50:47582.service - OpenSSH per-connection server daemon (68.220.241.50:47582). Mar 7 01:17:13.331389 sshd[6277]: Accepted publickey for core from 68.220.241.50 port 47582 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:17:13.333958 sshd[6277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:17:13.341038 systemd-logind[1957]: New session 13 of user core. 
Mar 7 01:17:13.346265 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 7 01:17:13.912433 sshd[6277]: pam_unix(sshd:session): session closed for user core Mar 7 01:17:13.917742 systemd[1]: sshd@12-172.31.20.242:22-68.220.241.50:47582.service: Deactivated successfully. Mar 7 01:17:13.920852 systemd[1]: session-13.scope: Deactivated successfully. Mar 7 01:17:13.921969 systemd-logind[1957]: Session 13 logged out. Waiting for processes to exit. Mar 7 01:17:13.924006 systemd-logind[1957]: Removed session 13. Mar 7 01:17:14.919204 containerd[1963]: time="2026-03-07T01:17:14.919141027Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:14.921686 containerd[1963]: time="2026-03-07T01:17:14.921602568Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 7 01:17:14.924452 containerd[1963]: time="2026-03-07T01:17:14.924348062Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:14.932458 containerd[1963]: time="2026-03-07T01:17:14.932377923Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:14.933737 containerd[1963]: time="2026-03-07T01:17:14.933467017Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 
2.896111053s" Mar 7 01:17:14.933737 containerd[1963]: time="2026-03-07T01:17:14.933611911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 7 01:17:15.192627 containerd[1963]: time="2026-03-07T01:17:15.191685493Z" level=info msg="CreateContainer within sandbox \"c3385c99c44febf3ffc426fd1ea7878299c3b5c3cab2f682cdf1da8809456837\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 7 01:17:15.410467 containerd[1963]: time="2026-03-07T01:17:15.409756452Z" level=info msg="CreateContainer within sandbox \"c3385c99c44febf3ffc426fd1ea7878299c3b5c3cab2f682cdf1da8809456837\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1bb85fc165458fc0d9d7840c3e618c41e459eead6a802a9d318b9c922538edc2\"" Mar 7 01:17:15.411061 containerd[1963]: time="2026-03-07T01:17:15.410959551Z" level=info msg="StartContainer for \"1bb85fc165458fc0d9d7840c3e618c41e459eead6a802a9d318b9c922538edc2\"" Mar 7 01:17:15.994273 systemd[1]: Started cri-containerd-1bb85fc165458fc0d9d7840c3e618c41e459eead6a802a9d318b9c922538edc2.scope - libcontainer container 1bb85fc165458fc0d9d7840c3e618c41e459eead6a802a9d318b9c922538edc2. 
Mar 7 01:17:16.091584 containerd[1963]: time="2026-03-07T01:17:16.091385060Z" level=info msg="StartContainer for \"1bb85fc165458fc0d9d7840c3e618c41e459eead6a802a9d318b9c922538edc2\" returns successfully" Mar 7 01:17:16.594177 update_engine[1958]: I20260307 01:17:16.593900 1958 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Mar 7 01:17:16.594177 update_engine[1958]: I20260307 01:17:16.594022 1958 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Mar 7 01:17:16.604063 update_engine[1958]: I20260307 01:17:16.604005 1958 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Mar 7 01:17:16.606410 update_engine[1958]: I20260307 01:17:16.606349 1958 omaha_request_params.cc:62] Current group set to lts Mar 7 01:17:16.607276 update_engine[1958]: I20260307 01:17:16.606559 1958 update_attempter.cc:499] Already updated boot flags. Skipping. Mar 7 01:17:16.607276 update_engine[1958]: I20260307 01:17:16.606578 1958 update_attempter.cc:643] Scheduling an action processor start. 
Mar 7 01:17:16.607276 update_engine[1958]: I20260307 01:17:16.606604 1958 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 7 01:17:16.607276 update_engine[1958]: I20260307 01:17:16.606658 1958 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Mar 7 01:17:16.607276 update_engine[1958]: I20260307 01:17:16.606747 1958 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 7 01:17:16.607276 update_engine[1958]: I20260307 01:17:16.606756 1958 omaha_request_action.cc:272] Request: Mar 7 01:17:16.607276 update_engine[1958]: Mar 7 01:17:16.607276 update_engine[1958]: Mar 7 01:17:16.607276 update_engine[1958]: Mar 7 01:17:16.607276 update_engine[1958]: Mar 7 01:17:16.607276 update_engine[1958]: Mar 7 01:17:16.607276 update_engine[1958]: Mar 7 01:17:16.607276 update_engine[1958]: Mar 7 01:17:16.607276 update_engine[1958]: Mar 7 01:17:16.607276 update_engine[1958]: I20260307 01:17:16.606762 1958 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 01:17:16.642688 locksmithd[1996]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Mar 7 01:17:16.647086 update_engine[1958]: I20260307 01:17:16.647037 1958 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 01:17:16.647547 update_engine[1958]: I20260307 01:17:16.647409 1958 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 7 01:17:16.672942 update_engine[1958]: E20260307 01:17:16.672868 1958 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 01:17:16.673121 update_engine[1958]: I20260307 01:17:16.673044 1958 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Mar 7 01:17:17.218141 kubelet[3191]: I0307 01:17:17.215049 3191 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 7 01:17:17.228794 kubelet[3191]: I0307 01:17:17.228728 3191 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 7 01:17:19.041416 systemd[1]: Started sshd@13-172.31.20.242:22-68.220.241.50:47594.service - OpenSSH per-connection server daemon (68.220.241.50:47594). Mar 7 01:17:19.642686 sshd[6396]: Accepted publickey for core from 68.220.241.50 port 47594 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:17:19.646741 sshd[6396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:17:19.655075 systemd-logind[1957]: New session 14 of user core. Mar 7 01:17:19.657127 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 7 01:17:20.930879 sshd[6396]: pam_unix(sshd:session): session closed for user core Mar 7 01:17:20.936372 systemd[1]: sshd@13-172.31.20.242:22-68.220.241.50:47594.service: Deactivated successfully. Mar 7 01:17:20.939707 systemd[1]: session-14.scope: Deactivated successfully. Mar 7 01:17:20.940920 systemd-logind[1957]: Session 14 logged out. Waiting for processes to exit. Mar 7 01:17:20.942355 systemd-logind[1957]: Removed session 14. Mar 7 01:17:21.023498 systemd[1]: Started sshd@14-172.31.20.242:22-68.220.241.50:47596.service - OpenSSH per-connection server daemon (68.220.241.50:47596). 
Mar 7 01:17:21.543182 sshd[6416]: Accepted publickey for core from 68.220.241.50 port 47596 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:17:21.545183 sshd[6416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:17:21.555629 systemd-logind[1957]: New session 15 of user core. Mar 7 01:17:21.559207 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 7 01:17:22.492159 sshd[6416]: pam_unix(sshd:session): session closed for user core Mar 7 01:17:22.498842 systemd[1]: sshd@14-172.31.20.242:22-68.220.241.50:47596.service: Deactivated successfully. Mar 7 01:17:22.502385 systemd[1]: session-15.scope: Deactivated successfully. Mar 7 01:17:22.504300 systemd-logind[1957]: Session 15 logged out. Waiting for processes to exit. Mar 7 01:17:22.506461 systemd-logind[1957]: Removed session 15. Mar 7 01:17:22.585440 systemd[1]: Started sshd@15-172.31.20.242:22-68.220.241.50:48902.service - OpenSSH per-connection server daemon (68.220.241.50:48902). Mar 7 01:17:23.138399 sshd[6427]: Accepted publickey for core from 68.220.241.50 port 48902 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:17:23.140272 sshd[6427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:17:23.146252 systemd-logind[1957]: New session 16 of user core. Mar 7 01:17:23.150247 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 7 01:17:24.374191 sshd[6427]: pam_unix(sshd:session): session closed for user core Mar 7 01:17:24.389685 systemd[1]: sshd@15-172.31.20.242:22-68.220.241.50:48902.service: Deactivated successfully. Mar 7 01:17:24.399763 systemd[1]: session-16.scope: Deactivated successfully. Mar 7 01:17:24.402928 systemd-logind[1957]: Session 16 logged out. Waiting for processes to exit. Mar 7 01:17:24.405194 systemd-logind[1957]: Removed session 16. 
Mar 7 01:17:24.464420 systemd[1]: Started sshd@16-172.31.20.242:22-68.220.241.50:48910.service - OpenSSH per-connection server daemon (68.220.241.50:48910).
Mar 7 01:17:25.014529 sshd[6453]: Accepted publickey for core from 68.220.241.50 port 48910 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:17:25.017881 sshd[6453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:17:25.025945 systemd-logind[1957]: New session 17 of user core.
Mar 7 01:17:25.029318 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 7 01:17:26.493008 update_engine[1958]: I20260307 01:17:26.492044  1958 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 7 01:17:26.498459 update_engine[1958]: I20260307 01:17:26.498037  1958 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 7 01:17:26.498459 update_engine[1958]: I20260307 01:17:26.498396  1958 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 7 01:17:26.499325 update_engine[1958]: E20260307 01:17:26.498875  1958 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 7 01:17:26.499325 update_engine[1958]: I20260307 01:17:26.498944  1958 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Mar 7 01:17:26.788213 sshd[6453]: pam_unix(sshd:session): session closed for user core
Mar 7 01:17:26.795480 systemd[1]: sshd@16-172.31.20.242:22-68.220.241.50:48910.service: Deactivated successfully.
Mar 7 01:17:26.801531 systemd[1]: session-17.scope: Deactivated successfully.
Mar 7 01:17:26.804033 systemd-logind[1957]: Session 17 logged out. Waiting for processes to exit.
Mar 7 01:17:26.807872 systemd-logind[1957]: Removed session 17.
Mar 7 01:17:26.884281 systemd[1]: Started sshd@17-172.31.20.242:22-68.220.241.50:48916.service - OpenSSH per-connection server daemon (68.220.241.50:48916).
Mar 7 01:17:27.473651 sshd[6493]: Accepted publickey for core from 68.220.241.50 port 48916 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:17:27.475703 sshd[6493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:17:27.482047 systemd-logind[1957]: New session 18 of user core.
Mar 7 01:17:27.492355 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 7 01:17:27.984294 sshd[6493]: pam_unix(sshd:session): session closed for user core
Mar 7 01:17:27.988701 systemd-logind[1957]: Session 18 logged out. Waiting for processes to exit.
Mar 7 01:17:27.989426 systemd[1]: sshd@17-172.31.20.242:22-68.220.241.50:48916.service: Deactivated successfully.
Mar 7 01:17:27.992215 systemd[1]: session-18.scope: Deactivated successfully.
Mar 7 01:17:27.994070 systemd-logind[1957]: Removed session 18.
Mar 7 01:17:30.378569 systemd[1]: run-containerd-runc-k8s.io-89e405486d2c75225242e66da92b9195f76a9379e970f7af1a40aa78ab704e21-runc.LC3W92.mount: Deactivated successfully.
Mar 7 01:17:33.077472 systemd[1]: Started sshd@18-172.31.20.242:22-68.220.241.50:42476.service - OpenSSH per-connection server daemon (68.220.241.50:42476).
Mar 7 01:17:33.631031 sshd[6529]: Accepted publickey for core from 68.220.241.50 port 42476 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:17:33.635125 sshd[6529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:17:33.642809 systemd-logind[1957]: New session 19 of user core.
Mar 7 01:17:33.648287 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 7 01:17:34.154510 sshd[6529]: pam_unix(sshd:session): session closed for user core
Mar 7 01:17:34.160958 systemd[1]: sshd@18-172.31.20.242:22-68.220.241.50:42476.service: Deactivated successfully.
Mar 7 01:17:34.163471 systemd[1]: session-19.scope: Deactivated successfully.
Mar 7 01:17:34.164697 systemd-logind[1957]: Session 19 logged out. Waiting for processes to exit.
Mar 7 01:17:34.166845 systemd-logind[1957]: Removed session 19.
Mar 7 01:17:36.493116 update_engine[1958]: I20260307 01:17:36.493034  1958 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 7 01:17:36.493634 update_engine[1958]: I20260307 01:17:36.493342  1958 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 7 01:17:36.493634 update_engine[1958]: I20260307 01:17:36.493610  1958 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 7 01:17:36.496670 update_engine[1958]: E20260307 01:17:36.496531  1958 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 7 01:17:36.496670 update_engine[1958]: I20260307 01:17:36.496636  1958 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Mar 7 01:17:38.229651 kubelet[3191]: I0307 01:17:38.214366    3191 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-kdrcv" podStartSLOduration=68.37637526 podStartE2EDuration="1m24.181996529s" podCreationTimestamp="2026-03-07 01:16:14 +0000 UTC" firstStartedPulling="2026-03-07 01:16:59.12925792 +0000 UTC m=+68.830324263" lastFinishedPulling="2026-03-07 01:17:14.934879193 +0000 UTC m=+84.635945532" observedRunningTime="2026-03-07 01:17:17.202031926 +0000 UTC m=+86.903098275" watchObservedRunningTime="2026-03-07 01:17:38.181996529 +0000 UTC m=+107.883062874"
Mar 7 01:17:39.248683 systemd[1]: Started sshd@19-172.31.20.242:22-68.220.241.50:42484.service - OpenSSH per-connection server daemon (68.220.241.50:42484).
Mar 7 01:17:39.298401 kubelet[3191]: I0307 01:17:39.297752    3191 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 7 01:17:39.858820 sshd[6566]: Accepted publickey for core from 68.220.241.50 port 42484 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:17:39.861859 sshd[6566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:17:39.868565 systemd-logind[1957]: New session 20 of user core.
Mar 7 01:17:39.872194 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 7 01:17:40.724245 sshd[6566]: pam_unix(sshd:session): session closed for user core
Mar 7 01:17:40.729064 systemd[1]: sshd@19-172.31.20.242:22-68.220.241.50:42484.service: Deactivated successfully.
Mar 7 01:17:40.732736 systemd[1]: session-20.scope: Deactivated successfully.
Mar 7 01:17:40.733738 systemd-logind[1957]: Session 20 logged out. Waiting for processes to exit.
Mar 7 01:17:40.735606 systemd-logind[1957]: Removed session 20.
Mar 7 01:17:45.839368 systemd[1]: Started sshd@20-172.31.20.242:22-68.220.241.50:54358.service - OpenSSH per-connection server daemon (68.220.241.50:54358).
Mar 7 01:17:46.438739 sshd[6610]: Accepted publickey for core from 68.220.241.50 port 54358 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:17:46.441702 sshd[6610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:17:46.452667 systemd-logind[1957]: New session 21 of user core.
Mar 7 01:17:46.456240 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 7 01:17:46.491448 update_engine[1958]: I20260307 01:17:46.491367  1958 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 7 01:17:46.491904 update_engine[1958]: I20260307 01:17:46.491677  1958 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 7 01:17:46.492129 update_engine[1958]: I20260307 01:17:46.491940  1958 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 7 01:17:46.492412 update_engine[1958]: E20260307 01:17:46.492378  1958 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 7 01:17:46.492494 update_engine[1958]: I20260307 01:17:46.492442  1958 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 7 01:17:46.492494 update_engine[1958]: I20260307 01:17:46.492456  1958 omaha_request_action.cc:617] Omaha request response:
Mar 7 01:17:46.499177 update_engine[1958]: E20260307 01:17:46.498485  1958 omaha_request_action.cc:636] Omaha request network transfer failed.
Mar 7 01:17:46.579018 update_engine[1958]: I20260307 01:17:46.572259  1958 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Mar 7 01:17:46.579018 update_engine[1958]: I20260307 01:17:46.578076  1958 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 7 01:17:46.579018 update_engine[1958]: I20260307 01:17:46.578105  1958 update_attempter.cc:306] Processing Done.
Mar 7 01:17:46.579018 update_engine[1958]: E20260307 01:17:46.578557  1958 update_attempter.cc:619] Update failed.
Mar 7 01:17:46.579018 update_engine[1958]: I20260307 01:17:46.578583  1958 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Mar 7 01:17:46.579018 update_engine[1958]: I20260307 01:17:46.578593  1958 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Mar 7 01:17:46.579018 update_engine[1958]: I20260307 01:17:46.578605  1958 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Mar 7 01:17:46.579018 update_engine[1958]: I20260307 01:17:46.578763  1958 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 7 01:17:46.581529 update_engine[1958]: I20260307 01:17:46.580912  1958 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 7 01:17:46.582324 update_engine[1958]: I20260307 01:17:46.581699  1958 omaha_request_action.cc:272] Request:
Mar 7 01:17:46.582324 update_engine[1958]:
Mar 7 01:17:46.582324 update_engine[1958]:
Mar 7 01:17:46.582324 update_engine[1958]:
Mar 7 01:17:46.582324 update_engine[1958]:
Mar 7 01:17:46.582324 update_engine[1958]:
Mar 7 01:17:46.582324 update_engine[1958]:
Mar 7 01:17:46.582324 update_engine[1958]: I20260307 01:17:46.581722  1958 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 7 01:17:46.582324 update_engine[1958]: I20260307 01:17:46.581949  1958 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 7 01:17:46.582324 update_engine[1958]: I20260307 01:17:46.582245  1958 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 7 01:17:46.585496 update_engine[1958]: E20260307 01:17:46.583232  1958 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 7 01:17:46.585496 update_engine[1958]: I20260307 01:17:46.584564  1958 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 7 01:17:46.585496 update_engine[1958]: I20260307 01:17:46.584591  1958 omaha_request_action.cc:617] Omaha request response:
Mar 7 01:17:46.585496 update_engine[1958]: I20260307 01:17:46.584602  1958 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 7 01:17:46.585496 update_engine[1958]: I20260307 01:17:46.584610  1958 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 7 01:17:46.585496 update_engine[1958]: I20260307 01:17:46.584618  1958 update_attempter.cc:306] Processing Done.
Mar 7 01:17:46.585496 update_engine[1958]: I20260307 01:17:46.584628  1958 update_attempter.cc:310] Error event sent.
Mar 7 01:17:46.585496 update_engine[1958]: I20260307 01:17:46.584653  1958 update_check_scheduler.cc:74] Next update check in 47m47s
Mar 7 01:17:46.610414 locksmithd[1996]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Mar 7 01:17:46.610414 locksmithd[1996]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Mar 7 01:17:47.519054 sshd[6610]: pam_unix(sshd:session): session closed for user core
Mar 7 01:17:47.525141 systemd[1]: sshd@20-172.31.20.242:22-68.220.241.50:54358.service: Deactivated successfully.
Mar 7 01:17:47.528916 systemd[1]: session-21.scope: Deactivated successfully.
Mar 7 01:17:47.530038 systemd-logind[1957]: Session 21 logged out. Waiting for processes to exit.
Mar 7 01:17:47.531847 systemd-logind[1957]: Removed session 21.
Mar 7 01:17:51.300214 containerd[1963]: time="2026-03-07T01:17:51.269720154Z" level=info msg="StopPodSandbox for \"7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc\""
Mar 7 01:17:52.426704 containerd[1963]: 2026-03-07 01:17:51.889 [WARNING][6640] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--xrvts-eth0", GenerateName:"calico-apiserver-64c7867f4-", Namespace:"calico-system", SelfLink:"", UID:"c16a83f1-f0b9-4cb8-9fd4-5f908f7fd2c5", ResourceVersion:"1396", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64c7867f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-242", ContainerID:"7daf763ddb0af10ef196aada98f31d1e2882a7be667f08bbe6c24a24b6941921", Pod:"calico-apiserver-64c7867f4-xrvts", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali309b23fb9a5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:17:52.426704 containerd[1963]: 2026-03-07 01:17:51.895 [INFO][6640] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc"
Mar 7 01:17:52.426704 containerd[1963]: 2026-03-07 01:17:51.895 [INFO][6640] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc" iface="eth0" netns=""
Mar 7 01:17:52.426704 containerd[1963]: 2026-03-07 01:17:51.895 [INFO][6640] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc"
Mar 7 01:17:52.426704 containerd[1963]: 2026-03-07 01:17:51.895 [INFO][6640] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc"
Mar 7 01:17:52.426704 containerd[1963]: 2026-03-07 01:17:52.384 [INFO][6647] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc" HandleID="k8s-pod-network.7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc" Workload="ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--xrvts-eth0"
Mar 7 01:17:52.426704 containerd[1963]: 2026-03-07 01:17:52.389 [INFO][6647] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:17:52.426704 containerd[1963]: 2026-03-07 01:17:52.391 [INFO][6647] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:17:52.426704 containerd[1963]: 2026-03-07 01:17:52.408 [WARNING][6647] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc" HandleID="k8s-pod-network.7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc" Workload="ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--xrvts-eth0"
Mar 7 01:17:52.426704 containerd[1963]: 2026-03-07 01:17:52.409 [INFO][6647] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc" HandleID="k8s-pod-network.7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc" Workload="ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--xrvts-eth0"
Mar 7 01:17:52.426704 containerd[1963]: 2026-03-07 01:17:52.412 [INFO][6647] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:17:52.426704 containerd[1963]: 2026-03-07 01:17:52.421 [INFO][6640] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc"
Mar 7 01:17:52.432174 containerd[1963]: time="2026-03-07T01:17:52.427029713Z" level=info msg="TearDown network for sandbox \"7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc\" successfully"
Mar 7 01:17:52.432174 containerd[1963]: time="2026-03-07T01:17:52.427070401Z" level=info msg="StopPodSandbox for \"7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc\" returns successfully"
Mar 7 01:17:52.450564 containerd[1963]: time="2026-03-07T01:17:52.450439047Z" level=info msg="RemovePodSandbox for \"7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc\""
Mar 7 01:17:52.482451 containerd[1963]: time="2026-03-07T01:17:52.482009968Z" level=info msg="Forcibly stopping sandbox \"7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc\""
Mar 7 01:17:52.625017 systemd[1]: Started sshd@21-172.31.20.242:22-68.220.241.50:45294.service - OpenSSH per-connection server daemon (68.220.241.50:45294).
Mar 7 01:17:52.800619 containerd[1963]: 2026-03-07 01:17:52.712 [WARNING][6663] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--xrvts-eth0", GenerateName:"calico-apiserver-64c7867f4-", Namespace:"calico-system", SelfLink:"", UID:"c16a83f1-f0b9-4cb8-9fd4-5f908f7fd2c5", ResourceVersion:"1396", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64c7867f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-242", ContainerID:"7daf763ddb0af10ef196aada98f31d1e2882a7be667f08bbe6c24a24b6941921", Pod:"calico-apiserver-64c7867f4-xrvts", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali309b23fb9a5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:17:52.800619 containerd[1963]: 2026-03-07 01:17:52.712 [INFO][6663] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc"
Mar 7 01:17:52.800619 containerd[1963]: 2026-03-07 01:17:52.712 [INFO][6663] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc" iface="eth0" netns=""
Mar 7 01:17:52.800619 containerd[1963]: 2026-03-07 01:17:52.712 [INFO][6663] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc"
Mar 7 01:17:52.800619 containerd[1963]: 2026-03-07 01:17:52.712 [INFO][6663] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc"
Mar 7 01:17:52.800619 containerd[1963]: 2026-03-07 01:17:52.779 [INFO][6671] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc" HandleID="k8s-pod-network.7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc" Workload="ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--xrvts-eth0"
Mar 7 01:17:52.800619 containerd[1963]: 2026-03-07 01:17:52.779 [INFO][6671] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:17:52.800619 containerd[1963]: 2026-03-07 01:17:52.779 [INFO][6671] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:17:52.800619 containerd[1963]: 2026-03-07 01:17:52.787 [WARNING][6671] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc" HandleID="k8s-pod-network.7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc" Workload="ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--xrvts-eth0"
Mar 7 01:17:52.800619 containerd[1963]: 2026-03-07 01:17:52.787 [INFO][6671] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc" HandleID="k8s-pod-network.7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc" Workload="ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--xrvts-eth0"
Mar 7 01:17:52.800619 containerd[1963]: 2026-03-07 01:17:52.789 [INFO][6671] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:17:52.800619 containerd[1963]: 2026-03-07 01:17:52.795 [INFO][6663] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc"
Mar 7 01:17:52.802183 containerd[1963]: time="2026-03-07T01:17:52.800626729Z" level=info msg="TearDown network for sandbox \"7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc\" successfully"
Mar 7 01:17:52.919780 containerd[1963]: time="2026-03-07T01:17:52.919693569Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 7 01:17:52.920070 containerd[1963]: time="2026-03-07T01:17:52.919818937Z" level=info msg="RemovePodSandbox \"7cb5ffb2acc40b04e5c58c38077c1eeb339ea6a0381e3b9610616521449462bc\" returns successfully"
Mar 7 01:17:52.921874 containerd[1963]: time="2026-03-07T01:17:52.921466234Z" level=info msg="StopPodSandbox for \"c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2\""
Mar 7 01:17:53.024101 containerd[1963]: 2026-03-07 01:17:52.975 [WARNING][6686] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--242-k8s-csi--node--driver--kdrcv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9aed3645-1ca9-4273-9dbb-5a5fa746e5c3", ResourceVersion:"1293", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-242", ContainerID:"c3385c99c44febf3ffc426fd1ea7878299c3b5c3cab2f682cdf1da8809456837", Pod:"csi-node-driver-kdrcv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.81.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid98f3a30368", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:17:53.024101 containerd[1963]: 2026-03-07 01:17:52.975 [INFO][6686] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2"
Mar 7 01:17:53.024101 containerd[1963]: 2026-03-07 01:17:52.975 [INFO][6686] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2" iface="eth0" netns=""
Mar 7 01:17:53.024101 containerd[1963]: 2026-03-07 01:17:52.975 [INFO][6686] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2"
Mar 7 01:17:53.024101 containerd[1963]: 2026-03-07 01:17:52.976 [INFO][6686] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2"
Mar 7 01:17:53.024101 containerd[1963]: 2026-03-07 01:17:53.007 [INFO][6693] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2" HandleID="k8s-pod-network.c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2" Workload="ip--172--31--20--242-k8s-csi--node--driver--kdrcv-eth0"
Mar 7 01:17:53.024101 containerd[1963]: 2026-03-07 01:17:53.007 [INFO][6693] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:17:53.024101 containerd[1963]: 2026-03-07 01:17:53.007 [INFO][6693] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:17:53.024101 containerd[1963]: 2026-03-07 01:17:53.015 [WARNING][6693] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2" HandleID="k8s-pod-network.c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2" Workload="ip--172--31--20--242-k8s-csi--node--driver--kdrcv-eth0"
Mar 7 01:17:53.024101 containerd[1963]: 2026-03-07 01:17:53.016 [INFO][6693] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2" HandleID="k8s-pod-network.c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2" Workload="ip--172--31--20--242-k8s-csi--node--driver--kdrcv-eth0"
Mar 7 01:17:53.024101 containerd[1963]: 2026-03-07 01:17:53.018 [INFO][6693] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:17:53.024101 containerd[1963]: 2026-03-07 01:17:53.021 [INFO][6686] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2"
Mar 7 01:17:53.026064 containerd[1963]: time="2026-03-07T01:17:53.024159533Z" level=info msg="TearDown network for sandbox \"c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2\" successfully"
Mar 7 01:17:53.026064 containerd[1963]: time="2026-03-07T01:17:53.024192303Z" level=info msg="StopPodSandbox for \"c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2\" returns successfully"
Mar 7 01:17:53.026064 containerd[1963]: time="2026-03-07T01:17:53.025190513Z" level=info msg="RemovePodSandbox for \"c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2\""
Mar 7 01:17:53.026064 containerd[1963]: time="2026-03-07T01:17:53.025240107Z" level=info msg="Forcibly stopping sandbox \"c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2\""
Mar 7 01:17:53.121719 containerd[1963]: 2026-03-07 01:17:53.071 [WARNING][6707] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--242-k8s-csi--node--driver--kdrcv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9aed3645-1ca9-4273-9dbb-5a5fa746e5c3", ResourceVersion:"1293", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-242", ContainerID:"c3385c99c44febf3ffc426fd1ea7878299c3b5c3cab2f682cdf1da8809456837", Pod:"csi-node-driver-kdrcv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.81.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid98f3a30368", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:17:53.121719 containerd[1963]: 2026-03-07 01:17:53.072 [INFO][6707] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2"
Mar 7 01:17:53.121719 containerd[1963]: 2026-03-07 01:17:53.072 [INFO][6707] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2" iface="eth0" netns=""
Mar 7 01:17:53.121719 containerd[1963]: 2026-03-07 01:17:53.072 [INFO][6707] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2"
Mar 7 01:17:53.121719 containerd[1963]: 2026-03-07 01:17:53.072 [INFO][6707] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2"
Mar 7 01:17:53.121719 containerd[1963]: 2026-03-07 01:17:53.101 [INFO][6715] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2" HandleID="k8s-pod-network.c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2" Workload="ip--172--31--20--242-k8s-csi--node--driver--kdrcv-eth0"
Mar 7 01:17:53.121719 containerd[1963]: 2026-03-07 01:17:53.101 [INFO][6715] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:17:53.121719 containerd[1963]: 2026-03-07 01:17:53.101 [INFO][6715] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:17:53.121719 containerd[1963]: 2026-03-07 01:17:53.113 [WARNING][6715] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2" HandleID="k8s-pod-network.c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2" Workload="ip--172--31--20--242-k8s-csi--node--driver--kdrcv-eth0"
Mar 7 01:17:53.121719 containerd[1963]: 2026-03-07 01:17:53.113 [INFO][6715] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2" HandleID="k8s-pod-network.c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2" Workload="ip--172--31--20--242-k8s-csi--node--driver--kdrcv-eth0"
Mar 7 01:17:53.121719 containerd[1963]: 2026-03-07 01:17:53.116 [INFO][6715] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:17:53.121719 containerd[1963]: 2026-03-07 01:17:53.118 [INFO][6707] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2"
Mar 7 01:17:53.124450 containerd[1963]: time="2026-03-07T01:17:53.121681818Z" level=info msg="TearDown network for sandbox \"c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2\" successfully"
Mar 7 01:17:53.137615 containerd[1963]: time="2026-03-07T01:17:53.137540388Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 7 01:17:53.138620 containerd[1963]: time="2026-03-07T01:17:53.137639690Z" level=info msg="RemovePodSandbox \"c4579e94e69bc4dc5cdf7e360a914e2e9b64dfa9f8f8c25204c9212989b47cb2\" returns successfully" Mar 7 01:17:53.138620 containerd[1963]: time="2026-03-07T01:17:53.138213742Z" level=info msg="StopPodSandbox for \"00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9\"" Mar 7 01:17:53.229758 containerd[1963]: 2026-03-07 01:17:53.181 [WARNING][6729] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--fd5dm-eth0", GenerateName:"calico-apiserver-64c7867f4-", Namespace:"calico-system", SelfLink:"", UID:"c384657e-2841-43d4-89d4-ff693e5014b6", ResourceVersion:"1244", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64c7867f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-242", ContainerID:"6b1861f11e99fd46f4b64b8430e5c5d4756517301af2f8812a66c79840e63bad", Pod:"calico-apiserver-64c7867f4-fd5dm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali03104dcc0e6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:53.229758 containerd[1963]: 2026-03-07 01:17:53.181 [INFO][6729] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9" Mar 7 01:17:53.229758 containerd[1963]: 2026-03-07 01:17:53.181 [INFO][6729] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9" iface="eth0" netns="" Mar 7 01:17:53.229758 containerd[1963]: 2026-03-07 01:17:53.181 [INFO][6729] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9" Mar 7 01:17:53.229758 containerd[1963]: 2026-03-07 01:17:53.181 [INFO][6729] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9" Mar 7 01:17:53.229758 containerd[1963]: 2026-03-07 01:17:53.213 [INFO][6736] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9" HandleID="k8s-pod-network.00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9" Workload="ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--fd5dm-eth0" Mar 7 01:17:53.229758 containerd[1963]: 2026-03-07 01:17:53.213 [INFO][6736] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:53.229758 containerd[1963]: 2026-03-07 01:17:53.213 [INFO][6736] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:53.229758 containerd[1963]: 2026-03-07 01:17:53.222 [WARNING][6736] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9" HandleID="k8s-pod-network.00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9" Workload="ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--fd5dm-eth0" Mar 7 01:17:53.229758 containerd[1963]: 2026-03-07 01:17:53.222 [INFO][6736] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9" HandleID="k8s-pod-network.00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9" Workload="ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--fd5dm-eth0" Mar 7 01:17:53.229758 containerd[1963]: 2026-03-07 01:17:53.224 [INFO][6736] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:53.229758 containerd[1963]: 2026-03-07 01:17:53.227 [INFO][6729] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9" Mar 7 01:17:53.233212 containerd[1963]: time="2026-03-07T01:17:53.229825875Z" level=info msg="TearDown network for sandbox \"00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9\" successfully" Mar 7 01:17:53.233212 containerd[1963]: time="2026-03-07T01:17:53.229910615Z" level=info msg="StopPodSandbox for \"00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9\" returns successfully" Mar 7 01:17:53.233212 containerd[1963]: time="2026-03-07T01:17:53.230868482Z" level=info msg="RemovePodSandbox for \"00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9\"" Mar 7 01:17:53.233212 containerd[1963]: time="2026-03-07T01:17:53.230908266Z" level=info msg="Forcibly stopping sandbox \"00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9\"" Mar 7 01:17:53.291885 sshd[6659]: Accepted publickey for core from 68.220.241.50 port 45294 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:17:53.298344 sshd[6659]: pam_unix(sshd:session): session opened for 
user core(uid=500) by core(uid=0) Mar 7 01:17:53.307841 systemd-logind[1957]: New session 22 of user core. Mar 7 01:17:53.311218 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 7 01:17:53.358020 containerd[1963]: 2026-03-07 01:17:53.297 [WARNING][6751] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--fd5dm-eth0", GenerateName:"calico-apiserver-64c7867f4-", Namespace:"calico-system", SelfLink:"", UID:"c384657e-2841-43d4-89d4-ff693e5014b6", ResourceVersion:"1244", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64c7867f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-242", ContainerID:"6b1861f11e99fd46f4b64b8430e5c5d4756517301af2f8812a66c79840e63bad", Pod:"calico-apiserver-64c7867f4-fd5dm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali03104dcc0e6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:53.358020 containerd[1963]: 2026-03-07 01:17:53.297 [INFO][6751] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9" Mar 7 01:17:53.358020 containerd[1963]: 2026-03-07 01:17:53.297 [INFO][6751] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9" iface="eth0" netns="" Mar 7 01:17:53.358020 containerd[1963]: 2026-03-07 01:17:53.298 [INFO][6751] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9" Mar 7 01:17:53.358020 containerd[1963]: 2026-03-07 01:17:53.298 [INFO][6751] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9" Mar 7 01:17:53.358020 containerd[1963]: 2026-03-07 01:17:53.341 [INFO][6758] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9" HandleID="k8s-pod-network.00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9" Workload="ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--fd5dm-eth0" Mar 7 01:17:53.358020 containerd[1963]: 2026-03-07 01:17:53.342 [INFO][6758] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:53.358020 containerd[1963]: 2026-03-07 01:17:53.342 [INFO][6758] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:53.358020 containerd[1963]: 2026-03-07 01:17:53.350 [WARNING][6758] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9" HandleID="k8s-pod-network.00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9" Workload="ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--fd5dm-eth0" Mar 7 01:17:53.358020 containerd[1963]: 2026-03-07 01:17:53.350 [INFO][6758] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9" HandleID="k8s-pod-network.00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9" Workload="ip--172--31--20--242-k8s-calico--apiserver--64c7867f4--fd5dm-eth0" Mar 7 01:17:53.358020 containerd[1963]: 2026-03-07 01:17:53.352 [INFO][6758] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:53.358020 containerd[1963]: 2026-03-07 01:17:53.355 [INFO][6751] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9" Mar 7 01:17:53.359492 containerd[1963]: time="2026-03-07T01:17:53.358058828Z" level=info msg="TearDown network for sandbox \"00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9\" successfully" Mar 7 01:17:53.528461 containerd[1963]: time="2026-03-07T01:17:53.528233627Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:17:53.528461 containerd[1963]: time="2026-03-07T01:17:53.528334080Z" level=info msg="RemovePodSandbox \"00763d320fccbddef45dd6041df806fd9afd1f104c908140084d045c1c56a8e9\" returns successfully" Mar 7 01:17:53.529910 containerd[1963]: time="2026-03-07T01:17:53.529867384Z" level=info msg="StopPodSandbox for \"edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f\"" Mar 7 01:17:53.695677 containerd[1963]: 2026-03-07 01:17:53.604 [WARNING][6773] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--242-k8s-coredns--674b8bbfcf--hx6bb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f74be19c-3e5a-4ec1-971f-5534f5ca8d72", ResourceVersion:"1139", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 15, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-242", ContainerID:"9227ea925ad2aa51b2c7d8dd57bf9e228100c0babcca0725af326c50d8b0491f", Pod:"coredns-674b8bbfcf-hx6bb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid39b80d6526", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:53.695677 containerd[1963]: 2026-03-07 01:17:53.605 [INFO][6773] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f" Mar 7 01:17:53.695677 containerd[1963]: 2026-03-07 01:17:53.605 [INFO][6773] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f" iface="eth0" netns="" Mar 7 01:17:53.695677 containerd[1963]: 2026-03-07 01:17:53.605 [INFO][6773] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f" Mar 7 01:17:53.695677 containerd[1963]: 2026-03-07 01:17:53.605 [INFO][6773] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f" Mar 7 01:17:53.695677 containerd[1963]: 2026-03-07 01:17:53.671 [INFO][6783] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f" HandleID="k8s-pod-network.edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f" Workload="ip--172--31--20--242-k8s-coredns--674b8bbfcf--hx6bb-eth0" Mar 7 01:17:53.695677 containerd[1963]: 2026-03-07 01:17:53.671 [INFO][6783] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 7 01:17:53.695677 containerd[1963]: 2026-03-07 01:17:53.671 [INFO][6783] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:53.695677 containerd[1963]: 2026-03-07 01:17:53.682 [WARNING][6783] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f" HandleID="k8s-pod-network.edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f" Workload="ip--172--31--20--242-k8s-coredns--674b8bbfcf--hx6bb-eth0" Mar 7 01:17:53.695677 containerd[1963]: 2026-03-07 01:17:53.682 [INFO][6783] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f" HandleID="k8s-pod-network.edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f" Workload="ip--172--31--20--242-k8s-coredns--674b8bbfcf--hx6bb-eth0" Mar 7 01:17:53.695677 containerd[1963]: 2026-03-07 01:17:53.685 [INFO][6783] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:53.695677 containerd[1963]: 2026-03-07 01:17:53.692 [INFO][6773] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f" Mar 7 01:17:53.695677 containerd[1963]: time="2026-03-07T01:17:53.695606575Z" level=info msg="TearDown network for sandbox \"edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f\" successfully" Mar 7 01:17:53.695677 containerd[1963]: time="2026-03-07T01:17:53.695656874Z" level=info msg="StopPodSandbox for \"edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f\" returns successfully" Mar 7 01:17:53.697532 containerd[1963]: time="2026-03-07T01:17:53.696431031Z" level=info msg="RemovePodSandbox for \"edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f\"" Mar 7 01:17:53.697532 containerd[1963]: time="2026-03-07T01:17:53.696466728Z" level=info msg="Forcibly stopping sandbox \"edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f\"" Mar 7 01:17:53.825209 containerd[1963]: 2026-03-07 01:17:53.762 [WARNING][6797] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--242-k8s-coredns--674b8bbfcf--hx6bb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f74be19c-3e5a-4ec1-971f-5534f5ca8d72", ResourceVersion:"1139", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 15, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-242", ContainerID:"9227ea925ad2aa51b2c7d8dd57bf9e228100c0babcca0725af326c50d8b0491f", Pod:"coredns-674b8bbfcf-hx6bb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid39b80d6526", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:53.825209 containerd[1963]: 2026-03-07 01:17:53.763 
[INFO][6797] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f" Mar 7 01:17:53.825209 containerd[1963]: 2026-03-07 01:17:53.763 [INFO][6797] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f" iface="eth0" netns="" Mar 7 01:17:53.825209 containerd[1963]: 2026-03-07 01:17:53.763 [INFO][6797] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f" Mar 7 01:17:53.825209 containerd[1963]: 2026-03-07 01:17:53.763 [INFO][6797] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f" Mar 7 01:17:53.825209 containerd[1963]: 2026-03-07 01:17:53.801 [INFO][6805] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f" HandleID="k8s-pod-network.edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f" Workload="ip--172--31--20--242-k8s-coredns--674b8bbfcf--hx6bb-eth0" Mar 7 01:17:53.825209 containerd[1963]: 2026-03-07 01:17:53.802 [INFO][6805] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:53.825209 containerd[1963]: 2026-03-07 01:17:53.802 [INFO][6805] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:53.825209 containerd[1963]: 2026-03-07 01:17:53.813 [WARNING][6805] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f" HandleID="k8s-pod-network.edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f" Workload="ip--172--31--20--242-k8s-coredns--674b8bbfcf--hx6bb-eth0" Mar 7 01:17:53.825209 containerd[1963]: 2026-03-07 01:17:53.813 [INFO][6805] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f" HandleID="k8s-pod-network.edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f" Workload="ip--172--31--20--242-k8s-coredns--674b8bbfcf--hx6bb-eth0" Mar 7 01:17:53.825209 containerd[1963]: 2026-03-07 01:17:53.816 [INFO][6805] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:53.825209 containerd[1963]: 2026-03-07 01:17:53.820 [INFO][6797] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f" Mar 7 01:17:53.828035 containerd[1963]: time="2026-03-07T01:17:53.826069055Z" level=info msg="TearDown network for sandbox \"edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f\" successfully" Mar 7 01:17:53.837620 containerd[1963]: time="2026-03-07T01:17:53.837574692Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:17:53.838011 containerd[1963]: time="2026-03-07T01:17:53.837969596Z" level=info msg="RemovePodSandbox \"edc8aa5cafbdd050737a6a04d390ef84014422f449e160476789b6b6ece7373f\" returns successfully" Mar 7 01:17:53.839730 containerd[1963]: time="2026-03-07T01:17:53.839703847Z" level=info msg="StopPodSandbox for \"dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e\"" Mar 7 01:17:54.001134 containerd[1963]: 2026-03-07 01:17:53.936 [WARNING][6819] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--242-k8s-calico--kube--controllers--b98f9b6fc--npvst-eth0", GenerateName:"calico-kube-controllers-b98f9b6fc-", Namespace:"calico-system", SelfLink:"", UID:"f8ecbf7f-4830-437f-b292-9cd1d51ae57e", ResourceVersion:"1155", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b98f9b6fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-242", ContainerID:"86d6c77346cc857c8437d1808f1c13ce232857bc98e288a50cad35a7a5c1b81e", Pod:"calico-kube-controllers-b98f9b6fc-npvst", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.81.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3b4c16be479", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:54.001134 containerd[1963]: 2026-03-07 01:17:53.937 [INFO][6819] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e" Mar 7 01:17:54.001134 containerd[1963]: 2026-03-07 01:17:53.937 [INFO][6819] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e" iface="eth0" netns="" Mar 7 01:17:54.001134 containerd[1963]: 2026-03-07 01:17:53.937 [INFO][6819] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e" Mar 7 01:17:54.001134 containerd[1963]: 2026-03-07 01:17:53.937 [INFO][6819] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e" Mar 7 01:17:54.001134 containerd[1963]: 2026-03-07 01:17:53.981 [INFO][6829] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e" HandleID="k8s-pod-network.dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e" Workload="ip--172--31--20--242-k8s-calico--kube--controllers--b98f9b6fc--npvst-eth0" Mar 7 01:17:54.001134 containerd[1963]: 2026-03-07 01:17:53.981 [INFO][6829] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:54.001134 containerd[1963]: 2026-03-07 01:17:53.981 [INFO][6829] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:54.001134 containerd[1963]: 2026-03-07 01:17:53.989 [WARNING][6829] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e" HandleID="k8s-pod-network.dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e" Workload="ip--172--31--20--242-k8s-calico--kube--controllers--b98f9b6fc--npvst-eth0" Mar 7 01:17:54.001134 containerd[1963]: 2026-03-07 01:17:53.989 [INFO][6829] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e" HandleID="k8s-pod-network.dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e" Workload="ip--172--31--20--242-k8s-calico--kube--controllers--b98f9b6fc--npvst-eth0" Mar 7 01:17:54.001134 containerd[1963]: 2026-03-07 01:17:53.991 [INFO][6829] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:54.001134 containerd[1963]: 2026-03-07 01:17:53.998 [INFO][6819] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e" Mar 7 01:17:54.004432 containerd[1963]: time="2026-03-07T01:17:54.002332568Z" level=info msg="TearDown network for sandbox \"dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e\" successfully" Mar 7 01:17:54.004432 containerd[1963]: time="2026-03-07T01:17:54.002361503Z" level=info msg="StopPodSandbox for \"dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e\" returns successfully" Mar 7 01:17:54.004432 containerd[1963]: time="2026-03-07T01:17:54.003535424Z" level=info msg="RemovePodSandbox for \"dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e\"" Mar 7 01:17:54.004432 containerd[1963]: time="2026-03-07T01:17:54.003564723Z" level=info msg="Forcibly stopping sandbox \"dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e\"" Mar 7 01:17:54.150231 containerd[1963]: 2026-03-07 01:17:54.075 [WARNING][6843] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--242-k8s-calico--kube--controllers--b98f9b6fc--npvst-eth0", GenerateName:"calico-kube-controllers-b98f9b6fc-", Namespace:"calico-system", SelfLink:"", UID:"f8ecbf7f-4830-437f-b292-9cd1d51ae57e", ResourceVersion:"1155", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b98f9b6fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-242", ContainerID:"86d6c77346cc857c8437d1808f1c13ce232857bc98e288a50cad35a7a5c1b81e", Pod:"calico-kube-controllers-b98f9b6fc-npvst", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.81.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3b4c16be479", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:54.150231 containerd[1963]: 2026-03-07 01:17:54.075 [INFO][6843] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e" Mar 7 01:17:54.150231 containerd[1963]: 2026-03-07 01:17:54.075 [INFO][6843] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e" iface="eth0" netns="" Mar 7 01:17:54.150231 containerd[1963]: 2026-03-07 01:17:54.075 [INFO][6843] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e" Mar 7 01:17:54.150231 containerd[1963]: 2026-03-07 01:17:54.075 [INFO][6843] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e" Mar 7 01:17:54.150231 containerd[1963]: 2026-03-07 01:17:54.131 [INFO][6850] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e" HandleID="k8s-pod-network.dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e" Workload="ip--172--31--20--242-k8s-calico--kube--controllers--b98f9b6fc--npvst-eth0" Mar 7 01:17:54.150231 containerd[1963]: 2026-03-07 01:17:54.131 [INFO][6850] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:54.150231 containerd[1963]: 2026-03-07 01:17:54.131 [INFO][6850] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:54.150231 containerd[1963]: 2026-03-07 01:17:54.142 [WARNING][6850] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e" HandleID="k8s-pod-network.dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e" Workload="ip--172--31--20--242-k8s-calico--kube--controllers--b98f9b6fc--npvst-eth0" Mar 7 01:17:54.150231 containerd[1963]: 2026-03-07 01:17:54.142 [INFO][6850] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e" HandleID="k8s-pod-network.dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e" Workload="ip--172--31--20--242-k8s-calico--kube--controllers--b98f9b6fc--npvst-eth0" Mar 7 01:17:54.150231 containerd[1963]: 2026-03-07 01:17:54.144 [INFO][6850] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:54.150231 containerd[1963]: 2026-03-07 01:17:54.147 [INFO][6843] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e" Mar 7 01:17:54.152644 containerd[1963]: time="2026-03-07T01:17:54.150284033Z" level=info msg="TearDown network for sandbox \"dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e\" successfully" Mar 7 01:17:54.159480 containerd[1963]: time="2026-03-07T01:17:54.159222129Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:17:54.159480 containerd[1963]: time="2026-03-07T01:17:54.159322574Z" level=info msg="RemovePodSandbox \"dc0a27c04ce88305331d845eb2c44a92682c945bd283f4fc0eb61a3f8ccfd04e\" returns successfully" Mar 7 01:17:54.160168 containerd[1963]: time="2026-03-07T01:17:54.160124506Z" level=info msg="StopPodSandbox for \"3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532\"" Mar 7 01:17:54.280252 containerd[1963]: 2026-03-07 01:17:54.220 [WARNING][6865] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--242-k8s-coredns--674b8bbfcf--lld45-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e240c855-c8b1-420d-93db-8b8e45e00b2c", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 15, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-242", ContainerID:"71b1c1fe7598a84ed4fc147155a51ee7afb9384f791175db7c0269032aea638b", Pod:"coredns-674b8bbfcf-lld45", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1c06797789c", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:54.280252 containerd[1963]: 2026-03-07 01:17:54.221 [INFO][6865] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532" Mar 7 01:17:54.280252 containerd[1963]: 2026-03-07 01:17:54.221 [INFO][6865] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532" iface="eth0" netns="" Mar 7 01:17:54.280252 containerd[1963]: 2026-03-07 01:17:54.221 [INFO][6865] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532" Mar 7 01:17:54.280252 containerd[1963]: 2026-03-07 01:17:54.221 [INFO][6865] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532" Mar 7 01:17:54.280252 containerd[1963]: 2026-03-07 01:17:54.262 [INFO][6872] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532" HandleID="k8s-pod-network.3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532" Workload="ip--172--31--20--242-k8s-coredns--674b8bbfcf--lld45-eth0" Mar 7 01:17:54.280252 containerd[1963]: 2026-03-07 01:17:54.263 [INFO][6872] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 7 01:17:54.280252 containerd[1963]: 2026-03-07 01:17:54.263 [INFO][6872] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:54.280252 containerd[1963]: 2026-03-07 01:17:54.271 [WARNING][6872] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532" HandleID="k8s-pod-network.3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532" Workload="ip--172--31--20--242-k8s-coredns--674b8bbfcf--lld45-eth0" Mar 7 01:17:54.280252 containerd[1963]: 2026-03-07 01:17:54.272 [INFO][6872] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532" HandleID="k8s-pod-network.3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532" Workload="ip--172--31--20--242-k8s-coredns--674b8bbfcf--lld45-eth0" Mar 7 01:17:54.280252 containerd[1963]: 2026-03-07 01:17:54.274 [INFO][6872] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:54.280252 containerd[1963]: 2026-03-07 01:17:54.276 [INFO][6865] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532" Mar 7 01:17:54.280252 containerd[1963]: time="2026-03-07T01:17:54.280154568Z" level=info msg="TearDown network for sandbox \"3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532\" successfully" Mar 7 01:17:54.280252 containerd[1963]: time="2026-03-07T01:17:54.280186751Z" level=info msg="StopPodSandbox for \"3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532\" returns successfully" Mar 7 01:17:54.281873 containerd[1963]: time="2026-03-07T01:17:54.280791685Z" level=info msg="RemovePodSandbox for \"3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532\"" Mar 7 01:17:54.281873 containerd[1963]: time="2026-03-07T01:17:54.280825725Z" level=info msg="Forcibly stopping sandbox \"3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532\"" Mar 7 01:17:54.390159 containerd[1963]: 2026-03-07 01:17:54.340 [WARNING][6887] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--242-k8s-coredns--674b8bbfcf--lld45-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e240c855-c8b1-420d-93db-8b8e45e00b2c", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 15, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-242", ContainerID:"71b1c1fe7598a84ed4fc147155a51ee7afb9384f791175db7c0269032aea638b", Pod:"coredns-674b8bbfcf-lld45", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1c06797789c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:54.390159 containerd[1963]: 2026-03-07 01:17:54.341 
[INFO][6887] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532" Mar 7 01:17:54.390159 containerd[1963]: 2026-03-07 01:17:54.341 [INFO][6887] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532" iface="eth0" netns="" Mar 7 01:17:54.390159 containerd[1963]: 2026-03-07 01:17:54.341 [INFO][6887] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532" Mar 7 01:17:54.390159 containerd[1963]: 2026-03-07 01:17:54.341 [INFO][6887] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532" Mar 7 01:17:54.390159 containerd[1963]: 2026-03-07 01:17:54.373 [INFO][6894] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532" HandleID="k8s-pod-network.3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532" Workload="ip--172--31--20--242-k8s-coredns--674b8bbfcf--lld45-eth0" Mar 7 01:17:54.390159 containerd[1963]: 2026-03-07 01:17:54.373 [INFO][6894] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:54.390159 containerd[1963]: 2026-03-07 01:17:54.373 [INFO][6894] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:54.390159 containerd[1963]: 2026-03-07 01:17:54.382 [WARNING][6894] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532" HandleID="k8s-pod-network.3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532" Workload="ip--172--31--20--242-k8s-coredns--674b8bbfcf--lld45-eth0" Mar 7 01:17:54.390159 containerd[1963]: 2026-03-07 01:17:54.382 [INFO][6894] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532" HandleID="k8s-pod-network.3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532" Workload="ip--172--31--20--242-k8s-coredns--674b8bbfcf--lld45-eth0" Mar 7 01:17:54.390159 containerd[1963]: 2026-03-07 01:17:54.384 [INFO][6894] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:54.390159 containerd[1963]: 2026-03-07 01:17:54.387 [INFO][6887] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532" Mar 7 01:17:54.392377 containerd[1963]: time="2026-03-07T01:17:54.390507739Z" level=info msg="TearDown network for sandbox \"3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532\" successfully" Mar 7 01:17:54.399570 containerd[1963]: time="2026-03-07T01:17:54.399481931Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:17:54.399745 containerd[1963]: time="2026-03-07T01:17:54.399599323Z" level=info msg="RemovePodSandbox \"3b01770643ab009a4a00122f9e02a8165bcb5964df0a1499a1af814618226532\" returns successfully" Mar 7 01:17:54.400945 containerd[1963]: time="2026-03-07T01:17:54.400832077Z" level=info msg="StopPodSandbox for \"0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec\"" Mar 7 01:17:54.533361 containerd[1963]: 2026-03-07 01:17:54.458 [WARNING][6909] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--242-k8s-goldmane--5b85766d88--9r4d8-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"d317c1e8-3726-4466-b652-a1bd0a0fc939", ResourceVersion:"1388", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-242", ContainerID:"09b646c205deab23ebff6a051a1dbb0ef7701dd3ee8e58e2e621d08384142b4e", Pod:"goldmane-5b85766d88-9r4d8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.81.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali7b9fe9696ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:54.533361 containerd[1963]: 2026-03-07 01:17:54.458 [INFO][6909] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec" Mar 7 01:17:54.533361 containerd[1963]: 2026-03-07 01:17:54.458 [INFO][6909] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec" iface="eth0" netns="" Mar 7 01:17:54.533361 containerd[1963]: 2026-03-07 01:17:54.458 [INFO][6909] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec" Mar 7 01:17:54.533361 containerd[1963]: 2026-03-07 01:17:54.459 [INFO][6909] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec" Mar 7 01:17:54.533361 containerd[1963]: 2026-03-07 01:17:54.501 [INFO][6917] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec" HandleID="k8s-pod-network.0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec" Workload="ip--172--31--20--242-k8s-goldmane--5b85766d88--9r4d8-eth0" Mar 7 01:17:54.533361 containerd[1963]: 2026-03-07 01:17:54.501 [INFO][6917] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:54.533361 containerd[1963]: 2026-03-07 01:17:54.501 [INFO][6917] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:54.533361 containerd[1963]: 2026-03-07 01:17:54.512 [WARNING][6917] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec" HandleID="k8s-pod-network.0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec" Workload="ip--172--31--20--242-k8s-goldmane--5b85766d88--9r4d8-eth0" Mar 7 01:17:54.533361 containerd[1963]: 2026-03-07 01:17:54.512 [INFO][6917] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec" HandleID="k8s-pod-network.0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec" Workload="ip--172--31--20--242-k8s-goldmane--5b85766d88--9r4d8-eth0" Mar 7 01:17:54.533361 containerd[1963]: 2026-03-07 01:17:54.518 [INFO][6917] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:54.533361 containerd[1963]: 2026-03-07 01:17:54.527 [INFO][6909] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec" Mar 7 01:17:54.533361 containerd[1963]: time="2026-03-07T01:17:54.533183749Z" level=info msg="TearDown network for sandbox \"0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec\" successfully" Mar 7 01:17:54.533361 containerd[1963]: time="2026-03-07T01:17:54.533242636Z" level=info msg="StopPodSandbox for \"0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec\" returns successfully" Mar 7 01:17:54.534589 containerd[1963]: time="2026-03-07T01:17:54.534074083Z" level=info msg="RemovePodSandbox for \"0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec\"" Mar 7 01:17:54.534589 containerd[1963]: time="2026-03-07T01:17:54.534110565Z" level=info msg="Forcibly stopping sandbox \"0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec\"" Mar 7 01:17:54.643660 containerd[1963]: 2026-03-07 01:17:54.589 [WARNING][6931] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--242-k8s-goldmane--5b85766d88--9r4d8-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"d317c1e8-3726-4466-b652-a1bd0a0fc939", ResourceVersion:"1388", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-242", ContainerID:"09b646c205deab23ebff6a051a1dbb0ef7701dd3ee8e58e2e621d08384142b4e", Pod:"goldmane-5b85766d88-9r4d8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.81.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7b9fe9696ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:54.643660 containerd[1963]: 2026-03-07 01:17:54.589 [INFO][6931] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec" Mar 7 01:17:54.643660 containerd[1963]: 2026-03-07 01:17:54.589 [INFO][6931] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec" iface="eth0" netns="" Mar 7 01:17:54.643660 containerd[1963]: 2026-03-07 01:17:54.589 [INFO][6931] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec" Mar 7 01:17:54.643660 containerd[1963]: 2026-03-07 01:17:54.589 [INFO][6931] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec" Mar 7 01:17:54.643660 containerd[1963]: 2026-03-07 01:17:54.624 [INFO][6938] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec" HandleID="k8s-pod-network.0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec" Workload="ip--172--31--20--242-k8s-goldmane--5b85766d88--9r4d8-eth0" Mar 7 01:17:54.643660 containerd[1963]: 2026-03-07 01:17:54.624 [INFO][6938] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:54.643660 containerd[1963]: 2026-03-07 01:17:54.624 [INFO][6938] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:54.643660 containerd[1963]: 2026-03-07 01:17:54.632 [WARNING][6938] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec" HandleID="k8s-pod-network.0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec" Workload="ip--172--31--20--242-k8s-goldmane--5b85766d88--9r4d8-eth0" Mar 7 01:17:54.643660 containerd[1963]: 2026-03-07 01:17:54.632 [INFO][6938] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec" HandleID="k8s-pod-network.0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec" Workload="ip--172--31--20--242-k8s-goldmane--5b85766d88--9r4d8-eth0" Mar 7 01:17:54.643660 containerd[1963]: 2026-03-07 01:17:54.635 [INFO][6938] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:54.643660 containerd[1963]: 2026-03-07 01:17:54.641 [INFO][6931] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec" Mar 7 01:17:54.647131 containerd[1963]: time="2026-03-07T01:17:54.643733791Z" level=info msg="TearDown network for sandbox \"0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec\" successfully" Mar 7 01:17:54.654369 containerd[1963]: time="2026-03-07T01:17:54.653790512Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 7 01:17:54.654369 containerd[1963]: time="2026-03-07T01:17:54.653889354Z" level=info msg="RemovePodSandbox \"0f81ab6f85604e3971cc300f50cb412f83700052b1a6c76c26e750cc629dccec\" returns successfully" Mar 7 01:17:55.034673 sshd[6659]: pam_unix(sshd:session): session closed for user core Mar 7 01:17:55.039112 systemd[1]: sshd@21-172.31.20.242:22-68.220.241.50:45294.service: Deactivated successfully. Mar 7 01:17:55.041859 systemd[1]: session-22.scope: Deactivated successfully. 
Mar 7 01:17:55.044969 systemd-logind[1957]: Session 22 logged out. Waiting for processes to exit. Mar 7 01:17:55.046919 systemd-logind[1957]: Removed session 22. Mar 7 01:18:00.815854 systemd[1]: run-containerd-runc-k8s.io-89e405486d2c75225242e66da92b9195f76a9379e970f7af1a40aa78ab704e21-runc.AF9w3b.mount: Deactivated successfully. Mar 7 01:18:07.691137 systemd[1]: run-containerd-runc-k8s.io-006b6d650d0a339326050cb67bfcb838b86ab598f7c487ae8fc9ce9cc8a07284-runc.3DwUT4.mount: Deactivated successfully. Mar 7 01:18:08.770663 systemd[1]: cri-containerd-8bd6f70147ab6651ba944d0eb7d9513a1bb293079116560e980f1f5f16fb453e.scope: Deactivated successfully. Mar 7 01:18:08.771134 systemd[1]: cri-containerd-8bd6f70147ab6651ba944d0eb7d9513a1bb293079116560e980f1f5f16fb453e.scope: Consumed 9.367s CPU time. Mar 7 01:18:08.938849 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8bd6f70147ab6651ba944d0eb7d9513a1bb293079116560e980f1f5f16fb453e-rootfs.mount: Deactivated successfully. Mar 7 01:18:08.964290 containerd[1963]: time="2026-03-07T01:18:08.943299871Z" level=info msg="shim disconnected" id=8bd6f70147ab6651ba944d0eb7d9513a1bb293079116560e980f1f5f16fb453e namespace=k8s.io Mar 7 01:18:08.964775 containerd[1963]: time="2026-03-07T01:18:08.964294051Z" level=warning msg="cleaning up after shim disconnected" id=8bd6f70147ab6651ba944d0eb7d9513a1bb293079116560e980f1f5f16fb453e namespace=k8s.io Mar 7 01:18:08.964775 containerd[1963]: time="2026-03-07T01:18:08.964314352Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:18:09.732751 systemd[1]: cri-containerd-2cf29b3e6a0307756d3969974fdaab6133870904ffa3243c758ffbe6f1d417f0.scope: Deactivated successfully. Mar 7 01:18:09.733433 systemd[1]: cri-containerd-2cf29b3e6a0307756d3969974fdaab6133870904ffa3243c758ffbe6f1d417f0.scope: Consumed 5.566s CPU time, 21.0M memory peak, 0B memory swap peak. 
Mar 7 01:18:09.765438 containerd[1963]: time="2026-03-07T01:18:09.765374770Z" level=info msg="shim disconnected" id=2cf29b3e6a0307756d3969974fdaab6133870904ffa3243c758ffbe6f1d417f0 namespace=k8s.io Mar 7 01:18:09.765811 containerd[1963]: time="2026-03-07T01:18:09.765629903Z" level=warning msg="cleaning up after shim disconnected" id=2cf29b3e6a0307756d3969974fdaab6133870904ffa3243c758ffbe6f1d417f0 namespace=k8s.io Mar 7 01:18:09.765811 containerd[1963]: time="2026-03-07T01:18:09.765654213Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:18:09.769673 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2cf29b3e6a0307756d3969974fdaab6133870904ffa3243c758ffbe6f1d417f0-rootfs.mount: Deactivated successfully. Mar 7 01:18:10.108735 kubelet[3191]: I0307 01:18:10.108557 3191 scope.go:117] "RemoveContainer" containerID="8bd6f70147ab6651ba944d0eb7d9513a1bb293079116560e980f1f5f16fb453e" Mar 7 01:18:10.114233 kubelet[3191]: I0307 01:18:10.114184 3191 scope.go:117] "RemoveContainer" containerID="2cf29b3e6a0307756d3969974fdaab6133870904ffa3243c758ffbe6f1d417f0" Mar 7 01:18:10.286591 containerd[1963]: time="2026-03-07T01:18:10.286426356Z" level=info msg="CreateContainer within sandbox \"7d8ca68c1ca90ec03d7f16c286dbe1681a458bf3f10cbd2e4fa1a4e2a8de8994\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Mar 7 01:18:10.288585 containerd[1963]: time="2026-03-07T01:18:10.286745096Z" level=info msg="CreateContainer within sandbox \"da1b56867049786715da5964fcceec68cd082835468a7a1312a2b0dd54216a5b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Mar 7 01:18:10.464606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3714053742.mount: Deactivated successfully. Mar 7 01:18:10.472846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3155115181.mount: Deactivated successfully. 
Mar 7 01:18:10.496496 containerd[1963]: time="2026-03-07T01:18:10.495208391Z" level=info msg="CreateContainer within sandbox \"da1b56867049786715da5964fcceec68cd082835468a7a1312a2b0dd54216a5b\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"fd552caa308a9e25057955da64f2e35302aa10340834a19348a51e792cad62cb\"" Mar 7 01:18:10.510780 containerd[1963]: time="2026-03-07T01:18:10.510215914Z" level=info msg="CreateContainer within sandbox \"7d8ca68c1ca90ec03d7f16c286dbe1681a458bf3f10cbd2e4fa1a4e2a8de8994\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"b92a035642bdb63dcd40ff9f01c75867a67904384c9984d0314044d08db22228\"" Mar 7 01:18:10.533294 containerd[1963]: time="2026-03-07T01:18:10.533249238Z" level=info msg="StartContainer for \"fd552caa308a9e25057955da64f2e35302aa10340834a19348a51e792cad62cb\"" Mar 7 01:18:10.534140 containerd[1963]: time="2026-03-07T01:18:10.534104258Z" level=info msg="StartContainer for \"b92a035642bdb63dcd40ff9f01c75867a67904384c9984d0314044d08db22228\"" Mar 7 01:18:10.625958 systemd[1]: Started cri-containerd-fd552caa308a9e25057955da64f2e35302aa10340834a19348a51e792cad62cb.scope - libcontainer container fd552caa308a9e25057955da64f2e35302aa10340834a19348a51e792cad62cb. Mar 7 01:18:10.652842 systemd[1]: Started cri-containerd-b92a035642bdb63dcd40ff9f01c75867a67904384c9984d0314044d08db22228.scope - libcontainer container b92a035642bdb63dcd40ff9f01c75867a67904384c9984d0314044d08db22228. 
Mar 7 01:18:10.755643 containerd[1963]: time="2026-03-07T01:18:10.754550916Z" level=info msg="StartContainer for \"fd552caa308a9e25057955da64f2e35302aa10340834a19348a51e792cad62cb\" returns successfully" Mar 7 01:18:10.781756 containerd[1963]: time="2026-03-07T01:18:10.781627431Z" level=info msg="StartContainer for \"b92a035642bdb63dcd40ff9f01c75867a67904384c9984d0314044d08db22228\" returns successfully" Mar 7 01:18:14.074235 systemd[1]: run-containerd-runc-k8s.io-a4199d562b530fe4cb1c762d9f840483b96521dcfed96d16f82467d1d22212bd-runc.VctPS4.mount: Deactivated successfully. Mar 7 01:18:14.197180 kubelet[3191]: E0307 01:18:14.197103 3191 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.242:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-242?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 7 01:18:14.874090 systemd[1]: cri-containerd-e08a7a7c87b8a0e0a0afe7293fe5151865eadcf17e4adfccf3cf09765c1f8cac.scope: Deactivated successfully. Mar 7 01:18:14.874510 systemd[1]: cri-containerd-e08a7a7c87b8a0e0a0afe7293fe5151865eadcf17e4adfccf3cf09765c1f8cac.scope: Consumed 3.266s CPU time, 13.9M memory peak, 0B memory swap peak. Mar 7 01:18:14.908557 containerd[1963]: time="2026-03-07T01:18:14.906683003Z" level=info msg="shim disconnected" id=e08a7a7c87b8a0e0a0afe7293fe5151865eadcf17e4adfccf3cf09765c1f8cac namespace=k8s.io Mar 7 01:18:14.908557 containerd[1963]: time="2026-03-07T01:18:14.906825072Z" level=warning msg="cleaning up after shim disconnected" id=e08a7a7c87b8a0e0a0afe7293fe5151865eadcf17e4adfccf3cf09765c1f8cac namespace=k8s.io Mar 7 01:18:14.908557 containerd[1963]: time="2026-03-07T01:18:14.906841186Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:18:14.908564 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e08a7a7c87b8a0e0a0afe7293fe5151865eadcf17e4adfccf3cf09765c1f8cac-rootfs.mount: Deactivated successfully. 
Mar 7 01:18:15.087760 kubelet[3191]: I0307 01:18:15.087720 3191 scope.go:117] "RemoveContainer" containerID="e08a7a7c87b8a0e0a0afe7293fe5151865eadcf17e4adfccf3cf09765c1f8cac" Mar 7 01:18:15.091005 containerd[1963]: time="2026-03-07T01:18:15.090901680Z" level=info msg="CreateContainer within sandbox \"873da745a2e803b9c61d1333f3aa2e40a89bf8ebff7fbd9806163d0ade3ee2c9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Mar 7 01:18:15.115759 containerd[1963]: time="2026-03-07T01:18:15.115702278Z" level=info msg="CreateContainer within sandbox \"873da745a2e803b9c61d1333f3aa2e40a89bf8ebff7fbd9806163d0ade3ee2c9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"dbe3dd16980de20ebe3b447f310a1c1fa1f16270656ed06dceed972ec15dd9ed\"" Mar 7 01:18:15.117552 containerd[1963]: time="2026-03-07T01:18:15.117504309Z" level=info msg="StartContainer for \"dbe3dd16980de20ebe3b447f310a1c1fa1f16270656ed06dceed972ec15dd9ed\"" Mar 7 01:18:15.175083 systemd[1]: Started cri-containerd-dbe3dd16980de20ebe3b447f310a1c1fa1f16270656ed06dceed972ec15dd9ed.scope - libcontainer container dbe3dd16980de20ebe3b447f310a1c1fa1f16270656ed06dceed972ec15dd9ed. Mar 7 01:18:15.235359 containerd[1963]: time="2026-03-07T01:18:15.235292978Z" level=info msg="StartContainer for \"dbe3dd16980de20ebe3b447f310a1c1fa1f16270656ed06dceed972ec15dd9ed\" returns successfully" Mar 7 01:18:22.660687 systemd[1]: cri-containerd-fd552caa308a9e25057955da64f2e35302aa10340834a19348a51e792cad62cb.scope: Deactivated successfully. Mar 7 01:18:22.691819 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd552caa308a9e25057955da64f2e35302aa10340834a19348a51e792cad62cb-rootfs.mount: Deactivated successfully. 
Mar 7 01:18:22.713718 containerd[1963]: time="2026-03-07T01:18:22.713603434Z" level=info msg="shim disconnected" id=fd552caa308a9e25057955da64f2e35302aa10340834a19348a51e792cad62cb namespace=k8s.io Mar 7 01:18:22.713718 containerd[1963]: time="2026-03-07T01:18:22.713716286Z" level=warning msg="cleaning up after shim disconnected" id=fd552caa308a9e25057955da64f2e35302aa10340834a19348a51e792cad62cb namespace=k8s.io Mar 7 01:18:22.714521 containerd[1963]: time="2026-03-07T01:18:22.713728683Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:18:23.163398 kubelet[3191]: I0307 01:18:23.163347 3191 scope.go:117] "RemoveContainer" containerID="8bd6f70147ab6651ba944d0eb7d9513a1bb293079116560e980f1f5f16fb453e" Mar 7 01:18:23.163967 kubelet[3191]: I0307 01:18:23.163523 3191 scope.go:117] "RemoveContainer" containerID="fd552caa308a9e25057955da64f2e35302aa10340834a19348a51e792cad62cb" Mar 7 01:18:23.168620 kubelet[3191]: E0307 01:18:23.168562 3191 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-6bf85f8dd-5gtrt_tigera-operator(03e453cc-32a7-48d0-87b3-cad4d2a0dd5e)\"" pod="tigera-operator/tigera-operator-6bf85f8dd-5gtrt" podUID="03e453cc-32a7-48d0-87b3-cad4d2a0dd5e" Mar 7 01:18:23.263058 containerd[1963]: time="2026-03-07T01:18:23.263008886Z" level=info msg="RemoveContainer for \"8bd6f70147ab6651ba944d0eb7d9513a1bb293079116560e980f1f5f16fb453e\"" Mar 7 01:18:23.281209 containerd[1963]: time="2026-03-07T01:18:23.281148046Z" level=info msg="RemoveContainer for \"8bd6f70147ab6651ba944d0eb7d9513a1bb293079116560e980f1f5f16fb453e\" returns successfully" Mar 7 01:18:24.212989 kubelet[3191]: E0307 01:18:24.212933 3191 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.242:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-242?timeout=10s\": net/http: request 
canceled (Client.Timeout exceeded while awaiting headers)"