Mar 13 00:40:21.878552 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Mar 12 22:08:29 -00 2026
Mar 13 00:40:21.878589 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:40:21.878608 kernel: BIOS-provided physical RAM map:
Mar 13 00:40:21.878619 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 13 00:40:21.878630 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Mar 13 00:40:21.878641 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Mar 13 00:40:21.878655 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Mar 13 00:40:21.880848 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Mar 13 00:40:21.880888 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Mar 13 00:40:21.880900 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Mar 13 00:40:21.880912 kernel: NX (Execute Disable) protection: active
Mar 13 00:40:21.880931 kernel: APIC: Static calls initialized
Mar 13 00:40:21.880944 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Mar 13 00:40:21.880957 kernel: extended physical RAM map:
Mar 13 00:40:21.880971 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 13 00:40:21.880984 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable
Mar 13 00:40:21.880999 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable
Mar 13 00:40:21.881012 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable
Mar 13 00:40:21.881025 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Mar 13 00:40:21.881037 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Mar 13 00:40:21.881051 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Mar 13 00:40:21.881063 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
Mar 13 00:40:21.881075 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Mar 13 00:40:21.881087 kernel: efi: EFI v2.7 by EDK II
Mar 13 00:40:21.881101 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77015518
Mar 13 00:40:21.881115 kernel: secureboot: Secure boot disabled
Mar 13 00:40:21.881128 kernel: SMBIOS 2.7 present.
Mar 13 00:40:21.881145 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Mar 13 00:40:21.881159 kernel: DMI: Memory slots populated: 1/1
Mar 13 00:40:21.881172 kernel: Hypervisor detected: KVM
Mar 13 00:40:21.881185 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Mar 13 00:40:21.881199 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 13 00:40:21.881213 kernel: kvm-clock: using sched offset of 5359939183 cycles
Mar 13 00:40:21.881228 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 13 00:40:21.881243 kernel: tsc: Detected 2499.996 MHz processor
Mar 13 00:40:21.881257 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 13 00:40:21.881271 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 13 00:40:21.881289 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Mar 13 00:40:21.881302 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 13 00:40:21.881317 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 13 00:40:21.881337 kernel: Using GB pages for direct mapping
Mar 13 00:40:21.881352 kernel: ACPI: Early table checksum verification disabled
Mar 13 00:40:21.881366 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Mar 13 00:40:21.881382 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Mar 13 00:40:21.881401 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Mar 13 00:40:21.881416 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Mar 13 00:40:21.881431 kernel: ACPI: FACS 0x00000000789D0000 000040
Mar 13 00:40:21.881445 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Mar 13 00:40:21.881460 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Mar 13 00:40:21.881474 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Mar 13 00:40:21.881489 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Mar 13 00:40:21.881504 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Mar 13 00:40:21.881522 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Mar 13 00:40:21.881537 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Mar 13 00:40:21.881552 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Mar 13 00:40:21.881565 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Mar 13 00:40:21.881580 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Mar 13 00:40:21.881595 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Mar 13 00:40:21.881609 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Mar 13 00:40:21.881624 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Mar 13 00:40:21.881642 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Mar 13 00:40:21.881657 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Mar 13 00:40:21.881704 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Mar 13 00:40:21.881717 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Mar 13 00:40:21.881729 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Mar 13 00:40:21.881741 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Mar 13 00:40:21.881752 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Mar 13 00:40:21.881764 kernel: NUMA: Initialized distance table, cnt=1
Mar 13 00:40:21.881778 kernel: NODE_DATA(0) allocated [mem 0x7a8eedc0-0x7a8f5fff]
Mar 13 00:40:21.881792 kernel: Zone ranges:
Mar 13 00:40:21.881809 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 13 00:40:21.881820 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Mar 13 00:40:21.881832 kernel: Normal empty
Mar 13 00:40:21.881844 kernel: Device empty
Mar 13 00:40:21.881858 kernel: Movable zone start for each node
Mar 13 00:40:21.881871 kernel: Early memory node ranges
Mar 13 00:40:21.881886 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 13 00:40:21.881899 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Mar 13 00:40:21.881913 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Mar 13 00:40:21.881940 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Mar 13 00:40:21.881954 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 13 00:40:21.881985 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 13 00:40:21.881999 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Mar 13 00:40:21.882013 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Mar 13 00:40:21.882028 kernel: ACPI: PM-Timer IO Port: 0xb008
Mar 13 00:40:21.882042 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 13 00:40:21.882056 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Mar 13 00:40:21.882071 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 13 00:40:21.882088 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 13 00:40:21.882102 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 13 00:40:21.882115 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 13 00:40:21.882130 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 13 00:40:21.882144 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 13 00:40:21.882158 kernel: TSC deadline timer available
Mar 13 00:40:21.882172 kernel: CPU topo: Max. logical packages: 1
Mar 13 00:40:21.882186 kernel: CPU topo: Max. logical dies: 1
Mar 13 00:40:21.882200 kernel: CPU topo: Max. dies per package: 1
Mar 13 00:40:21.882214 kernel: CPU topo: Max. threads per core: 2
Mar 13 00:40:21.882231 kernel: CPU topo: Num. cores per package: 1
Mar 13 00:40:21.882245 kernel: CPU topo: Num. threads per package: 2
Mar 13 00:40:21.882260 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Mar 13 00:40:21.882274 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 13 00:40:21.882288 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Mar 13 00:40:21.882302 kernel: Booting paravirtualized kernel on KVM
Mar 13 00:40:21.882317 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 13 00:40:21.882331 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 13 00:40:21.882346 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Mar 13 00:40:21.882364 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Mar 13 00:40:21.882378 kernel: pcpu-alloc: [0] 0 1
Mar 13 00:40:21.882392 kernel: kvm-guest: PV spinlocks enabled
Mar 13 00:40:21.882406 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 13 00:40:21.882424 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:40:21.882439 kernel: random: crng init done
Mar 13 00:40:21.882453 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 13 00:40:21.882468 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 13 00:40:21.882485 kernel: Fallback order for Node 0: 0
Mar 13 00:40:21.882499 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451
Mar 13 00:40:21.882514 kernel: Policy zone: DMA32
Mar 13 00:40:21.882539 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 13 00:40:21.882557 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 13 00:40:21.882573 kernel: Kernel/User page tables isolation: enabled
Mar 13 00:40:21.882588 kernel: ftrace: allocating 40099 entries in 157 pages
Mar 13 00:40:21.882603 kernel: ftrace: allocated 157 pages with 5 groups
Mar 13 00:40:21.882618 kernel: Dynamic Preempt: voluntary
Mar 13 00:40:21.882633 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 13 00:40:21.882649 kernel: rcu: RCU event tracing is enabled.
Mar 13 00:40:21.882664 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 13 00:40:21.885030 kernel: Trampoline variant of Tasks RCU enabled.
Mar 13 00:40:21.885047 kernel: Rude variant of Tasks RCU enabled.
Mar 13 00:40:21.885062 kernel: Tracing variant of Tasks RCU enabled.
Mar 13 00:40:21.885076 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 13 00:40:21.885089 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 13 00:40:21.885107 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 13 00:40:21.885121 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 13 00:40:21.885135 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 13 00:40:21.885149 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 13 00:40:21.885162 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 13 00:40:21.885175 kernel: Console: colour dummy device 80x25
Mar 13 00:40:21.885189 kernel: printk: legacy console [tty0] enabled
Mar 13 00:40:21.885203 kernel: printk: legacy console [ttyS0] enabled
Mar 13 00:40:21.885217 kernel: ACPI: Core revision 20240827
Mar 13 00:40:21.885233 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Mar 13 00:40:21.885247 kernel: APIC: Switch to symmetric I/O mode setup
Mar 13 00:40:21.885260 kernel: x2apic enabled
Mar 13 00:40:21.885274 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 13 00:40:21.885287 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Mar 13 00:40:21.885301 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Mar 13 00:40:21.885315 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Mar 13 00:40:21.885329 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Mar 13 00:40:21.885341 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 13 00:40:21.885357 kernel: Spectre V2 : Mitigation: Retpolines
Mar 13 00:40:21.885369 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 13 00:40:21.885381 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Mar 13 00:40:21.885393 kernel: RETBleed: Vulnerable
Mar 13 00:40:21.885405 kernel: Speculative Store Bypass: Vulnerable
Mar 13 00:40:21.885417 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 13 00:40:21.885430 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 13 00:40:21.885443 kernel: GDS: Unknown: Dependent on hypervisor status
Mar 13 00:40:21.885455 kernel: active return thunk: its_return_thunk
Mar 13 00:40:21.885468 kernel: ITS: Mitigation: Aligned branch/return thunks
Mar 13 00:40:21.885480 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 13 00:40:21.885496 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 13 00:40:21.885510 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 13 00:40:21.885525 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Mar 13 00:40:21.885540 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Mar 13 00:40:21.885555 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Mar 13 00:40:21.885569 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Mar 13 00:40:21.885584 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Mar 13 00:40:21.885599 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Mar 13 00:40:21.885614 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 13 00:40:21.885629 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Mar 13 00:40:21.885649 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Mar 13 00:40:21.885664 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Mar 13 00:40:21.885699 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Mar 13 00:40:21.885714 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Mar 13 00:40:21.885729 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Mar 13 00:40:21.885744 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Mar 13 00:40:21.885761 kernel: Freeing SMP alternatives memory: 32K
Mar 13 00:40:21.885776 kernel: pid_max: default: 32768 minimum: 301
Mar 13 00:40:21.885792 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 13 00:40:21.885808 kernel: landlock: Up and running.
Mar 13 00:40:21.885823 kernel: SELinux: Initializing.
Mar 13 00:40:21.885839 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 13 00:40:21.885885 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 13 00:40:21.885899 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Mar 13 00:40:21.885914 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Mar 13 00:40:21.886108 kernel: signal: max sigframe size: 3632
Mar 13 00:40:21.886123 kernel: rcu: Hierarchical SRCU implementation.
Mar 13 00:40:21.886139 kernel: rcu: Max phase no-delay instances is 400.
Mar 13 00:40:21.886153 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 13 00:40:21.886168 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 13 00:40:21.886183 kernel: smp: Bringing up secondary CPUs ...
Mar 13 00:40:21.886197 kernel: smpboot: x86: Booting SMP configuration:
Mar 13 00:40:21.886217 kernel: .... node #0, CPUs: #1
Mar 13 00:40:21.886233 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Mar 13 00:40:21.886250 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Mar 13 00:40:21.886264 kernel: smp: Brought up 1 node, 2 CPUs
Mar 13 00:40:21.886279 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Mar 13 00:40:21.886293 kernel: Memory: 1899856K/2037804K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 133384K reserved, 0K cma-reserved)
Mar 13 00:40:21.886308 kernel: devtmpfs: initialized
Mar 13 00:40:21.886322 kernel: x86/mm: Memory block size: 128MB
Mar 13 00:40:21.886341 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Mar 13 00:40:21.886354 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 13 00:40:21.886369 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 13 00:40:21.886384 kernel: pinctrl core: initialized pinctrl subsystem
Mar 13 00:40:21.886400 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 13 00:40:21.886416 kernel: audit: initializing netlink subsys (disabled)
Mar 13 00:40:21.886433 kernel: audit: type=2000 audit(1773362420.097:1): state=initialized audit_enabled=0 res=1
Mar 13 00:40:21.886449 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 13 00:40:21.886466 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 13 00:40:21.886486 kernel: cpuidle: using governor menu
Mar 13 00:40:21.886504 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 13 00:40:21.886520 kernel: dca service started, version 1.12.1
Mar 13 00:40:21.886537 kernel: PCI: Using configuration type 1 for base access
Mar 13 00:40:21.886555 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 13 00:40:21.886572 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 13 00:40:21.886588 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 13 00:40:21.886604 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 13 00:40:21.886621 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 13 00:40:21.886642 kernel: ACPI: Added _OSI(Module Device)
Mar 13 00:40:21.886658 kernel: ACPI: Added _OSI(Processor Device)
Mar 13 00:40:21.891104 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 13 00:40:21.891131 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Mar 13 00:40:21.891146 kernel: ACPI: Interpreter enabled
Mar 13 00:40:21.891162 kernel: ACPI: PM: (supports S0 S5)
Mar 13 00:40:21.891177 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 13 00:40:21.891192 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 13 00:40:21.891209 kernel: PCI: Using E820 reservations for host bridge windows
Mar 13 00:40:21.891230 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Mar 13 00:40:21.891245 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 13 00:40:21.891494 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Mar 13 00:40:21.891646 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Mar 13 00:40:21.891824 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Mar 13 00:40:21.891845 kernel: acpiphp: Slot [3] registered
Mar 13 00:40:21.891861 kernel: acpiphp: Slot [4] registered
Mar 13 00:40:21.891881 kernel: acpiphp: Slot [5] registered
Mar 13 00:40:21.891897 kernel: acpiphp: Slot [6] registered
Mar 13 00:40:21.891912 kernel: acpiphp: Slot [7] registered
Mar 13 00:40:21.891927 kernel: acpiphp: Slot [8] registered
Mar 13 00:40:21.891942 kernel: acpiphp: Slot [9] registered
Mar 13 00:40:21.891958 kernel: acpiphp: Slot [10] registered
Mar 13 00:40:21.891973 kernel: acpiphp: Slot [11] registered
Mar 13 00:40:21.891988 kernel: acpiphp: Slot [12] registered
Mar 13 00:40:21.892003 kernel: acpiphp: Slot [13] registered
Mar 13 00:40:21.892021 kernel: acpiphp: Slot [14] registered
Mar 13 00:40:21.892036 kernel: acpiphp: Slot [15] registered
Mar 13 00:40:21.892050 kernel: acpiphp: Slot [16] registered
Mar 13 00:40:21.892065 kernel: acpiphp: Slot [17] registered
Mar 13 00:40:21.892080 kernel: acpiphp: Slot [18] registered
Mar 13 00:40:21.892095 kernel: acpiphp: Slot [19] registered
Mar 13 00:40:21.892111 kernel: acpiphp: Slot [20] registered
Mar 13 00:40:21.892125 kernel: acpiphp: Slot [21] registered
Mar 13 00:40:21.892140 kernel: acpiphp: Slot [22] registered
Mar 13 00:40:21.892154 kernel: acpiphp: Slot [23] registered
Mar 13 00:40:21.892173 kernel: acpiphp: Slot [24] registered
Mar 13 00:40:21.892190 kernel: acpiphp: Slot [25] registered
Mar 13 00:40:21.892205 kernel: acpiphp: Slot [26] registered
Mar 13 00:40:21.892220 kernel: acpiphp: Slot [27] registered
Mar 13 00:40:21.892235 kernel: acpiphp: Slot [28] registered
Mar 13 00:40:21.892249 kernel: acpiphp: Slot [29] registered
Mar 13 00:40:21.892267 kernel: acpiphp: Slot [30] registered
Mar 13 00:40:21.892280 kernel: acpiphp: Slot [31] registered
Mar 13 00:40:21.892293 kernel: PCI host bridge to bus 0000:00
Mar 13 00:40:21.892457 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 13 00:40:21.892585 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 13 00:40:21.893874 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 13 00:40:21.894057 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Mar 13 00:40:21.894184 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Mar 13 00:40:21.894309 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 13 00:40:21.894482 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Mar 13 00:40:21.894652 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Mar 13 00:40:21.894829 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint
Mar 13 00:40:21.894972 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Mar 13 00:40:21.895104 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Mar 13 00:40:21.895242 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Mar 13 00:40:21.895368 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Mar 13 00:40:21.895498 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Mar 13 00:40:21.895624 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Mar 13 00:40:21.895767 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Mar 13 00:40:21.895909 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint
Mar 13 00:40:21.896043 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref]
Mar 13 00:40:21.896176 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref]
Mar 13 00:40:21.896306 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 13 00:40:21.896453 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint
Mar 13 00:40:21.896585 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff]
Mar 13 00:40:21.896761 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint
Mar 13 00:40:21.896891 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff]
Mar 13 00:40:21.896911 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 13 00:40:21.896928 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 13 00:40:21.896945 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 13 00:40:21.896967 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 13 00:40:21.896984 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar 13 00:40:21.897002 kernel: iommu: Default domain type: Translated
Mar 13 00:40:21.897019 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 13 00:40:21.897036 kernel: efivars: Registered efivars operations
Mar 13 00:40:21.897052 kernel: PCI: Using ACPI for IRQ routing
Mar 13 00:40:21.897069 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 13 00:40:21.897086 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff]
Mar 13 00:40:21.897102 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Mar 13 00:40:21.897121 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Mar 13 00:40:21.897263 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Mar 13 00:40:21.897392 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Mar 13 00:40:21.897518 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 13 00:40:21.897538 kernel: vgaarb: loaded
Mar 13 00:40:21.897554 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Mar 13 00:40:21.897569 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Mar 13 00:40:21.897585 kernel: clocksource: Switched to clocksource kvm-clock
Mar 13 00:40:21.897604 kernel: VFS: Disk quotas dquot_6.6.0
Mar 13 00:40:21.897619 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 13 00:40:21.897635 kernel: pnp: PnP ACPI init
Mar 13 00:40:21.897651 kernel: pnp: PnP ACPI: found 5 devices
Mar 13 00:40:21.897666 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 13 00:40:21.897700 kernel: NET: Registered PF_INET protocol family
Mar 13 00:40:21.897715 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 13 00:40:21.897731 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 13 00:40:21.897746 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 13 00:40:21.897766 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 13 00:40:21.897781 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Mar 13 00:40:21.897797 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 13 00:40:21.897813 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 13 00:40:21.897829 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 13 00:40:21.897845 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 13 00:40:21.897861 kernel: NET: Registered PF_XDP protocol family
Mar 13 00:40:21.898015 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 13 00:40:21.898140 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 13 00:40:21.898267 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 13 00:40:21.898389 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Mar 13 00:40:21.898508 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Mar 13 00:40:21.898648 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 13 00:40:21.898701 kernel: PCI: CLS 0 bytes, default 64
Mar 13 00:40:21.898718 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 13 00:40:21.898735 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Mar 13 00:40:21.898751 kernel: clocksource: Switched to clocksource tsc
Mar 13 00:40:21.898770 kernel: Initialise system trusted keyrings
Mar 13 00:40:21.898785 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Mar 13 00:40:21.898799 kernel: Key type asymmetric registered
Mar 13 00:40:21.898814 kernel: Asymmetric key parser 'x509' registered
Mar 13 00:40:21.898828 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 13 00:40:21.898843 kernel: io scheduler mq-deadline registered
Mar 13 00:40:21.898864 kernel: io scheduler kyber registered
Mar 13 00:40:21.898884 kernel: io scheduler bfq registered
Mar 13 00:40:21.898905 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 13 00:40:21.898932 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 13 00:40:21.898948 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 13 00:40:21.898964 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 13 00:40:21.898980 kernel: i8042: Warning: Keylock active
Mar 13 00:40:21.898995 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 13 00:40:21.899011 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 13 00:40:21.899180 kernel: rtc_cmos 00:00: RTC can wake from S4
Mar 13 00:40:21.899322 kernel: rtc_cmos 00:00: registered as rtc0
Mar 13 00:40:21.899451 kernel: rtc_cmos 00:00: setting system clock to 2026-03-13T00:40:21 UTC (1773362421)
Mar 13 00:40:21.899574 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Mar 13 00:40:21.899614 kernel: intel_pstate: CPU model not supported
Mar 13 00:40:21.899634 kernel: efifb: probing for efifb
Mar 13 00:40:21.899650 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Mar 13 00:40:21.899667 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Mar 13 00:40:21.899702 kernel: efifb: scrolling: redraw
Mar 13 00:40:21.899718 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 13 00:40:21.899735 kernel: Console: switching to colour frame buffer device 100x37
Mar 13 00:40:21.899755 kernel: fb0: EFI VGA frame buffer device
Mar 13 00:40:21.899771 kernel: pstore: Using crash dump compression: deflate
Mar 13 00:40:21.899787 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 13 00:40:21.899804 kernel: NET: Registered PF_INET6 protocol family
Mar 13 00:40:21.899820 kernel: Segment Routing with IPv6
Mar 13 00:40:21.899836 kernel: In-situ OAM (IOAM) with IPv6
Mar 13 00:40:21.899853 kernel: NET: Registered PF_PACKET protocol family
Mar 13 00:40:21.899869 kernel: Key type dns_resolver registered
Mar 13 00:40:21.899885 kernel: IPI shorthand broadcast: enabled
Mar 13 00:40:21.899904 kernel: sched_clock: Marking stable (2555002730, 170081708)->(2805386253, -80301815)
Mar 13 00:40:21.899920 kernel: registered taskstats version 1
Mar 13 00:40:21.899936 kernel: Loading compiled-in X.509 certificates
Mar 13 00:40:21.899953 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 5aff49df330f42445474818d085d5033fee752d8'
Mar 13 00:40:21.899969 kernel: Demotion targets for Node 0: null
Mar 13 00:40:21.899985 kernel: Key type .fscrypt registered
Mar 13 00:40:21.900001 kernel: Key type fscrypt-provisioning registered
Mar 13 00:40:21.900016 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 13 00:40:21.900035 kernel: ima: Allocated hash algorithm: sha1
Mar 13 00:40:21.900054 kernel: ima: No architecture policies found
Mar 13 00:40:21.900071 kernel: clk: Disabling unused clocks
Mar 13 00:40:21.900087 kernel: Warning: unable to open an initial console.
Mar 13 00:40:21.900103 kernel: Freeing unused kernel image (initmem) memory: 46200K
Mar 13 00:40:21.900120 kernel: Write protecting the kernel read-only data: 40960k
Mar 13 00:40:21.900140 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Mar 13 00:40:21.900159 kernel: Run /init as init process
Mar 13 00:40:21.900175 kernel: with arguments:
Mar 13 00:40:21.900191 kernel: /init
Mar 13 00:40:21.900207 kernel: with environment:
Mar 13 00:40:21.900223 kernel: HOME=/
Mar 13 00:40:21.900239 kernel: TERM=linux
Mar 13 00:40:21.900257 systemd[1]: Successfully made /usr/ read-only.
Mar 13 00:40:21.900278 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 13 00:40:21.900299 systemd[1]: Detected virtualization amazon.
Mar 13 00:40:21.900315 systemd[1]: Detected architecture x86-64.
Mar 13 00:40:21.900331 systemd[1]: Running in initrd.
Mar 13 00:40:21.900347 systemd[1]: No hostname configured, using default hostname.
Mar 13 00:40:21.900365 systemd[1]: Hostname set to .
Mar 13 00:40:21.900382 systemd[1]: Initializing machine ID from VM UUID.
Mar 13 00:40:21.900398 systemd[1]: Queued start job for default target initrd.target.
Mar 13 00:40:21.900418 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 00:40:21.900435 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 00:40:21.900453 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 13 00:40:21.900470 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 13 00:40:21.900487 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 13 00:40:21.900506 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 13 00:40:21.900524 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 13 00:40:21.900545 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 13 00:40:21.900562 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 00:40:21.900579 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 13 00:40:21.900596 systemd[1]: Reached target paths.target - Path Units.
Mar 13 00:40:21.900613 systemd[1]: Reached target slices.target - Slice Units.
Mar 13 00:40:21.900630 systemd[1]: Reached target swap.target - Swaps.
Mar 13 00:40:21.900647 systemd[1]: Reached target timers.target - Timer Units.
Mar 13 00:40:21.900665 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 13 00:40:21.900699 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 13 00:40:21.900719 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 13 00:40:21.900736 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 13 00:40:21.900753 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 00:40:21.900770 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 13 00:40:21.900787 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 00:40:21.900804 systemd[1]: Reached target sockets.target - Socket Units.
Mar 13 00:40:21.900821 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 13 00:40:21.900838 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 13 00:40:21.900859 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 13 00:40:21.900876 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Mar 13 00:40:21.900893 systemd[1]: Starting systemd-fsck-usr.service...
Mar 13 00:40:21.900910 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 13 00:40:21.900928 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 13 00:40:21.900945 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:40:21.900961 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 13 00:40:21.900982 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 00:40:21.901000 systemd[1]: Finished systemd-fsck-usr.service.
Mar 13 00:40:21.901043 systemd-journald[188]: Collecting audit messages is disabled.
Mar 13 00:40:21.901084 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 13 00:40:21.901102 systemd-journald[188]: Journal started
Mar 13 00:40:21.901136 systemd-journald[188]: Runtime Journal (/run/log/journal/ec23cc328b774aff001a58ea04fbec9c) is 4.7M, max 38.1M, 33.3M free.
Mar 13 00:40:21.906702 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 13 00:40:21.912883 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 13 00:40:21.916177 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:40:21.916603 systemd-modules-load[189]: Inserted module 'overlay'
Mar 13 00:40:21.922850 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 13 00:40:21.930196 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 13 00:40:21.933822 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 13 00:40:21.944642 systemd-tmpfiles[202]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Mar 13 00:40:21.958718 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 00:40:21.968917 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 00:40:21.973532 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 13 00:40:21.976710 kernel: Bridge firewalling registered
Mar 13 00:40:21.976914 systemd-modules-load[189]: Inserted module 'br_netfilter'
Mar 13 00:40:21.978740 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 13 00:40:21.981891 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 13 00:40:21.983709 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 13 00:40:21.987342 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 13 00:40:22.000324 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 13 00:40:22.004047 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 13 00:40:22.006852 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 13 00:40:22.011285 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:40:22.069289 systemd-resolved[237]: Positive Trust Anchors:
Mar 13 00:40:22.069805 systemd-resolved[237]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 13 00:40:22.069869 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 13 00:40:22.078234 systemd-resolved[237]: Defaulting to hostname 'linux'.
Mar 13 00:40:22.081510 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 13 00:40:22.082216 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 13 00:40:22.113709 kernel: SCSI subsystem initialized
Mar 13 00:40:22.123709 kernel: Loading iSCSI transport class v2.0-870.
Mar 13 00:40:22.134714 kernel: iscsi: registered transport (tcp)
Mar 13 00:40:22.156216 kernel: iscsi: registered transport (qla4xxx)
Mar 13 00:40:22.156299 kernel: QLogic iSCSI HBA Driver
Mar 13 00:40:22.174796 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 13 00:40:22.196464 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 00:40:22.199365 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 13 00:40:22.243615 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 13 00:40:22.245736 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 13 00:40:22.297705 kernel: raid6: avx512x4 gen() 17819 MB/s
Mar 13 00:40:22.315696 kernel: raid6: avx512x2 gen() 17775 MB/s
Mar 13 00:40:22.333699 kernel: raid6: avx512x1 gen() 17727 MB/s
Mar 13 00:40:22.351699 kernel: raid6: avx2x4 gen() 17506 MB/s
Mar 13 00:40:22.369698 kernel: raid6: avx2x2 gen() 17649 MB/s
Mar 13 00:40:22.388570 kernel: raid6: avx2x1 gen() 13551 MB/s
Mar 13 00:40:22.388634 kernel: raid6: using algorithm avx512x4 gen() 17819 MB/s
Mar 13 00:40:22.407210 kernel: raid6: .... xor() 7597 MB/s, rmw enabled
Mar 13 00:40:22.407284 kernel: raid6: using avx512x2 recovery algorithm
Mar 13 00:40:22.428715 kernel: xor: automatically using best checksumming function avx
Mar 13 00:40:22.592708 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 13 00:40:22.599930 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 13 00:40:22.602855 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 00:40:22.629523 systemd-udevd[439]: Using default interface naming scheme 'v255'.
Mar 13 00:40:22.636179 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 00:40:22.641467 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 13 00:40:22.667436 dracut-pre-trigger[446]: rd.md=0: removing MD RAID activation
Mar 13 00:40:22.694068 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 13 00:40:22.695940 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 13 00:40:22.755279 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 00:40:22.759880 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 13 00:40:22.854733 kernel: nvme nvme0: pci function 0000:00:04.0
Mar 13 00:40:22.858701 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Mar 13 00:40:22.869878 kernel: ena 0000:00:05.0: ENA device version: 0.10
Mar 13 00:40:22.870241 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Mar 13 00:40:22.876481 kernel: cryptd: max_cpu_qlen set to 1000
Mar 13 00:40:22.876547 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Mar 13 00:40:22.876791 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Mar 13 00:40:22.894606 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 13 00:40:22.894700 kernel: GPT:9289727 != 33554431
Mar 13 00:40:22.894723 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 13 00:40:22.894743 kernel: GPT:9289727 != 33554431
Mar 13 00:40:22.894762 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2
Mar 13 00:40:22.894790 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 13 00:40:22.894812 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:6f:22:88:94:35
Mar 13 00:40:22.895050 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 13 00:40:22.904582 (udev-worker)[491]: Network interface NamePolicy= disabled on kernel command line.
Mar 13 00:40:22.917691 kernel: AES CTR mode by8 optimization enabled
Mar 13 00:40:22.918219 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 00:40:22.918404 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:40:22.920806 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:40:22.923230 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:40:22.925488 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 13 00:40:22.938530 kernel: nvme nvme0: using unchecked data buffer
Mar 13 00:40:22.963076 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 00:40:22.964545 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:40:22.967160 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 13 00:40:22.983481 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:40:23.031424 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:40:23.061132 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Mar 13 00:40:23.119332 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Mar 13 00:40:23.120859 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 13 00:40:23.130773 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Mar 13 00:40:23.131346 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Mar 13 00:40:23.142826 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 13 00:40:23.143474 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 13 00:40:23.144754 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 00:40:23.145836 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 13 00:40:23.147570 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 13 00:40:23.151848 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 13 00:40:23.168213 disk-uuid[678]: Primary Header is updated.
Mar 13 00:40:23.168213 disk-uuid[678]: Secondary Entries is updated.
Mar 13 00:40:23.168213 disk-uuid[678]: Secondary Header is updated.
Mar 13 00:40:23.174721 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 13 00:40:23.177563 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 13 00:40:23.195736 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 13 00:40:24.191365 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 13 00:40:24.191440 disk-uuid[681]: The operation has completed successfully.
Mar 13 00:40:24.348882 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 13 00:40:24.349011 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 13 00:40:24.376764 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 13 00:40:24.395341 sh[944]: Success
Mar 13 00:40:24.423036 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 13 00:40:24.423112 kernel: device-mapper: uevent: version 1.0.3
Mar 13 00:40:24.424706 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Mar 13 00:40:24.436757 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Mar 13 00:40:24.535782 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 13 00:40:24.539799 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 13 00:40:24.557322 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 13 00:40:24.575712 kernel: BTRFS: device fsid 503642f8-c59c-4168-97a8-9c3603183fa3 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (967)
Mar 13 00:40:24.579648 kernel: BTRFS info (device dm-0): first mount of filesystem 503642f8-c59c-4168-97a8-9c3603183fa3
Mar 13 00:40:24.579738 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:40:24.606819 kernel: BTRFS info (device dm-0 state E): enabling ssd optimizations
Mar 13 00:40:24.606895 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Mar 13 00:40:24.609352 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Mar 13 00:40:24.623781 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 13 00:40:24.624856 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Mar 13 00:40:24.625539 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 13 00:40:24.626999 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 13 00:40:24.630385 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 13 00:40:24.668697 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1000)
Mar 13 00:40:24.673734 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:40:24.673806 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:40:24.696093 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 13 00:40:24.696164 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Mar 13 00:40:24.705714 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:40:24.706340 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 13 00:40:24.709107 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 13 00:40:24.745404 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 13 00:40:24.748079 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 13 00:40:24.799886 systemd-networkd[1136]: lo: Link UP
Mar 13 00:40:24.799898 systemd-networkd[1136]: lo: Gained carrier
Mar 13 00:40:24.801855 systemd-networkd[1136]: Enumeration completed
Mar 13 00:40:24.802473 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 13 00:40:24.802599 systemd-networkd[1136]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:40:24.802605 systemd-networkd[1136]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 13 00:40:24.804469 systemd[1]: Reached target network.target - Network.
Mar 13 00:40:24.807415 systemd-networkd[1136]: eth0: Link UP
Mar 13 00:40:24.807420 systemd-networkd[1136]: eth0: Gained carrier
Mar 13 00:40:24.807439 systemd-networkd[1136]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:40:24.820804 systemd-networkd[1136]: eth0: DHCPv4 address 172.31.30.203/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 13 00:40:25.123025 ignition[1093]: Ignition 2.22.0
Mar 13 00:40:25.123741 ignition[1093]: Stage: fetch-offline
Mar 13 00:40:25.123982 ignition[1093]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:40:25.123992 ignition[1093]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 13 00:40:25.124638 ignition[1093]: Ignition finished successfully
Mar 13 00:40:25.126526 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 13 00:40:25.128887 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 13 00:40:25.163422 ignition[1145]: Ignition 2.22.0
Mar 13 00:40:25.163440 ignition[1145]: Stage: fetch
Mar 13 00:40:25.163850 ignition[1145]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:40:25.163863 ignition[1145]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 13 00:40:25.163971 ignition[1145]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 13 00:40:25.182359 ignition[1145]: PUT result: OK
Mar 13 00:40:25.184349 ignition[1145]: parsed url from cmdline: ""
Mar 13 00:40:25.184356 ignition[1145]: no config URL provided
Mar 13 00:40:25.184363 ignition[1145]: reading system config file "/usr/lib/ignition/user.ign"
Mar 13 00:40:25.184743 ignition[1145]: no config at "/usr/lib/ignition/user.ign"
Mar 13 00:40:25.184770 ignition[1145]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 13 00:40:25.185640 ignition[1145]: PUT result: OK
Mar 13 00:40:25.185704 ignition[1145]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Mar 13 00:40:25.186294 ignition[1145]: GET result: OK
Mar 13 00:40:25.186391 ignition[1145]: parsing config with SHA512: 39bd0e57e8affdf07a0948022287fbccba456d9032f4cfda7e559d3eb98d38ec8ad2f3252ae5b9c8da0d22fbad47b2bbdadc238e5782be5102d56a04bdfd5bbb
Mar 13 00:40:25.191151 unknown[1145]: fetched base config from "system"
Mar 13 00:40:25.191180 unknown[1145]: fetched base config from "system"
Mar 13 00:40:25.191200 unknown[1145]: fetched user config from "aws"
Mar 13 00:40:25.193699 ignition[1145]: fetch: fetch complete
Mar 13 00:40:25.193717 ignition[1145]: fetch: fetch passed
Mar 13 00:40:25.193808 ignition[1145]: Ignition finished successfully
Mar 13 00:40:25.196767 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 13 00:40:25.198380 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 13 00:40:25.242617 ignition[1152]: Ignition 2.22.0
Mar 13 00:40:25.242633 ignition[1152]: Stage: kargs
Mar 13 00:40:25.243045 ignition[1152]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:40:25.243057 ignition[1152]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 13 00:40:25.243169 ignition[1152]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 13 00:40:25.244050 ignition[1152]: PUT result: OK
Mar 13 00:40:25.246331 ignition[1152]: kargs: kargs passed
Mar 13 00:40:25.246405 ignition[1152]: Ignition finished successfully
Mar 13 00:40:25.248490 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 13 00:40:25.249908 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 13 00:40:25.279462 ignition[1158]: Ignition 2.22.0
Mar 13 00:40:25.279479 ignition[1158]: Stage: disks
Mar 13 00:40:25.279867 ignition[1158]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:40:25.279879 ignition[1158]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 13 00:40:25.279986 ignition[1158]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 13 00:40:25.280792 ignition[1158]: PUT result: OK
Mar 13 00:40:25.285035 ignition[1158]: disks: disks passed
Mar 13 00:40:25.285137 ignition[1158]: Ignition finished successfully
Mar 13 00:40:25.287185 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 13 00:40:25.287853 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 13 00:40:25.288256 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 13 00:40:25.288805 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 13 00:40:25.289346 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 13 00:40:25.290129 systemd[1]: Reached target basic.target - Basic System.
Mar 13 00:40:25.291720 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 13 00:40:25.347198 systemd-fsck[1167]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Mar 13 00:40:25.351045 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 13 00:40:25.352926 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 13 00:40:25.509699 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 26348f72-0225-4c06-aedc-823e61beebc6 r/w with ordered data mode. Quota mode: none.
Mar 13 00:40:25.510878 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 13 00:40:25.512001 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 13 00:40:25.513764 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 13 00:40:25.516470 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 13 00:40:25.519281 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 13 00:40:25.519819 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 13 00:40:25.519856 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 13 00:40:25.528584 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 13 00:40:25.530895 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 13 00:40:25.543708 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1186)
Mar 13 00:40:25.546685 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:40:25.548692 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:40:25.554935 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 13 00:40:25.555010 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Mar 13 00:40:25.557593 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 13 00:40:25.835223 initrd-setup-root[1210]: cut: /sysroot/etc/passwd: No such file or directory
Mar 13 00:40:25.841306 initrd-setup-root[1217]: cut: /sysroot/etc/group: No such file or directory
Mar 13 00:40:25.846969 initrd-setup-root[1224]: cut: /sysroot/etc/shadow: No such file or directory
Mar 13 00:40:25.851651 initrd-setup-root[1231]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 13 00:40:26.001735 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 13 00:40:26.004058 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 13 00:40:26.006812 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 13 00:40:26.027293 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 13 00:40:26.029425 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:40:26.058860 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 13 00:40:26.065838 ignition[1299]: INFO : Ignition 2.22.0
Mar 13 00:40:26.065838 ignition[1299]: INFO : Stage: mount
Mar 13 00:40:26.067631 ignition[1299]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 00:40:26.067631 ignition[1299]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 13 00:40:26.067631 ignition[1299]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 13 00:40:26.067631 ignition[1299]: INFO : PUT result: OK
Mar 13 00:40:26.070123 ignition[1299]: INFO : mount: mount passed
Mar 13 00:40:26.070660 ignition[1299]: INFO : Ignition finished successfully
Mar 13 00:40:26.072101 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 13 00:40:26.073941 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 13 00:40:26.091583 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 13 00:40:26.124705 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1310)
Mar 13 00:40:26.128637 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:40:26.128715 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:40:26.136242 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 13 00:40:26.136323 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Mar 13 00:40:26.138404 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 13 00:40:26.172635 ignition[1327]: INFO : Ignition 2.22.0
Mar 13 00:40:26.172635 ignition[1327]: INFO : Stage: files
Mar 13 00:40:26.174165 ignition[1327]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 00:40:26.174165 ignition[1327]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 13 00:40:26.174165 ignition[1327]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 13 00:40:26.174165 ignition[1327]: INFO : PUT result: OK
Mar 13 00:40:26.176740 ignition[1327]: DEBUG : files: compiled without relabeling support, skipping
Mar 13 00:40:26.177589 ignition[1327]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 13 00:40:26.177589 ignition[1327]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 13 00:40:26.180713 ignition[1327]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 13 00:40:26.181713 ignition[1327]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 13 00:40:26.182515 ignition[1327]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 13 00:40:26.181729 unknown[1327]: wrote ssh authorized keys file for user: core
Mar 13 00:40:26.184746 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 13 00:40:26.185691 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 13 00:40:26.272762 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 13 00:40:26.448000 systemd-networkd[1136]: eth0: Gained IPv6LL
Mar 13 00:40:26.452438 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 13 00:40:26.453776 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 13 00:40:26.453776 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 13 00:40:26.453776 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 13 00:40:26.453776 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 13 00:40:26.453776 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 13 00:40:26.453776 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 13 00:40:26.453776 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 13 00:40:26.453776 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 13 00:40:26.459663 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 13 00:40:26.459663 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 13 00:40:26.459663 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 13 00:40:26.462493 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 13 00:40:26.462493 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 13 00:40:26.462493 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Mar 13 00:40:26.922354 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 13 00:40:27.442246 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 13 00:40:27.442246 ignition[1327]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 13 00:40:27.444721 ignition[1327]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 13 00:40:27.449111 ignition[1327]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 13 00:40:27.449111 ignition[1327]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 13 00:40:27.449111 ignition[1327]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 13 00:40:27.452993 ignition[1327]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 13 00:40:27.452993 ignition[1327]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 13 00:40:27.452993 ignition[1327]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 13 00:40:27.452993 ignition[1327]: INFO : files: files passed
Mar 13 00:40:27.452993 ignition[1327]: INFO : Ignition finished successfully
Mar 13 00:40:27.452616 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 13 00:40:27.454689 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 13 00:40:27.459900 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 13 00:40:27.470452 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 13 00:40:27.470902 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 13 00:40:27.485845 initrd-setup-root-after-ignition[1357]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 13 00:40:27.485845 initrd-setup-root-after-ignition[1357]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 13 00:40:27.489914 initrd-setup-root-after-ignition[1361]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 13 00:40:27.492069 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 13 00:40:27.492808 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 13 00:40:27.494905 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 13 00:40:27.554496 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 13 00:40:27.554615 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 13 00:40:27.555524 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 13 00:40:27.556266 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 13 00:40:27.557488 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 13 00:40:27.558854 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 13 00:40:27.597771 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 13 00:40:27.600027 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 13 00:40:27.619701 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 13 00:40:27.620367 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 00:40:27.621400 systemd[1]: Stopped target timers.target - Timer Units.
Mar 13 00:40:27.622397 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 13 00:40:27.622627 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 13 00:40:27.623742 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 13 00:40:27.624513 systemd[1]: Stopped target basic.target - Basic System.
Mar 13 00:40:27.625320 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 13 00:40:27.626273 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 13 00:40:27.626982 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 13 00:40:27.627723 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 13 00:40:27.628512 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 13 00:40:27.629256 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 13 00:40:27.630211 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 13 00:40:27.631176 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 13 00:40:27.632006 systemd[1]: Stopped target swap.target - Swaps.
Mar 13 00:40:27.632710 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 13 00:40:27.632946 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 13 00:40:27.634133 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 13 00:40:27.634927 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 00:40:27.635587 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 13 00:40:27.635741 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 00:40:27.636369 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 13 00:40:27.636585 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 13 00:40:27.637891 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 13 00:40:27.638348 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 13 00:40:27.638966 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 13 00:40:27.639124 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 13 00:40:27.642969 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 13 00:40:27.643399 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 13 00:40:27.643667 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 00:40:27.646973 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 13 00:40:27.649760 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 13 00:40:27.650080 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 00:40:27.651176 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 13 00:40:27.651390 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 13 00:40:27.657412 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 13 00:40:27.662890 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 13 00:40:27.690631 ignition[1381]: INFO : Ignition 2.22.0
Mar 13 00:40:27.691586 ignition[1381]: INFO : Stage: umount
Mar 13 00:40:27.693107 ignition[1381]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 00:40:27.693107 ignition[1381]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 13 00:40:27.693107 ignition[1381]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 13 00:40:27.692253 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 13 00:40:27.695305 ignition[1381]: INFO : PUT result: OK
Mar 13 00:40:27.696641 ignition[1381]: INFO : umount: umount passed
Mar 13 00:40:27.697177 ignition[1381]: INFO : Ignition finished successfully
Mar 13 00:40:27.698796 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 13 00:40:27.698980 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 13 00:40:27.700418 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 13 00:40:27.700536 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 13 00:40:27.701049 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 13 00:40:27.701115 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 13 00:40:27.701751 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 13 00:40:27.701817 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 13 00:40:27.702634 systemd[1]: Stopped target network.target - Network.
Mar 13 00:40:27.703060 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 13 00:40:27.703131 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 13 00:40:27.704015 systemd[1]: Stopped target paths.target - Path Units.
Mar 13 00:40:27.704665 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 13 00:40:27.708800 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 00:40:27.709423 systemd[1]: Stopped target slices.target - Slice Units.
Mar 13 00:40:27.711039 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 13 00:40:27.711853 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 13 00:40:27.711922 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 13 00:40:27.712510 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 13 00:40:27.712567 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 13 00:40:27.713166 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 13 00:40:27.713248 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 13 00:40:27.713873 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 13 00:40:27.713920 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 13 00:40:27.714719 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 13 00:40:27.715120 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 13 00:40:27.723493 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 13 00:40:27.723645 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 13 00:40:27.728244 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 13 00:40:27.728609 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 13 00:40:27.728835 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 13 00:40:27.731473 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 13 00:40:27.732358 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 13 00:40:27.732904 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 13 00:40:27.732962 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 00:40:27.734899 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 13 00:40:27.736163 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 13 00:40:27.736237 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 13 00:40:27.736862 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 13 00:40:27.736924 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 13 00:40:27.740366 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 13 00:40:27.740447 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 13 00:40:27.741875 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 13 00:40:27.741953 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 00:40:27.742910 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 00:40:27.745508 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 13 00:40:27.745599 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 13 00:40:27.760057 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 13 00:40:27.761047 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 00:40:27.764120 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 13 00:40:27.764288 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 13 00:40:27.765597 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 13 00:40:27.766219 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 13 00:40:27.766778 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 13 00:40:27.766822 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 00:40:27.767476 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 13 00:40:27.767539 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 13 00:40:27.768778 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 13 00:40:27.768839 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 13 00:40:27.769915 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 13 00:40:27.770092 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 13 00:40:27.772256 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 13 00:40:27.776287 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 13 00:40:27.776388 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 00:40:27.779421 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 13 00:40:27.779502 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 00:40:27.780403 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 00:40:27.780466 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:40:27.784147 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Mar 13 00:40:27.784230 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 13 00:40:27.784292 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 13 00:40:27.791423 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 13 00:40:27.791529 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 13 00:40:27.872970 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 13 00:40:27.873082 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 13 00:40:27.874424 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 13 00:40:27.875413 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 13 00:40:27.875516 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 13 00:40:27.877352 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 13 00:40:27.893530 systemd[1]: Switching root.
Mar 13 00:40:27.944444 systemd-journald[188]: Journal stopped
Mar 13 00:40:29.952261 systemd-journald[188]: Received SIGTERM from PID 1 (systemd).
Mar 13 00:40:29.952338 kernel: SELinux: policy capability network_peer_controls=1
Mar 13 00:40:29.952366 kernel: SELinux: policy capability open_perms=1
Mar 13 00:40:29.952386 kernel: SELinux: policy capability extended_socket_class=1
Mar 13 00:40:29.952406 kernel: SELinux: policy capability always_check_network=0
Mar 13 00:40:29.952425 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 13 00:40:29.952450 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 13 00:40:29.952472 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 13 00:40:29.952491 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 13 00:40:29.952534 kernel: SELinux: policy capability userspace_initial_context=0
Mar 13 00:40:29.952553 kernel: audit: type=1403 audit(1773362428.693:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 13 00:40:29.952583 systemd[1]: Successfully loaded SELinux policy in 70.950ms.
Mar 13 00:40:29.952611 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.476ms.
Mar 13 00:40:29.952633 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 13 00:40:29.952653 systemd[1]: Detected virtualization amazon.
Mar 13 00:40:29.970640 systemd[1]: Detected architecture x86-64.
Mar 13 00:40:29.970705 systemd[1]: Detected first boot.
Mar 13 00:40:29.970729 systemd[1]: Initializing machine ID from VM UUID.
Mar 13 00:40:29.970751 zram_generator::config[1425]: No configuration found.
Mar 13 00:40:29.970774 kernel: Guest personality initialized and is inactive
Mar 13 00:40:29.970796 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Mar 13 00:40:29.970817 kernel: Initialized host personality
Mar 13 00:40:29.970839 kernel: NET: Registered PF_VSOCK protocol family
Mar 13 00:40:29.970859 systemd[1]: Populated /etc with preset unit settings.
Mar 13 00:40:29.970886 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 13 00:40:29.970907 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 13 00:40:29.970929 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 13 00:40:29.970951 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 13 00:40:29.970973 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 13 00:40:29.970995 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 13 00:40:29.971016 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 13 00:40:29.971037 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 13 00:40:29.971059 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 13 00:40:29.971084 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 13 00:40:29.971105 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 13 00:40:29.971127 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 13 00:40:29.971150 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 00:40:29.971171 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 00:40:29.971193 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 13 00:40:29.971214 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 13 00:40:29.971237 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 13 00:40:29.971261 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 13 00:40:29.971293 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 13 00:40:29.971315 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 00:40:29.971336 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 13 00:40:29.971356 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 13 00:40:29.971376 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 13 00:40:29.971398 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 13 00:40:29.971419 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 13 00:40:29.971447 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 00:40:29.971468 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 13 00:40:29.971488 systemd[1]: Reached target slices.target - Slice Units.
Mar 13 00:40:29.971509 systemd[1]: Reached target swap.target - Swaps.
Mar 13 00:40:29.971529 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 13 00:40:29.971554 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 13 00:40:29.971575 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 13 00:40:29.971596 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 00:40:29.971617 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 13 00:40:29.971639 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 00:40:29.971662 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 13 00:40:29.976744 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 13 00:40:29.976778 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 13 00:40:29.976799 systemd[1]: Mounting media.mount - External Media Directory...
Mar 13 00:40:29.976820 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:40:29.976841 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 13 00:40:29.976861 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 13 00:40:29.976881 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 13 00:40:29.976910 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 13 00:40:29.976930 systemd[1]: Reached target machines.target - Containers.
Mar 13 00:40:29.976951 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 13 00:40:29.976972 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:40:29.976992 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 13 00:40:29.977013 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 13 00:40:29.977033 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 00:40:29.977052 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 13 00:40:29.977068 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 00:40:29.977088 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 13 00:40:29.977108 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 13 00:40:29.977128 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 13 00:40:29.977147 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 13 00:40:29.977166 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 13 00:40:29.977185 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 13 00:40:29.977202 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 13 00:40:29.977222 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:40:29.977245 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 13 00:40:29.977264 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 13 00:40:29.977284 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 13 00:40:29.977303 kernel: loop: module loaded
Mar 13 00:40:29.977324 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 13 00:40:29.977345 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 13 00:40:29.977371 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 13 00:40:29.977390 kernel: fuse: init (API version 7.41)
Mar 13 00:40:29.977410 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 13 00:40:29.977430 systemd[1]: Stopped verity-setup.service.
Mar 13 00:40:29.977451 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:40:29.977473 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 13 00:40:29.977493 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 13 00:40:29.977512 systemd[1]: Mounted media.mount - External Media Directory.
Mar 13 00:40:29.977532 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 13 00:40:29.977551 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 13 00:40:29.977571 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 13 00:40:29.977592 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 00:40:29.977612 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 13 00:40:29.977636 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 13 00:40:29.977655 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 00:40:29.978732 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 00:40:29.978768 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 00:40:29.978788 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 00:40:29.978809 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 13 00:40:29.978830 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 13 00:40:29.978850 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 13 00:40:29.978871 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 13 00:40:29.978898 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 13 00:40:29.978919 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 13 00:40:29.978940 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 13 00:40:29.978961 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 00:40:29.979028 systemd-journald[1511]: Collecting audit messages is disabled.
Mar 13 00:40:29.979066 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 13 00:40:29.979087 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 13 00:40:29.979110 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 13 00:40:29.979130 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 13 00:40:29.979149 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 13 00:40:29.979168 kernel: ACPI: bus type drm_connector registered
Mar 13 00:40:29.979188 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 13 00:40:29.979211 systemd-journald[1511]: Journal started
Mar 13 00:40:29.979248 systemd-journald[1511]: Runtime Journal (/run/log/journal/ec23cc328b774aff001a58ea04fbec9c) is 4.7M, max 38.1M, 33.3M free.
Mar 13 00:40:29.987722 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 13 00:40:29.511832 systemd[1]: Queued start job for default target multi-user.target.
Mar 13 00:40:29.524961 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Mar 13 00:40:29.525400 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 13 00:40:29.994521 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 13 00:40:29.998711 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:40:30.011151 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 13 00:40:30.011255 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 13 00:40:30.023265 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 13 00:40:30.023354 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 13 00:40:30.029375 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 13 00:40:30.040540 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 13 00:40:30.042412 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 13 00:40:30.049766 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 13 00:40:30.053889 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 13 00:40:30.054165 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 13 00:40:30.055991 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 00:40:30.056898 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 13 00:40:30.057509 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 13 00:40:30.059914 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 13 00:40:30.094585 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 13 00:40:30.105124 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 13 00:40:30.114325 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 13 00:40:30.115744 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 13 00:40:30.128702 kernel: loop0: detected capacity change from 0 to 72368
Mar 13 00:40:30.161011 systemd-journald[1511]: Time spent on flushing to /var/log/journal/ec23cc328b774aff001a58ea04fbec9c is 61.386ms for 1026 entries.
Mar 13 00:40:30.161011 systemd-journald[1511]: System Journal (/var/log/journal/ec23cc328b774aff001a58ea04fbec9c) is 8M, max 195.6M, 187.6M free.
Mar 13 00:40:30.236984 systemd-journald[1511]: Received client request to flush runtime journal.
Mar 13 00:40:30.237052 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 13 00:40:30.187802 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 13 00:40:30.192902 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 13 00:40:30.205894 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 13 00:40:30.246730 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 13 00:40:30.268530 systemd-tmpfiles[1573]: ACLs are not supported, ignoring.
Mar 13 00:40:30.268561 systemd-tmpfiles[1573]: ACLs are not supported, ignoring.
Mar 13 00:40:30.276704 kernel: loop1: detected capacity change from 0 to 128560
Mar 13 00:40:30.280116 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 00:40:30.419834 kernel: loop2: detected capacity change from 0 to 217752
Mar 13 00:40:30.527622 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 13 00:40:30.575694 kernel: loop3: detected capacity change from 0 to 110984
Mar 13 00:40:30.721784 kernel: loop4: detected capacity change from 0 to 72368
Mar 13 00:40:30.743708 kernel: loop5: detected capacity change from 0 to 128560
Mar 13 00:40:30.768705 kernel: loop6: detected capacity change from 0 to 217752
Mar 13 00:40:30.799706 kernel: loop7: detected capacity change from 0 to 110984
Mar 13 00:40:30.813721 (sd-merge)[1585]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Mar 13 00:40:30.816919 (sd-merge)[1585]: Merged extensions into '/usr'.
Mar 13 00:40:30.825800 systemd[1]: Reload requested from client PID 1540 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 13 00:40:30.825820 systemd[1]: Reloading...
Mar 13 00:40:30.940753 zram_generator::config[1611]: No configuration found.
Mar 13 00:40:31.182482 systemd[1]: Reloading finished in 355 ms.
Mar 13 00:40:31.218455 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 13 00:40:31.219325 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 13 00:40:31.229948 systemd[1]: Starting ensure-sysext.service...
Mar 13 00:40:31.233853 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 13 00:40:31.241067 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 00:40:31.267404 systemd-tmpfiles[1664]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 13 00:40:31.267447 systemd-tmpfiles[1664]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 13 00:40:31.267814 systemd-tmpfiles[1664]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 13 00:40:31.268214 systemd-tmpfiles[1664]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 13 00:40:31.271047 systemd[1]: Reload requested from client PID 1663 ('systemctl') (unit ensure-sysext.service)...
Mar 13 00:40:31.271070 systemd[1]: Reloading...
Mar 13 00:40:31.273619 systemd-tmpfiles[1664]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 13 00:40:31.276293 systemd-tmpfiles[1664]: ACLs are not supported, ignoring.
Mar 13 00:40:31.276386 systemd-tmpfiles[1664]: ACLs are not supported, ignoring.
Mar 13 00:40:31.291650 systemd-tmpfiles[1664]: Detected autofs mount point /boot during canonicalization of boot.
Mar 13 00:40:31.293833 systemd-tmpfiles[1664]: Skipping /boot
Mar 13 00:40:31.298367 systemd-udevd[1665]: Using default interface naming scheme 'v255'.
Mar 13 00:40:31.314944 systemd-tmpfiles[1664]: Detected autofs mount point /boot during canonicalization of boot.
Mar 13 00:40:31.314961 systemd-tmpfiles[1664]: Skipping /boot
Mar 13 00:40:31.398873 zram_generator::config[1690]: No configuration found.
Mar 13 00:40:31.458282 ldconfig[1536]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 13 00:40:31.715739 (udev-worker)[1736]: Network interface NamePolicy= disabled on kernel command line.
Mar 13 00:40:31.783697 kernel: mousedev: PS/2 mouse device common for all mice
Mar 13 00:40:31.795722 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 13 00:40:31.810612 kernel: ACPI: button: Power Button [PWRF]
Mar 13 00:40:31.810715 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Mar 13 00:40:31.812696 kernel: ACPI: button: Sleep Button [SLPF]
Mar 13 00:40:31.848304 systemd[1]: Reloading finished in 576 ms.
Mar 13 00:40:31.850691 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Mar 13 00:40:31.861776 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 00:40:31.864261 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 13 00:40:31.865424 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 00:40:31.901780 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 13 00:40:31.920896 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:40:31.924958 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 13 00:40:31.932078 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 13 00:40:31.933003 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:40:31.937112 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 00:40:31.946139 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 00:40:31.957212 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 13 00:40:31.958233 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:40:31.958790 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:40:31.967072 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 13 00:40:31.971784 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 13 00:40:31.980148 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 13 00:40:31.992062 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 13 00:40:31.992723 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:40:31.997896 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 00:40:31.998654 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 00:40:32.000444 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 00:40:32.001752 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 00:40:32.005439 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 13 00:40:32.014105 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:40:32.014410 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:40:32.021205 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 00:40:32.024251 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 00:40:32.024874 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:40:32.025039 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:40:32.025185 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:40:32.031750 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:40:32.032087 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:40:32.035432 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 13 00:40:32.036164 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:40:32.036312 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:40:32.036565 systemd[1]: Reached target time-set.target - System Time Set.
Mar 13 00:40:32.037362 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:40:32.049328 systemd[1]: Finished ensure-sysext.service.
Mar 13 00:40:32.058961 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 13 00:40:32.084606 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 13 00:40:32.100793 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 13 00:40:32.102745 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 13 00:40:32.128243 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 00:40:32.128525 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 00:40:32.129413 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 13 00:40:32.130071 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 00:40:32.130235 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 00:40:32.130916 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 13 00:40:32.131986 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 13 00:40:32.132217 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 13 00:40:32.154390 augenrules[1901]: No rules
Mar 13 00:40:32.155475 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 13 00:40:32.155822 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 13 00:40:32.169643 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 13 00:40:32.172562 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 13 00:40:32.182006 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 13 00:40:32.183837 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 13 00:40:32.223749 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 13 00:40:32.303275 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 13 00:40:32.358866 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:40:32.377329 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 13 00:40:32.389126 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 13 00:40:32.410780 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 00:40:32.411177 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:40:32.416633 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:40:32.455720 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 13 00:40:32.544042 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:40:32.548560 systemd-networkd[1838]: lo: Link UP
Mar 13 00:40:32.549174 systemd-networkd[1838]: lo: Gained carrier
Mar 13 00:40:32.551172 systemd-resolved[1841]: Positive Trust Anchors:
Mar 13 00:40:32.551191 systemd-resolved[1841]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 13 00:40:32.551241 systemd-resolved[1841]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 13 00:40:32.551682 systemd-networkd[1838]: Enumeration completed
Mar 13 00:40:32.551805 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 13 00:40:32.552624 systemd-networkd[1838]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:40:32.552630 systemd-networkd[1838]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 13 00:40:32.555951 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 13 00:40:32.558789 systemd-networkd[1838]: eth0: Link UP
Mar 13 00:40:32.559076 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 13 00:40:32.559482 systemd-resolved[1841]: Defaulting to hostname 'linux'.
Mar 13 00:40:32.560591 systemd-networkd[1838]: eth0: Gained carrier
Mar 13 00:40:32.560618 systemd-networkd[1838]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:40:32.562593 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 13 00:40:32.563809 systemd[1]: Reached target network.target - Network.
Mar 13 00:40:32.564780 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 13 00:40:32.565760 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 13 00:40:32.566369 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 13 00:40:32.567791 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 13 00:40:32.568389 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Mar 13 00:40:32.569487 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 13 00:40:32.570269 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 13 00:40:32.570834 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 13 00:40:32.571370 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 13 00:40:32.571405 systemd[1]: Reached target paths.target - Path Units.
Mar 13 00:40:32.571895 systemd[1]: Reached target timers.target - Timer Units.
Mar 13 00:40:32.572767 systemd-networkd[1838]: eth0: DHCPv4 address 172.31.30.203/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 13 00:40:32.573353 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 13 00:40:32.577882 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 13 00:40:32.582590 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 13 00:40:32.583549 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 13 00:40:32.584336 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 13 00:40:32.591503 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 13 00:40:32.592435 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 13 00:40:32.593719 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 13 00:40:32.595305 systemd[1]: Reached target sockets.target - Socket Units.
Mar 13 00:40:32.595814 systemd[1]: Reached target basic.target - Basic System.
Mar 13 00:40:32.596253 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 13 00:40:32.596298 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 13 00:40:32.602254 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 13 00:40:32.606944 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 13 00:40:32.611148 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 13 00:40:32.613084 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 13 00:40:32.622811 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 13 00:40:32.628873 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 13 00:40:32.629482 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 13 00:40:32.633310 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Mar 13 00:40:32.636537 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 13 00:40:32.641176 systemd[1]: Started ntpd.service - Network Time Service.
Mar 13 00:40:32.646895 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 13 00:40:32.651874 jq[1954]: false
Mar 13 00:40:32.652359 systemd[1]: Starting setup-oem.service - Setup OEM...
Mar 13 00:40:32.655929 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 13 00:40:32.663176 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 13 00:40:32.686235 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 13 00:40:32.689028 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 13 00:40:32.689799 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 13 00:40:32.691521 systemd[1]: Starting update-engine.service - Update Engine...
Mar 13 00:40:32.696001 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 13 00:40:32.705474 extend-filesystems[1955]: Found /dev/nvme0n1p6
Mar 13 00:40:32.700890 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 13 00:40:32.710054 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 13 00:40:32.711125 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 13 00:40:32.712587 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 13 00:40:32.716449 oslogin_cache_refresh[1956]: Refreshing passwd entry cache
Mar 13 00:40:32.731355 jq[1968]: true
Mar 13 00:40:32.731559 google_oslogin_nss_cache[1956]: oslogin_cache_refresh[1956]: Refreshing passwd entry cache
Mar 13 00:40:32.758762 extend-filesystems[1955]: Found /dev/nvme0n1p9
Mar 13 00:40:32.766792 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 13 00:40:32.767548 oslogin_cache_refresh[1956]: Failure getting users, quitting
Mar 13 00:40:32.773156 google_oslogin_nss_cache[1956]: oslogin_cache_refresh[1956]: Failure getting users, quitting
Mar 13 00:40:32.773156 google_oslogin_nss_cache[1956]: oslogin_cache_refresh[1956]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 13 00:40:32.773156 google_oslogin_nss_cache[1956]: oslogin_cache_refresh[1956]: Refreshing group entry cache
Mar 13 00:40:32.773156 google_oslogin_nss_cache[1956]: oslogin_cache_refresh[1956]: Failure getting groups, quitting
Mar 13 00:40:32.773156 google_oslogin_nss_cache[1956]: oslogin_cache_refresh[1956]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 13 00:40:32.767073 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 13 00:40:32.767571 oslogin_cache_refresh[1956]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 13 00:40:32.769475 oslogin_cache_refresh[1956]: Refreshing group entry cache
Mar 13 00:40:32.771037 oslogin_cache_refresh[1956]: Failure getting groups, quitting
Mar 13 00:40:32.771051 oslogin_cache_refresh[1956]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 13 00:40:32.774543 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Mar 13 00:40:32.781833 extend-filesystems[1955]: Checking size of /dev/nvme0n1p9
Mar 13 00:40:32.782745 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Mar 13 00:40:32.807807 (ntainerd)[1987]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 13 00:40:32.809062 extend-filesystems[1955]: Resized partition /dev/nvme0n1p9
Mar 13 00:40:32.833217 tar[1972]: linux-amd64/LICENSE
Mar 13 00:40:32.833559 extend-filesystems[2000]: resize2fs 1.47.3 (8-Jul-2025)
Mar 13 00:40:32.839766 tar[1972]: linux-amd64/helm
Mar 13 00:40:32.861621 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Mar 13 00:40:32.858569 systemd[1]: Finished setup-oem.service - Setup OEM.
Mar 13 00:40:32.861891 jq[1974]: true
Mar 13 00:40:32.875921 systemd[1]: motdgen.service: Deactivated successfully.
Mar 13 00:40:32.876533 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 13 00:40:32.890921 update_engine[1967]: I20260313 00:40:32.890199 1967 main.cc:92] Flatcar Update Engine starting
Mar 13 00:40:32.907831 dbus-daemon[1952]: [system] SELinux support is enabled
Mar 13 00:40:32.920247 dbus-daemon[1952]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1838 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Mar 13 00:40:33.004174 update_engine[1967]: I20260313 00:40:32.949924 1967 update_check_scheduler.cc:74] Next update check in 5m46s
Mar 13 00:40:32.908511 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 13 00:40:33.004703 ntpd[1958]: 13 Mar 00:40:32 ntpd[1958]: ntpd 4.2.8p18@1.4062-o Thu Mar 12 21:34:27 UTC 2026 (1): Starting
Mar 13 00:40:33.004703 ntpd[1958]: 13 Mar 00:40:32 ntpd[1958]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 13 00:40:33.004703 ntpd[1958]: 13 Mar 00:40:32 ntpd[1958]: ----------------------------------------------------
Mar 13 00:40:33.004703 ntpd[1958]: 13 Mar 00:40:32 ntpd[1958]: ntp-4 is maintained by Network Time Foundation,
Mar 13 00:40:33.004703 ntpd[1958]: 13 Mar 00:40:32 ntpd[1958]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 13 00:40:33.004703 ntpd[1958]: 13 Mar 00:40:32 ntpd[1958]: corporation. Support and training for ntp-4 are
Mar 13 00:40:33.004703 ntpd[1958]: 13 Mar 00:40:32 ntpd[1958]: available at https://www.nwtime.org/support
Mar 13 00:40:33.004703 ntpd[1958]: 13 Mar 00:40:32 ntpd[1958]: ----------------------------------------------------
Mar 13 00:40:32.928736 ntpd[1958]: ntpd 4.2.8p18@1.4062-o Thu Mar 12 21:34:27 UTC 2026 (1): Starting
Mar 13 00:40:32.915287 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 13 00:40:32.928807 ntpd[1958]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 13 00:40:32.915324 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 13 00:40:32.928818 ntpd[1958]: ----------------------------------------------------
Mar 13 00:40:32.917628 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 13 00:40:32.928828 ntpd[1958]: ntp-4 is maintained by Network Time Foundation,
Mar 13 00:40:32.917653 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 13 00:40:32.928837 ntpd[1958]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 13 00:40:33.017821 kernel: ntpd[1958]: segfault at 24 ip 000055c4d15c8aeb sp 00007ffdeacc0d10 error 4 in ntpd[68aeb,55c4d1566000+80000] likely on CPU 1 (core 0, socket 0)
Mar 13 00:40:33.017877 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9
Mar 13 00:40:33.003569 systemd-logind[1966]: Watching system buttons on /dev/input/event2 (Power Button)
Mar 13 00:40:32.928846 ntpd[1958]: corporation. Support and training for ntp-4 are
Mar 13 00:40:33.018306 coreos-metadata[1951]: Mar 13 00:40:33.017 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Mar 13 00:40:33.018534 ntpd[1958]: 13 Mar 00:40:33 ntpd[1958]: proto: precision = 0.072 usec (-24)
Mar 13 00:40:33.018534 ntpd[1958]: 13 Mar 00:40:33 ntpd[1958]: basedate set to 2026-02-28
Mar 13 00:40:33.018534 ntpd[1958]: 13 Mar 00:40:33 ntpd[1958]: gps base set to 2026-03-01 (week 2408)
Mar 13 00:40:33.018534 ntpd[1958]: 13 Mar 00:40:33 ntpd[1958]: Listen and drop on 0 v6wildcard [::]:123
Mar 13 00:40:33.018534 ntpd[1958]: 13 Mar 00:40:33 ntpd[1958]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 13 00:40:33.018534 ntpd[1958]: 13 Mar 00:40:33 ntpd[1958]: Listen normally on 2 lo 127.0.0.1:123
Mar 13 00:40:33.018534 ntpd[1958]: 13 Mar 00:40:33 ntpd[1958]: Listen normally on 3 eth0 172.31.30.203:123
Mar 13 00:40:33.018534 ntpd[1958]: 13 Mar 00:40:33 ntpd[1958]: Listen normally on 4 lo [::1]:123
Mar 13 00:40:33.018534 ntpd[1958]: 13 Mar 00:40:33 ntpd[1958]: bind(21) AF_INET6 [fe80::46f:22ff:fe88:9435%2]:123 flags 0x811 failed: Cannot assign requested address
Mar 13 00:40:33.018534 ntpd[1958]: 13 Mar 00:40:33 ntpd[1958]: unable to create socket on eth0 (5) for [fe80::46f:22ff:fe88:9435%2]:123
Mar 13 00:40:33.003593 systemd-logind[1966]: Watching system buttons on /dev/input/event3 (Sleep Button)
Mar 13 00:40:32.928855 ntpd[1958]: available at https://www.nwtime.org/support
Mar 13 00:40:33.103085 coreos-metadata[1951]: Mar 13 00:40:33.020 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Mar 13 00:40:33.103085 coreos-metadata[1951]: Mar 13 00:40:33.021 INFO Fetch successful
Mar 13 00:40:33.103085 coreos-metadata[1951]: Mar 13 00:40:33.021 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Mar 13 00:40:33.103085 coreos-metadata[1951]: Mar 13 00:40:33.022 INFO Fetch successful
Mar 13 00:40:33.103085 coreos-metadata[1951]: Mar 13 00:40:33.022 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Mar 13 00:40:33.103085 coreos-metadata[1951]: Mar 13 00:40:33.023 INFO Fetch successful
Mar 13 00:40:33.103085 coreos-metadata[1951]: Mar 13 00:40:33.023 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Mar 13 00:40:33.103085 coreos-metadata[1951]: Mar 13 00:40:33.023 INFO Fetch successful
Mar 13 00:40:33.103085 coreos-metadata[1951]: Mar 13 00:40:33.023 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Mar 13 00:40:33.103085 coreos-metadata[1951]: Mar 13 00:40:33.025 INFO Fetch failed with 404: resource not found
Mar 13 00:40:33.103085 coreos-metadata[1951]: Mar 13 00:40:33.025 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Mar 13 00:40:33.103085 coreos-metadata[1951]: Mar 13 00:40:33.028 INFO Fetch successful
Mar 13 00:40:33.103085 coreos-metadata[1951]: Mar 13 00:40:33.028 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Mar 13 00:40:33.103085 coreos-metadata[1951]: Mar 13 00:40:33.029 INFO Fetch successful
Mar 13 00:40:33.103085 coreos-metadata[1951]: Mar 13 00:40:33.029 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Mar 13 00:40:33.103085 coreos-metadata[1951]: Mar 13 00:40:33.030 INFO Fetch successful
Mar 13 00:40:33.103085 coreos-metadata[1951]: Mar 13 00:40:33.030 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Mar 13 00:40:33.103085 coreos-metadata[1951]: Mar 13 00:40:33.034 INFO Fetch successful
Mar 13 00:40:33.103085 coreos-metadata[1951]: Mar 13 00:40:33.034 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Mar 13 00:40:33.103085 coreos-metadata[1951]: Mar 13 00:40:33.039 INFO Fetch successful
Mar 13 00:40:33.003617 systemd-logind[1966]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 13 00:40:32.928863 ntpd[1958]: ----------------------------------------------------
Mar 13 00:40:33.004784 systemd-logind[1966]: New seat seat0.
Mar 13 00:40:32.943594 dbus-daemon[1952]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 13 00:40:33.006877 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Mar 13 00:40:33.174435 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Mar 13 00:40:33.174496 bash[2033]: Updated "/home/core/.ssh/authorized_keys"
Mar 13 00:40:33.015794 ntpd[1958]: proto: precision = 0.072 usec (-24)
Mar 13 00:40:33.007599 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 13 00:40:33.017163 ntpd[1958]: basedate set to 2026-02-28
Mar 13 00:40:33.008809 systemd[1]: Started update-engine.service - Update Engine.
Mar 13 00:40:33.017182 ntpd[1958]: gps base set to 2026-03-01 (week 2408)
Mar 13 00:40:33.016215 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 13 00:40:33.017317 ntpd[1958]: Listen and drop on 0 v6wildcard [::]:123
Mar 13 00:40:33.185656 extend-filesystems[2000]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Mar 13 00:40:33.185656 extend-filesystems[2000]: old_desc_blocks = 1, new_desc_blocks = 2
Mar 13 00:40:33.185656 extend-filesystems[2000]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Mar 13 00:40:33.064274 systemd-coredump[2041]: Process 1958 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing...
Mar 13 00:40:33.017344 ntpd[1958]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 13 00:40:33.217765 extend-filesystems[1955]: Resized filesystem in /dev/nvme0n1p9
Mar 13 00:40:33.070183 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump.
Mar 13 00:40:33.017533 ntpd[1958]: Listen normally on 2 lo 127.0.0.1:123
Mar 13 00:40:33.088602 systemd[1]: Started systemd-coredump@0-2041-0.service - Process Core Dump (PID 2041/UID 0).
Mar 13 00:40:33.017566 ntpd[1958]: Listen normally on 3 eth0 172.31.30.203:123
Mar 13 00:40:33.135532 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 13 00:40:33.017594 ntpd[1958]: Listen normally on 4 lo [::1]:123
Mar 13 00:40:33.146783 systemd[1]: Starting sshkeys.service...
Mar 13 00:40:33.017622 ntpd[1958]: bind(21) AF_INET6 [fe80::46f:22ff:fe88:9435%2]:123 flags 0x811 failed: Cannot assign requested address
Mar 13 00:40:33.148600 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 13 00:40:33.017641 ntpd[1958]: unable to create socket on eth0 (5) for [fe80::46f:22ff:fe88:9435%2]:123
Mar 13 00:40:33.152065 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 13 00:40:33.184788 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 13 00:40:33.185902 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 13 00:40:33.211084 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 13 00:40:33.217267 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 13 00:40:33.544067 coreos-metadata[2088]: Mar 13 00:40:33.543 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 13 00:40:33.546066 coreos-metadata[2088]: Mar 13 00:40:33.545 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Mar 13 00:40:33.551107 coreos-metadata[2088]: Mar 13 00:40:33.550 INFO Fetch successful Mar 13 00:40:33.551107 coreos-metadata[2088]: Mar 13 00:40:33.551 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 13 00:40:33.552180 coreos-metadata[2088]: Mar 13 00:40:33.551 INFO Fetch successful Mar 13 00:40:33.556772 unknown[2088]: wrote ssh authorized keys file for user: core Mar 13 00:40:33.577218 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Mar 13 00:40:33.607304 dbus-daemon[1952]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 13 00:40:33.618630 update-ssh-keys[2147]: Updated "/home/core/.ssh/authorized_keys" Mar 13 00:40:33.618982 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 13 00:40:33.622006 systemd[1]: Finished sshkeys.service. Mar 13 00:40:33.626276 dbus-daemon[1952]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=2038 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 13 00:40:33.657699 containerd[1987]: time="2026-03-13T00:40:33Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 13 00:40:33.661081 systemd[1]: Starting polkit.service - Authorization Manager... 
Mar 13 00:40:33.673664 containerd[1987]: time="2026-03-13T00:40:33.673287953Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Mar 13 00:40:33.700610 locksmithd[2039]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 13 00:40:33.734076 containerd[1987]: time="2026-03-13T00:40:33.729639597Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.493µs" Mar 13 00:40:33.734076 containerd[1987]: time="2026-03-13T00:40:33.733969102Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 13 00:40:33.734076 containerd[1987]: time="2026-03-13T00:40:33.734051240Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 13 00:40:33.735836 containerd[1987]: time="2026-03-13T00:40:33.734255988Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 13 00:40:33.735836 containerd[1987]: time="2026-03-13T00:40:33.734283389Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 13 00:40:33.735836 containerd[1987]: time="2026-03-13T00:40:33.734314573Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 13 00:40:33.735836 containerd[1987]: time="2026-03-13T00:40:33.734384702Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 13 00:40:33.735836 containerd[1987]: time="2026-03-13T00:40:33.734400115Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 13 00:40:33.739554 containerd[1987]: time="2026-03-13T00:40:33.738820311Z" level=info msg="skip loading plugin" error="path 
/var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 13 00:40:33.739554 containerd[1987]: time="2026-03-13T00:40:33.738863227Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 13 00:40:33.739554 containerd[1987]: time="2026-03-13T00:40:33.738886498Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 13 00:40:33.739554 containerd[1987]: time="2026-03-13T00:40:33.738899177Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 13 00:40:33.739554 containerd[1987]: time="2026-03-13T00:40:33.739045110Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 13 00:40:33.739554 containerd[1987]: time="2026-03-13T00:40:33.739344963Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 13 00:40:33.739554 containerd[1987]: time="2026-03-13T00:40:33.739396423Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 13 00:40:33.739554 containerd[1987]: time="2026-03-13T00:40:33.739413498Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 13 00:40:33.739554 containerd[1987]: time="2026-03-13T00:40:33.739452069Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 13 00:40:33.739935 containerd[1987]: time="2026-03-13T00:40:33.739865092Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt 
type=io.containerd.metadata.v1 Mar 13 00:40:33.739973 containerd[1987]: time="2026-03-13T00:40:33.739949430Z" level=info msg="metadata content store policy set" policy=shared Mar 13 00:40:33.746452 containerd[1987]: time="2026-03-13T00:40:33.746402903Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 13 00:40:33.746574 containerd[1987]: time="2026-03-13T00:40:33.746484277Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 13 00:40:33.746574 containerd[1987]: time="2026-03-13T00:40:33.746512154Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 13 00:40:33.746574 containerd[1987]: time="2026-03-13T00:40:33.746529954Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 13 00:40:33.746574 containerd[1987]: time="2026-03-13T00:40:33.746546501Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 13 00:40:33.746574 containerd[1987]: time="2026-03-13T00:40:33.746560247Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 13 00:40:33.746756 containerd[1987]: time="2026-03-13T00:40:33.746580170Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 13 00:40:33.746756 containerd[1987]: time="2026-03-13T00:40:33.746598011Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 13 00:40:33.746756 containerd[1987]: time="2026-03-13T00:40:33.746614906Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 13 00:40:33.746756 containerd[1987]: time="2026-03-13T00:40:33.746635140Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 
Mar 13 00:40:33.746756 containerd[1987]: time="2026-03-13T00:40:33.746649329Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 13 00:40:33.746756 containerd[1987]: time="2026-03-13T00:40:33.746681966Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 13 00:40:33.746963 containerd[1987]: time="2026-03-13T00:40:33.746842708Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 13 00:40:33.746963 containerd[1987]: time="2026-03-13T00:40:33.746867152Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 13 00:40:33.746963 containerd[1987]: time="2026-03-13T00:40:33.746887586Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 13 00:40:33.746963 containerd[1987]: time="2026-03-13T00:40:33.746905038Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 13 00:40:33.746963 containerd[1987]: time="2026-03-13T00:40:33.746927533Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 13 00:40:33.746963 containerd[1987]: time="2026-03-13T00:40:33.746943812Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 13 00:40:33.746963 containerd[1987]: time="2026-03-13T00:40:33.746960060Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 13 00:40:33.747189 containerd[1987]: time="2026-03-13T00:40:33.746974908Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 13 00:40:33.747189 containerd[1987]: time="2026-03-13T00:40:33.746992736Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 13 00:40:33.747189 containerd[1987]: time="2026-03-13T00:40:33.747008694Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 13 00:40:33.747189 containerd[1987]: time="2026-03-13T00:40:33.747023654Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 13 00:40:33.747189 containerd[1987]: time="2026-03-13T00:40:33.747111747Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 13 00:40:33.747189 containerd[1987]: time="2026-03-13T00:40:33.747130709Z" level=info msg="Start snapshots syncer" Mar 13 00:40:33.747189 containerd[1987]: time="2026-03-13T00:40:33.747151125Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 13 00:40:33.750990 containerd[1987]: time="2026-03-13T00:40:33.747496070Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"dis
ableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 13 00:40:33.750990 containerd[1987]: time="2026-03-13T00:40:33.747561607Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 13 00:40:33.751272 containerd[1987]: time="2026-03-13T00:40:33.747625700Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 13 00:40:33.754411 containerd[1987]: time="2026-03-13T00:40:33.753859693Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 13 00:40:33.754411 containerd[1987]: time="2026-03-13T00:40:33.753921139Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 13 00:40:33.754411 containerd[1987]: time="2026-03-13T00:40:33.753941179Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 13 00:40:33.754411 containerd[1987]: time="2026-03-13T00:40:33.753957600Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 13 00:40:33.754411 containerd[1987]: time="2026-03-13T00:40:33.753978881Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 13 00:40:33.754411 containerd[1987]: time="2026-03-13T00:40:33.754017802Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 13 00:40:33.754411 containerd[1987]: time="2026-03-13T00:40:33.754052159Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 13 00:40:33.754411 containerd[1987]: time="2026-03-13T00:40:33.754104161Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 13 00:40:33.754411 containerd[1987]: time="2026-03-13T00:40:33.754121518Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 13 00:40:33.754411 containerd[1987]: time="2026-03-13T00:40:33.754137923Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 13 00:40:33.754411 containerd[1987]: time="2026-03-13T00:40:33.754202163Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 13 00:40:33.754411 containerd[1987]: time="2026-03-13T00:40:33.754275894Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 13 00:40:33.754411 containerd[1987]: time="2026-03-13T00:40:33.754293095Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 13 00:40:33.754954 containerd[1987]: time="2026-03-13T00:40:33.754310013Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 13 00:40:33.754954 containerd[1987]: time="2026-03-13T00:40:33.754323919Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 13 00:40:33.754954 containerd[1987]: time="2026-03-13T00:40:33.754339936Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 13 
00:40:33.754954 containerd[1987]: time="2026-03-13T00:40:33.754369438Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 13 00:40:33.754954 containerd[1987]: time="2026-03-13T00:40:33.754393538Z" level=info msg="runtime interface created" Mar 13 00:40:33.754954 containerd[1987]: time="2026-03-13T00:40:33.754401875Z" level=info msg="created NRI interface" Mar 13 00:40:33.754954 containerd[1987]: time="2026-03-13T00:40:33.754412794Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 13 00:40:33.754954 containerd[1987]: time="2026-03-13T00:40:33.754431888Z" level=info msg="Connect containerd service" Mar 13 00:40:33.754954 containerd[1987]: time="2026-03-13T00:40:33.754462652Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 13 00:40:33.755551 containerd[1987]: time="2026-03-13T00:40:33.755396395Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 13 00:40:33.798048 systemd-coredump[2051]: Process 1958 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. 
Stack trace of thread 1958: #0 0x000055c4d15c8aeb n/a (ntpd + 0x68aeb) #1 0x000055c4d1571cdf n/a (ntpd + 0x11cdf) #2 0x000055c4d1572575 n/a (ntpd + 0x12575) #3 0x000055c4d156dd8a n/a (ntpd + 0xdd8a) #4 0x000055c4d156f5d3 n/a (ntpd + 0xf5d3) #5 0x000055c4d1577fd1 n/a (ntpd + 0x17fd1) #6 0x000055c4d1568c2d n/a (ntpd + 0x8c2d) #7 0x00007efef57f016c n/a (libc.so.6 + 0x2716c) #8 0x00007efef57f0229 __libc_start_main (libc.so.6 + 0x27229) #9 0x000055c4d1568c55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Mar 13 00:40:33.801399 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Mar 13 00:40:33.801572 systemd[1]: ntpd.service: Failed with result 'core-dump'. Mar 13 00:40:33.811140 systemd[1]: systemd-coredump@0-2041-0.service: Deactivated successfully. Mar 13 00:40:33.961071 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Mar 13 00:40:33.968880 systemd[1]: Started ntpd.service - Network Time Service. Mar 13 00:40:33.974129 polkitd[2154]: Started polkitd version 126 Mar 13 00:40:33.987112 polkitd[2154]: Loading rules from directory /etc/polkit-1/rules.d Mar 13 00:40:33.990890 polkitd[2154]: Loading rules from directory /run/polkit-1/rules.d Mar 13 00:40:33.991079 polkitd[2154]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Mar 13 00:40:33.992162 polkitd[2154]: Loading rules from directory /usr/local/share/polkit-1/rules.d Mar 13 00:40:33.992292 polkitd[2154]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Mar 13 00:40:33.992401 polkitd[2154]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 13 00:40:33.993740 polkitd[2154]: Finished loading, compiling and executing 2 rules Mar 13 00:40:33.994163 systemd[1]: Started polkit.service - Authorization Manager. 
Mar 13 00:40:34.009856 dbus-daemon[1952]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 13 00:40:34.012036 polkitd[2154]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 13 00:40:34.051646 systemd-hostnamed[2038]: Hostname set to (transient) Mar 13 00:40:34.052507 systemd-resolved[1841]: System hostname changed to 'ip-172-31-30-203'. Mar 13 00:40:34.059165 ntpd[2174]: ntpd 4.2.8p18@1.4062-o Thu Mar 12 21:34:27 UTC 2026 (1): Starting Mar 13 00:40:34.059750 ntpd[2174]: 13 Mar 00:40:34 ntpd[2174]: ntpd 4.2.8p18@1.4062-o Thu Mar 12 21:34:27 UTC 2026 (1): Starting Mar 13 00:40:34.059750 ntpd[2174]: 13 Mar 00:40:34 ntpd[2174]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 13 00:40:34.059750 ntpd[2174]: 13 Mar 00:40:34 ntpd[2174]: ---------------------------------------------------- Mar 13 00:40:34.059750 ntpd[2174]: 13 Mar 00:40:34 ntpd[2174]: ntp-4 is maintained by Network Time Foundation, Mar 13 00:40:34.059750 ntpd[2174]: 13 Mar 00:40:34 ntpd[2174]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 13 00:40:34.059750 ntpd[2174]: 13 Mar 00:40:34 ntpd[2174]: corporation. Support and training for ntp-4 are Mar 13 00:40:34.059750 ntpd[2174]: 13 Mar 00:40:34 ntpd[2174]: available at https://www.nwtime.org/support Mar 13 00:40:34.059750 ntpd[2174]: 13 Mar 00:40:34 ntpd[2174]: ---------------------------------------------------- Mar 13 00:40:34.059245 ntpd[2174]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 13 00:40:34.059255 ntpd[2174]: ---------------------------------------------------- Mar 13 00:40:34.059264 ntpd[2174]: ntp-4 is maintained by Network Time Foundation, Mar 13 00:40:34.059274 ntpd[2174]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 13 00:40:34.059283 ntpd[2174]: corporation. 
Support and training for ntp-4 are Mar 13 00:40:34.059291 ntpd[2174]: available at https://www.nwtime.org/support Mar 13 00:40:34.059300 ntpd[2174]: ---------------------------------------------------- Mar 13 00:40:34.064086 ntpd[2174]: proto: precision = 0.092 usec (-23) Mar 13 00:40:34.064278 ntpd[2174]: 13 Mar 00:40:34 ntpd[2174]: proto: precision = 0.092 usec (-23) Mar 13 00:40:34.064348 ntpd[2174]: basedate set to 2026-02-28 Mar 13 00:40:34.064763 ntpd[2174]: 13 Mar 00:40:34 ntpd[2174]: basedate set to 2026-02-28 Mar 13 00:40:34.064763 ntpd[2174]: 13 Mar 00:40:34 ntpd[2174]: gps base set to 2026-03-01 (week 2408) Mar 13 00:40:34.064763 ntpd[2174]: 13 Mar 00:40:34 ntpd[2174]: Listen and drop on 0 v6wildcard [::]:123 Mar 13 00:40:34.064763 ntpd[2174]: 13 Mar 00:40:34 ntpd[2174]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 13 00:40:34.064366 ntpd[2174]: gps base set to 2026-03-01 (week 2408) Mar 13 00:40:34.064463 ntpd[2174]: Listen and drop on 0 v6wildcard [::]:123 Mar 13 00:40:34.064490 ntpd[2174]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 13 00:40:34.064667 ntpd[2174]: Listen normally on 2 lo 127.0.0.1:123 Mar 13 00:40:34.066881 ntpd[2174]: 13 Mar 00:40:34 ntpd[2174]: Listen normally on 2 lo 127.0.0.1:123 Mar 13 00:40:34.066881 ntpd[2174]: 13 Mar 00:40:34 ntpd[2174]: Listen normally on 3 eth0 172.31.30.203:123 Mar 13 00:40:34.066881 ntpd[2174]: 13 Mar 00:40:34 ntpd[2174]: Listen normally on 4 lo [::1]:123 Mar 13 00:40:34.066881 ntpd[2174]: 13 Mar 00:40:34 ntpd[2174]: bind(21) AF_INET6 [fe80::46f:22ff:fe88:9435%2]:123 flags 0x811 failed: Cannot assign requested address Mar 13 00:40:34.066881 ntpd[2174]: 13 Mar 00:40:34 ntpd[2174]: unable to create socket on eth0 (5) for [fe80::46f:22ff:fe88:9435%2]:123 Mar 13 00:40:34.066746 ntpd[2174]: Listen normally on 3 eth0 172.31.30.203:123 Mar 13 00:40:34.066779 ntpd[2174]: Listen normally on 4 lo [::1]:123 Mar 13 00:40:34.066810 ntpd[2174]: bind(21) AF_INET6 [fe80::46f:22ff:fe88:9435%2]:123 flags 0x811 failed: 
Cannot assign requested address Mar 13 00:40:34.066831 ntpd[2174]: unable to create socket on eth0 (5) for [fe80::46f:22ff:fe88:9435%2]:123 Mar 13 00:40:34.068205 kernel: ntpd[2174]: segfault at 24 ip 000055f4b4063aeb sp 00007ffc4855d810 error 4 in ntpd[68aeb,55f4b4001000+80000] likely on CPU 0 (core 0, socket 0) Mar 13 00:40:34.071498 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9 Mar 13 00:40:34.096408 systemd-coredump[2188]: Process 2174 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Mar 13 00:40:34.105018 systemd[1]: Started systemd-coredump@1-2188-0.service - Process Core Dump (PID 2188/UID 0). Mar 13 00:40:34.128028 systemd-networkd[1838]: eth0: Gained IPv6LL Mar 13 00:40:34.131162 sshd_keygen[2003]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 13 00:40:34.133275 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 13 00:40:34.134456 containerd[1987]: time="2026-03-13T00:40:34.132785697Z" level=info msg="Start subscribing containerd event" Mar 13 00:40:34.134456 containerd[1987]: time="2026-03-13T00:40:34.132842954Z" level=info msg="Start recovering state" Mar 13 00:40:34.134456 containerd[1987]: time="2026-03-13T00:40:34.132953589Z" level=info msg="Start event monitor" Mar 13 00:40:34.134456 containerd[1987]: time="2026-03-13T00:40:34.132970184Z" level=info msg="Start cni network conf syncer for default" Mar 13 00:40:34.134456 containerd[1987]: time="2026-03-13T00:40:34.132981450Z" level=info msg="Start streaming server" Mar 13 00:40:34.134456 containerd[1987]: time="2026-03-13T00:40:34.132994875Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 13 00:40:34.134456 containerd[1987]: time="2026-03-13T00:40:34.133004372Z" level=info msg="runtime interface starting up..." 
Mar 13 00:40:34.134456 containerd[1987]: time="2026-03-13T00:40:34.133013819Z" level=info msg="starting plugins..." Mar 13 00:40:34.134456 containerd[1987]: time="2026-03-13T00:40:34.133032430Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 13 00:40:34.137889 systemd[1]: Reached target network-online.target - Network is Online. Mar 13 00:40:34.141700 containerd[1987]: time="2026-03-13T00:40:34.140662570Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 13 00:40:34.141700 containerd[1987]: time="2026-03-13T00:40:34.140805662Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 13 00:40:34.144051 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Mar 13 00:40:34.149488 containerd[1987]: time="2026-03-13T00:40:34.148293611Z" level=info msg="containerd successfully booted in 0.494899s" Mar 13 00:40:34.152279 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:40:34.160788 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 13 00:40:34.162260 systemd[1]: Started containerd.service - containerd container runtime. Mar 13 00:40:34.224500 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 13 00:40:34.227440 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 13 00:40:34.245184 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 13 00:40:34.252822 systemd[1]: issuegen.service: Deactivated successfully. Mar 13 00:40:34.253132 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 13 00:40:34.256398 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 13 00:40:34.303589 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 13 00:40:34.310607 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 13 00:40:34.318606 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
Mar 13 00:40:34.320870 systemd[1]: Reached target getty.target - Login Prompts. Mar 13 00:40:34.362919 amazon-ssm-agent[2196]: Initializing new seelog logger Mar 13 00:40:34.363418 amazon-ssm-agent[2196]: New Seelog Logger Creation Complete Mar 13 00:40:34.363546 amazon-ssm-agent[2196]: 2026/03/13 00:40:34 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 13 00:40:34.363546 amazon-ssm-agent[2196]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 13 00:40:34.364431 amazon-ssm-agent[2196]: 2026/03/13 00:40:34 processing appconfig overrides Mar 13 00:40:34.364516 amazon-ssm-agent[2196]: 2026/03/13 00:40:34 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 13 00:40:34.364516 amazon-ssm-agent[2196]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 13 00:40:34.364584 amazon-ssm-agent[2196]: 2026/03/13 00:40:34 processing appconfig overrides Mar 13 00:40:34.364921 amazon-ssm-agent[2196]: 2026/03/13 00:40:34 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 13 00:40:34.364921 amazon-ssm-agent[2196]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 13 00:40:34.365010 amazon-ssm-agent[2196]: 2026/03/13 00:40:34 processing appconfig overrides Mar 13 00:40:34.365759 amazon-ssm-agent[2196]: 2026-03-13 00:40:34.3644 INFO Proxy environment variables: Mar 13 00:40:34.369284 amazon-ssm-agent[2196]: 2026/03/13 00:40:34 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 13 00:40:34.369284 amazon-ssm-agent[2196]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 13 00:40:34.369384 amazon-ssm-agent[2196]: 2026/03/13 00:40:34 processing appconfig overrides Mar 13 00:40:34.383889 systemd-coredump[2189]: Process 2174 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. 
Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 2174: #0 0x000055f4b4063aeb n/a (ntpd + 0x68aeb) #1 0x000055f4b400ccdf n/a (ntpd + 0x11cdf) #2 0x000055f4b400d575 n/a (ntpd + 0x12575) #3 0x000055f4b4008d8a n/a (ntpd + 0xdd8a) #4 0x000055f4b400a5d3 n/a (ntpd + 0xf5d3) #5 0x000055f4b4012fd1 n/a (ntpd + 0x17fd1) #6 0x000055f4b4003c2d n/a (ntpd + 0x8c2d) #7 0x00007ff83dd1916c n/a (libc.so.6 + 0x2716c) #8 0x00007ff83dd19229 __libc_start_main (libc.so.6 + 0x27229) #9 0x000055f4b4003c55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Mar 13 00:40:34.388140 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Mar 13 00:40:34.388338 systemd[1]: ntpd.service: Failed with result 'core-dump'. Mar 13 00:40:34.392587 systemd[1]: systemd-coredump@1-2188-0.service: Deactivated successfully. Mar 13 00:40:34.466959 amazon-ssm-agent[2196]: 2026-03-13 00:40:34.3644 INFO http_proxy: Mar 13 00:40:34.538597 tar[1972]: linux-amd64/README.md Mar 13 00:40:34.541212 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 2. Mar 13 00:40:34.548328 systemd[1]: Started ntpd.service - Network Time Service. Mar 13 00:40:34.565814 amazon-ssm-agent[2196]: 2026-03-13 00:40:34.3644 INFO no_proxy: Mar 13 00:40:34.579380 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Mar 13 00:40:34.592183 ntpd[2236]: ntpd 4.2.8p18@1.4062-o Thu Mar 12 21:34:27 UTC 2026 (1): Starting Mar 13 00:40:34.592739 ntpd[2236]: 13 Mar 00:40:34 ntpd[2236]: ntpd 4.2.8p18@1.4062-o Thu Mar 12 21:34:27 UTC 2026 (1): Starting Mar 13 00:40:34.592942 ntpd[2236]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 13 00:40:34.593107 ntpd[2236]: 13 Mar 00:40:34 ntpd[2236]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 13 00:40:34.593153 ntpd[2236]: ---------------------------------------------------- Mar 13 00:40:34.594796 ntpd[2236]: 13 Mar 00:40:34 ntpd[2236]: ---------------------------------------------------- Mar 13 00:40:34.594796 ntpd[2236]: 13 Mar 00:40:34 ntpd[2236]: ntp-4 is maintained by Network Time Foundation, Mar 13 00:40:34.594796 ntpd[2236]: 13 Mar 00:40:34 ntpd[2236]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 13 00:40:34.594796 ntpd[2236]: 13 Mar 00:40:34 ntpd[2236]: corporation. Support and training for ntp-4 are Mar 13 00:40:34.594796 ntpd[2236]: 13 Mar 00:40:34 ntpd[2236]: available at https://www.nwtime.org/support Mar 13 00:40:34.594796 ntpd[2236]: 13 Mar 00:40:34 ntpd[2236]: ---------------------------------------------------- Mar 13 00:40:34.593164 ntpd[2236]: ntp-4 is maintained by Network Time Foundation, Mar 13 00:40:34.593173 ntpd[2236]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 13 00:40:34.593183 ntpd[2236]: corporation. 
Support and training for ntp-4 are Mar 13 00:40:34.593192 ntpd[2236]: available at https://www.nwtime.org/support Mar 13 00:40:34.593201 ntpd[2236]: ---------------------------------------------------- Mar 13 00:40:34.595903 ntpd[2236]: proto: precision = 0.086 usec (-23) Mar 13 00:40:34.596153 ntpd[2236]: basedate set to 2026-02-28 Mar 13 00:40:34.596165 ntpd[2236]: gps base set to 2026-03-01 (week 2408) Mar 13 00:40:34.596261 ntpd[2236]: Listen and drop on 0 v6wildcard [::]:123 Mar 13 00:40:34.596286 ntpd[2236]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 13 00:40:34.596456 ntpd[2236]: Listen normally on 2 lo 127.0.0.1:123 Mar 13 00:40:34.596480 ntpd[2236]: Listen normally on 3 eth0 172.31.30.203:123 Mar 13 00:40:34.596506 ntpd[2236]: Listen normally on 4 lo [::1]:123 Mar 13 00:40:34.596531 ntpd[2236]: Listen normally on 5 eth0 [fe80::46f:22ff:fe88:9435%2]:123 Mar 13 00:40:34.596555 ntpd[2236]: Listening on routing socket on fd #22 for interface updates Mar 13 00:40:34.603720 ntpd[2236]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 13 00:40:34.603753 ntpd[2236]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 13 00:40:34.663194 amazon-ssm-agent[2196]: 2026-03-13 00:40:34.3644 INFO https_proxy: Mar 13 00:40:34.761376 amazon-ssm-agent[2196]: 2026-03-13 00:40:34.3645 INFO Checking if agent identity type OnPrem can be assumed Mar 13 00:40:34.860668 amazon-ssm-agent[2196]: 2026-03-13 00:40:34.3648 INFO Checking if agent identity type EC2 can be assumed Mar 13 00:40:34.960410 amazon-ssm-agent[2196]: 2026-03-13 00:40:34.4462 INFO Agent will take identity from EC2 Mar 13 00:40:35.059584 amazon-ssm-agent[2196]: 2026-03-13 00:40:34.4476 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Mar 13 00:40:35.060690 amazon-ssm-agent[2196]: 2026/03/13 00:40:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 13 00:40:35.060690 amazon-ssm-agent[2196]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 13 00:40:35.060870 amazon-ssm-agent[2196]: 2026/03/13 00:40:35 processing appconfig overrides Mar 13 00:40:35.091123 amazon-ssm-agent[2196]: 2026-03-13 00:40:34.4476 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Mar 13 00:40:35.091123 amazon-ssm-agent[2196]: 2026-03-13 00:40:34.4476 INFO [amazon-ssm-agent] Starting Core Agent Mar 13 00:40:35.091123 amazon-ssm-agent[2196]: 2026-03-13 00:40:34.4476 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Mar 13 00:40:35.091123 amazon-ssm-agent[2196]: 2026-03-13 00:40:34.4476 INFO [Registrar] Starting registrar module Mar 13 00:40:35.091123 amazon-ssm-agent[2196]: 2026-03-13 00:40:34.4507 INFO [EC2Identity] Checking disk for registration info Mar 13 00:40:35.091123 amazon-ssm-agent[2196]: 2026-03-13 00:40:34.4507 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Mar 13 00:40:35.091123 amazon-ssm-agent[2196]: 2026-03-13 00:40:34.4508 INFO [EC2Identity] Generating registration keypair Mar 13 00:40:35.091123 amazon-ssm-agent[2196]: 2026-03-13 00:40:35.0269 INFO [EC2Identity] Checking write access before registering Mar 13 00:40:35.091123 amazon-ssm-agent[2196]: 2026-03-13 00:40:35.0273 INFO [EC2Identity] Registering EC2 instance with Systems Manager Mar 13 00:40:35.091123 amazon-ssm-agent[2196]: 2026-03-13 00:40:35.0604 INFO [EC2Identity] EC2 registration was successful. Mar 13 00:40:35.091123 amazon-ssm-agent[2196]: 2026-03-13 00:40:35.0605 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Mar 13 00:40:35.091123 amazon-ssm-agent[2196]: 2026-03-13 00:40:35.0605 INFO [CredentialRefresher] credentialRefresher has started Mar 13 00:40:35.091123 amazon-ssm-agent[2196]: 2026-03-13 00:40:35.0606 INFO [CredentialRefresher] Starting credentials refresher loop Mar 13 00:40:35.091123 amazon-ssm-agent[2196]: 2026-03-13 00:40:35.0903 INFO EC2RoleProvider Successfully connected with instance profile role credentials Mar 13 00:40:35.091123 amazon-ssm-agent[2196]: 2026-03-13 00:40:35.0907 INFO [CredentialRefresher] Credentials ready Mar 13 00:40:35.158641 amazon-ssm-agent[2196]: 2026-03-13 00:40:35.0909 INFO [CredentialRefresher] Next credential rotation will be in 29.99999045295 minutes Mar 13 00:40:35.652485 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 13 00:40:35.654807 systemd[1]: Started sshd@0-172.31.30.203:22-20.161.92.111:41658.service - OpenSSH per-connection server daemon (20.161.92.111:41658). Mar 13 00:40:36.102866 amazon-ssm-agent[2196]: 2026-03-13 00:40:36.1027 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Mar 13 00:40:36.142486 sshd[2242]: Accepted publickey for core from 20.161.92.111 port 41658 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs Mar 13 00:40:36.146267 sshd-session[2242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:40:36.168092 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 13 00:40:36.174789 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 13 00:40:36.183644 systemd-logind[1966]: New session 1 of user core. Mar 13 00:40:36.202745 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 13 00:40:36.203792 amazon-ssm-agent[2196]: 2026-03-13 00:40:36.1052 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2247) started Mar 13 00:40:36.208694 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 13 00:40:36.225365 (systemd)[2254]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 13 00:40:36.230251 systemd-logind[1966]: New session c1 of user core. Mar 13 00:40:36.306021 amazon-ssm-agent[2196]: 2026-03-13 00:40:36.1053 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Mar 13 00:40:36.380425 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:40:36.383597 systemd[1]: Reached target multi-user.target - Multi-User System. 
Mar 13 00:40:36.400180 (kubelet)[2271]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:40:36.450426 systemd[2254]: Queued start job for default target default.target. Mar 13 00:40:36.458240 systemd[2254]: Created slice app.slice - User Application Slice. Mar 13 00:40:36.458286 systemd[2254]: Reached target paths.target - Paths. Mar 13 00:40:36.458966 systemd[2254]: Reached target timers.target - Timers. Mar 13 00:40:36.460847 systemd[2254]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 13 00:40:36.483409 systemd[2254]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 13 00:40:36.484584 systemd[2254]: Reached target sockets.target - Sockets. Mar 13 00:40:36.484740 systemd[2254]: Reached target basic.target - Basic System. Mar 13 00:40:36.484843 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 13 00:40:36.485328 systemd[2254]: Reached target default.target - Main User Target. Mar 13 00:40:36.485379 systemd[2254]: Startup finished in 243ms. Mar 13 00:40:36.496944 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 13 00:40:36.499529 systemd[1]: Startup finished in 2.615s (kernel) + 7.047s (initrd) + 7.874s (userspace) = 17.537s. Mar 13 00:40:36.748522 systemd[1]: Started sshd@1-172.31.30.203:22-20.161.92.111:41664.service - OpenSSH per-connection server daemon (20.161.92.111:41664). Mar 13 00:40:37.186655 sshd[2286]: Accepted publickey for core from 20.161.92.111 port 41664 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs Mar 13 00:40:37.188727 sshd-session[2286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:40:37.195623 systemd-logind[1966]: New session 2 of user core. Mar 13 00:40:37.199886 systemd[1]: Started session-2.scope - Session 2 of User core. 
Mar 13 00:40:37.331352 kubelet[2271]: E0313 00:40:37.331283 2271 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:40:37.334096 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:40:37.334292 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:40:37.334659 systemd[1]: kubelet.service: Consumed 994ms CPU time, 254.4M memory peak. Mar 13 00:40:37.422840 sshd[2289]: Connection closed by 20.161.92.111 port 41664 Mar 13 00:40:37.423956 sshd-session[2286]: pam_unix(sshd:session): session closed for user core Mar 13 00:40:37.428777 systemd-logind[1966]: Session 2 logged out. Waiting for processes to exit. Mar 13 00:40:37.429723 systemd[1]: sshd@1-172.31.30.203:22-20.161.92.111:41664.service: Deactivated successfully. Mar 13 00:40:37.431604 systemd[1]: session-2.scope: Deactivated successfully. Mar 13 00:40:37.433581 systemd-logind[1966]: Removed session 2. Mar 13 00:40:37.537470 systemd[1]: Started sshd@2-172.31.30.203:22-20.161.92.111:41674.service - OpenSSH per-connection server daemon (20.161.92.111:41674). Mar 13 00:40:37.977757 sshd[2298]: Accepted publickey for core from 20.161.92.111 port 41674 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs Mar 13 00:40:37.978431 sshd-session[2298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:40:37.984416 systemd-logind[1966]: New session 3 of user core. Mar 13 00:40:37.989899 systemd[1]: Started session-3.scope - Session 3 of User core. 
Mar 13 00:40:38.207765 sshd[2301]: Connection closed by 20.161.92.111 port 41674 Mar 13 00:40:38.208926 sshd-session[2298]: pam_unix(sshd:session): session closed for user core Mar 13 00:40:38.213579 systemd[1]: sshd@2-172.31.30.203:22-20.161.92.111:41674.service: Deactivated successfully. Mar 13 00:40:38.215630 systemd[1]: session-3.scope: Deactivated successfully. Mar 13 00:40:38.216629 systemd-logind[1966]: Session 3 logged out. Waiting for processes to exit. Mar 13 00:40:38.218520 systemd-logind[1966]: Removed session 3. Mar 13 00:40:38.296178 systemd[1]: Started sshd@3-172.31.30.203:22-20.161.92.111:41680.service - OpenSSH per-connection server daemon (20.161.92.111:41680). Mar 13 00:40:38.724844 sshd[2307]: Accepted publickey for core from 20.161.92.111 port 41680 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs Mar 13 00:40:38.726630 sshd-session[2307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:40:38.732656 systemd-logind[1966]: New session 4 of user core. Mar 13 00:40:38.736917 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 13 00:40:38.961690 sshd[2310]: Connection closed by 20.161.92.111 port 41680 Mar 13 00:40:38.962951 sshd-session[2307]: pam_unix(sshd:session): session closed for user core Mar 13 00:40:38.967598 systemd[1]: sshd@3-172.31.30.203:22-20.161.92.111:41680.service: Deactivated successfully. Mar 13 00:40:38.969363 systemd[1]: session-4.scope: Deactivated successfully. Mar 13 00:40:38.970448 systemd-logind[1966]: Session 4 logged out. Waiting for processes to exit. Mar 13 00:40:38.972142 systemd-logind[1966]: Removed session 4. Mar 13 00:40:39.054111 systemd[1]: Started sshd@4-172.31.30.203:22-20.161.92.111:49816.service - OpenSSH per-connection server daemon (20.161.92.111:49816). 
Mar 13 00:40:39.494815 sshd[2316]: Accepted publickey for core from 20.161.92.111 port 49816 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs Mar 13 00:40:39.496159 sshd-session[2316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:40:39.501294 systemd-logind[1966]: New session 5 of user core. Mar 13 00:40:39.509907 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 13 00:40:39.669749 sudo[2320]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 13 00:40:39.670221 sudo[2320]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:40:39.686239 sudo[2320]: pam_unix(sudo:session): session closed for user root Mar 13 00:40:39.764516 sshd[2319]: Connection closed by 20.161.92.111 port 49816 Mar 13 00:40:39.765407 sshd-session[2316]: pam_unix(sshd:session): session closed for user core Mar 13 00:40:39.770868 systemd[1]: sshd@4-172.31.30.203:22-20.161.92.111:49816.service: Deactivated successfully. Mar 13 00:40:39.772903 systemd[1]: session-5.scope: Deactivated successfully. Mar 13 00:40:39.773781 systemd-logind[1966]: Session 5 logged out. Waiting for processes to exit. Mar 13 00:40:39.775710 systemd-logind[1966]: Removed session 5. Mar 13 00:40:39.852800 systemd[1]: Started sshd@5-172.31.30.203:22-20.161.92.111:49832.service - OpenSSH per-connection server daemon (20.161.92.111:49832). Mar 13 00:40:40.284137 sshd[2326]: Accepted publickey for core from 20.161.92.111 port 49832 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs Mar 13 00:40:40.285542 sshd-session[2326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:40:40.291728 systemd-logind[1966]: New session 6 of user core. Mar 13 00:40:40.300914 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 13 00:40:40.443707 sudo[2331]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 13 00:40:40.444068 sudo[2331]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:40:40.449458 sudo[2331]: pam_unix(sudo:session): session closed for user root Mar 13 00:40:40.455248 sudo[2330]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 13 00:40:40.455608 sudo[2330]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:40:40.465780 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 13 00:40:40.506635 augenrules[2353]: No rules Mar 13 00:40:40.507955 systemd[1]: audit-rules.service: Deactivated successfully. Mar 13 00:40:40.508168 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 13 00:40:40.509472 sudo[2330]: pam_unix(sudo:session): session closed for user root Mar 13 00:40:40.587026 sshd[2329]: Connection closed by 20.161.92.111 port 49832 Mar 13 00:40:40.588849 sshd-session[2326]: pam_unix(sshd:session): session closed for user core Mar 13 00:40:40.592490 systemd[1]: sshd@5-172.31.30.203:22-20.161.92.111:49832.service: Deactivated successfully. Mar 13 00:40:40.594875 systemd[1]: session-6.scope: Deactivated successfully. Mar 13 00:40:40.597188 systemd-logind[1966]: Session 6 logged out. Waiting for processes to exit. Mar 13 00:40:40.598467 systemd-logind[1966]: Removed session 6. Mar 13 00:40:40.679581 systemd[1]: Started sshd@6-172.31.30.203:22-20.161.92.111:49840.service - OpenSSH per-connection server daemon (20.161.92.111:49840). 
Mar 13 00:40:41.111565 sshd[2362]: Accepted publickey for core from 20.161.92.111 port 49840 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs Mar 13 00:40:41.112907 sshd-session[2362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:40:41.118840 systemd-logind[1966]: New session 7 of user core. Mar 13 00:40:41.124956 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 13 00:40:41.271268 sudo[2366]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 13 00:40:41.271683 sudo[2366]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:40:42.930299 systemd-resolved[1841]: Clock change detected. Flushing caches. Mar 13 00:40:43.031261 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 13 00:40:43.043932 (dockerd)[2384]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 13 00:40:43.350877 dockerd[2384]: time="2026-03-13T00:40:43.350479103Z" level=info msg="Starting up" Mar 13 00:40:43.352833 dockerd[2384]: time="2026-03-13T00:40:43.352690484Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 13 00:40:43.366233 dockerd[2384]: time="2026-03-13T00:40:43.366140036Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Mar 13 00:40:43.499164 dockerd[2384]: time="2026-03-13T00:40:43.499114292Z" level=info msg="Loading containers: start." Mar 13 00:40:43.511526 kernel: Initializing XFRM netlink socket Mar 13 00:40:43.739005 (udev-worker)[2405]: Network interface NamePolicy= disabled on kernel command line. Mar 13 00:40:43.784638 systemd-networkd[1838]: docker0: Link UP Mar 13 00:40:43.789771 dockerd[2384]: time="2026-03-13T00:40:43.789714955Z" level=info msg="Loading containers: done." 
Mar 13 00:40:43.806784 dockerd[2384]: time="2026-03-13T00:40:43.806733364Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 13 00:40:43.806963 dockerd[2384]: time="2026-03-13T00:40:43.806838063Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Mar 13 00:40:43.806963 dockerd[2384]: time="2026-03-13T00:40:43.806942082Z" level=info msg="Initializing buildkit" Mar 13 00:40:43.832441 dockerd[2384]: time="2026-03-13T00:40:43.832374667Z" level=info msg="Completed buildkit initialization" Mar 13 00:40:43.841181 dockerd[2384]: time="2026-03-13T00:40:43.841118829Z" level=info msg="Daemon has completed initialization" Mar 13 00:40:43.841329 dockerd[2384]: time="2026-03-13T00:40:43.841181561Z" level=info msg="API listen on /run/docker.sock" Mar 13 00:40:43.841596 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 13 00:40:44.721546 containerd[1987]: time="2026-03-13T00:40:44.721500497Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\"" Mar 13 00:40:45.299112 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2375566569.mount: Deactivated successfully. 
Mar 13 00:40:46.744573 containerd[1987]: time="2026-03-13T00:40:46.744522054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:46.745837 containerd[1987]: time="2026-03-13T00:40:46.745789445Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=27696467" Mar 13 00:40:46.747156 containerd[1987]: time="2026-03-13T00:40:46.747098596Z" level=info msg="ImageCreate event name:\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:46.749638 containerd[1987]: time="2026-03-13T00:40:46.749585856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:46.750949 containerd[1987]: time="2026-03-13T00:40:46.750642406Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"27693066\" in 2.029100513s" Mar 13 00:40:46.750949 containerd[1987]: time="2026-03-13T00:40:46.750683649Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\"" Mar 13 00:40:46.751379 containerd[1987]: time="2026-03-13T00:40:46.751330978Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\"" Mar 13 00:40:48.298382 containerd[1987]: time="2026-03-13T00:40:48.298318753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:48.312633 containerd[1987]: time="2026-03-13T00:40:48.312575867Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=21450700" Mar 13 00:40:48.326764 containerd[1987]: time="2026-03-13T00:40:48.326675151Z" level=info msg="ImageCreate event name:\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:48.334933 containerd[1987]: time="2026-03-13T00:40:48.334861707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:48.336124 containerd[1987]: time="2026-03-13T00:40:48.335891258Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"23142311\" in 1.584409338s" Mar 13 00:40:48.336124 containerd[1987]: time="2026-03-13T00:40:48.335931338Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\"" Mar 13 00:40:48.336723 containerd[1987]: time="2026-03-13T00:40:48.336687545Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\"" Mar 13 00:40:48.859151 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 13 00:40:48.861213 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:40:49.152597 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 13 00:40:49.161818 (kubelet)[2664]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:40:49.230936 kubelet[2664]: E0313 00:40:49.230888 2664 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:40:49.236049 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:40:49.236251 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:40:49.237585 systemd[1]: kubelet.service: Consumed 198ms CPU time, 108.7M memory peak. Mar 13 00:40:49.605456 containerd[1987]: time="2026-03-13T00:40:49.604900353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:49.606646 containerd[1987]: time="2026-03-13T00:40:49.606559827Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=15548429" Mar 13 00:40:49.615502 containerd[1987]: time="2026-03-13T00:40:49.615455345Z" level=info msg="ImageCreate event name:\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:49.619082 containerd[1987]: time="2026-03-13T00:40:49.619009483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:49.620204 containerd[1987]: time="2026-03-13T00:40:49.619987380Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id 
\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"17240058\" in 1.283265044s" Mar 13 00:40:49.620204 containerd[1987]: time="2026-03-13T00:40:49.620024567Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\"" Mar 13 00:40:49.620988 containerd[1987]: time="2026-03-13T00:40:49.620943581Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\"" Mar 13 00:40:50.781127 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount301381807.mount: Deactivated successfully. Mar 13 00:40:51.166731 containerd[1987]: time="2026-03-13T00:40:51.166674408Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:51.167691 containerd[1987]: time="2026-03-13T00:40:51.167646331Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685312" Mar 13 00:40:51.169087 containerd[1987]: time="2026-03-13T00:40:51.169023805Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:51.171067 containerd[1987]: time="2026-03-13T00:40:51.171008473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:51.171753 containerd[1987]: time="2026-03-13T00:40:51.171713848Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag 
\"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 1.550732116s" Mar 13 00:40:51.171753 containerd[1987]: time="2026-03-13T00:40:51.171746981Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\"" Mar 13 00:40:51.172283 containerd[1987]: time="2026-03-13T00:40:51.172251397Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Mar 13 00:40:51.683298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2270629231.mount: Deactivated successfully. Mar 13 00:40:52.928602 containerd[1987]: time="2026-03-13T00:40:52.928546536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:52.929991 containerd[1987]: time="2026-03-13T00:40:52.929942941Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556542" Mar 13 00:40:52.931369 containerd[1987]: time="2026-03-13T00:40:52.930975699Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:52.933609 containerd[1987]: time="2026-03-13T00:40:52.933556408Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:52.934902 containerd[1987]: time="2026-03-13T00:40:52.934734431Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 1.762415039s" Mar 13 00:40:52.934902 containerd[1987]: time="2026-03-13T00:40:52.934771268Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Mar 13 00:40:52.935723 containerd[1987]: time="2026-03-13T00:40:52.935539383Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 13 00:40:53.381022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3369371888.mount: Deactivated successfully. Mar 13 00:40:53.387444 containerd[1987]: time="2026-03-13T00:40:53.387396846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:53.388368 containerd[1987]: time="2026-03-13T00:40:53.388261696Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 13 00:40:53.389672 containerd[1987]: time="2026-03-13T00:40:53.389619594Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:53.392125 containerd[1987]: time="2026-03-13T00:40:53.392073620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:53.393183 containerd[1987]: time="2026-03-13T00:40:53.392986226Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 
457.414698ms" Mar 13 00:40:53.393183 containerd[1987]: time="2026-03-13T00:40:53.393022679Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 13 00:40:53.393915 containerd[1987]: time="2026-03-13T00:40:53.393890011Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Mar 13 00:40:53.874804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3393432955.mount: Deactivated successfully. Mar 13 00:40:54.823893 containerd[1987]: time="2026-03-13T00:40:54.823832278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:54.825270 containerd[1987]: time="2026-03-13T00:40:54.825012529Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23630322" Mar 13 00:40:54.826471 containerd[1987]: time="2026-03-13T00:40:54.826428667Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:54.829240 containerd[1987]: time="2026-03-13T00:40:54.829193398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:54.830988 containerd[1987]: time="2026-03-13T00:40:54.830408042Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 1.436488233s" Mar 13 00:40:54.830988 containerd[1987]: time="2026-03-13T00:40:54.830453202Z" level=info msg="PullImage 
\"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Mar 13 00:40:56.176114 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:40:56.176384 systemd[1]: kubelet.service: Consumed 198ms CPU time, 108.7M memory peak. Mar 13 00:40:56.179129 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:40:56.218234 systemd[1]: Reload requested from client PID 2830 ('systemctl') (unit session-7.scope)... Mar 13 00:40:56.218251 systemd[1]: Reloading... Mar 13 00:40:56.356362 zram_generator::config[2878]: No configuration found. Mar 13 00:40:56.635468 systemd[1]: Reloading finished in 416 ms. Mar 13 00:40:56.700641 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 13 00:40:56.700758 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 13 00:40:56.701568 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:40:56.701651 systemd[1]: kubelet.service: Consumed 147ms CPU time, 98.3M memory peak. Mar 13 00:40:56.703586 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:40:56.943926 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:40:56.955874 (kubelet)[2939]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 13 00:40:57.015109 kubelet[2939]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 13 00:40:57.186367 kubelet[2939]: I0313 00:40:57.184410 2939 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 13 00:40:57.186367 kubelet[2939]: I0313 00:40:57.184456 2939 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 13 00:40:57.186367 kubelet[2939]: I0313 00:40:57.184464 2939 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 13 00:40:57.186367 kubelet[2939]: I0313 00:40:57.184469 2939 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 13 00:40:57.186367 kubelet[2939]: I0313 00:40:57.184724 2939 server.go:951] "Client rotation is on, will bootstrap in background" Mar 13 00:40:57.198302 kubelet[2939]: E0313 00:40:57.197927 2939 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.30.203:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.30.203:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 13 00:40:57.200096 kubelet[2939]: I0313 00:40:57.199402 2939 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 13 00:40:57.205387 kubelet[2939]: I0313 00:40:57.205321 2939 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 13 00:40:57.213904 kubelet[2939]: I0313 00:40:57.213856 2939 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 13 00:40:57.222784 kubelet[2939]: I0313 00:40:57.222725 2939 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 13 00:40:57.224741 kubelet[2939]: I0313 00:40:57.222942 2939 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-203","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 13 00:40:57.225088 kubelet[2939]: I0313 00:40:57.225073 2939 topology_manager.go:143] "Creating topology manager with none policy" Mar 13 
00:40:57.225145 kubelet[2939]: I0313 00:40:57.225139 2939 container_manager_linux.go:308] "Creating device plugin manager" Mar 13 00:40:57.225297 kubelet[2939]: I0313 00:40:57.225287 2939 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 13 00:40:57.227311 kubelet[2939]: I0313 00:40:57.227292 2939 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 13 00:40:57.227661 kubelet[2939]: I0313 00:40:57.227639 2939 kubelet.go:482] "Attempting to sync node with API server" Mar 13 00:40:57.227661 kubelet[2939]: I0313 00:40:57.227661 2939 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 13 00:40:57.227872 kubelet[2939]: I0313 00:40:57.227792 2939 kubelet.go:394] "Adding apiserver pod source" Mar 13 00:40:57.227872 kubelet[2939]: I0313 00:40:57.227809 2939 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 13 00:40:57.233131 kubelet[2939]: I0313 00:40:57.233100 2939 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 13 00:40:57.236688 kubelet[2939]: I0313 00:40:57.236499 2939 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 13 00:40:57.236688 kubelet[2939]: I0313 00:40:57.236557 2939 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 13 00:40:57.239355 kubelet[2939]: W0313 00:40:57.238717 2939 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 13 00:40:57.241501 kubelet[2939]: I0313 00:40:57.241482 2939 server.go:1257] "Started kubelet" Mar 13 00:40:57.243938 kubelet[2939]: I0313 00:40:57.243118 2939 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 13 00:40:57.253824 kubelet[2939]: E0313 00:40:57.251500 2939 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.30.203:6443/api/v1/namespaces/default/events\": dial tcp 172.31.30.203:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-30-203.189c3fc259e3cfbf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-203,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-203,},FirstTimestamp:2026-03-13 00:40:57.241440191 +0000 UTC m=+0.278577236,LastTimestamp:2026-03-13 00:40:57.241440191 +0000 UTC m=+0.278577236,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-203,}" Mar 13 00:40:57.254433 kubelet[2939]: I0313 00:40:57.253877 2939 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 13 00:40:57.255571 kubelet[2939]: I0313 00:40:57.255443 2939 server.go:317] "Adding debug handlers to kubelet server" Mar 13 00:40:57.256365 kubelet[2939]: I0313 00:40:57.256285 2939 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 13 00:40:57.256498 kubelet[2939]: I0313 00:40:57.256483 2939 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 13 00:40:57.257057 kubelet[2939]: I0313 00:40:57.257037 2939 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 13 00:40:57.258956 kubelet[2939]: I0313 00:40:57.258933 2939 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 13 00:40:57.259515 kubelet[2939]: I0313 00:40:57.259487 2939 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 13 00:40:57.259756 kubelet[2939]: E0313 00:40:57.259736 2939 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-30-203\" not found" Mar 13 00:40:57.262723 kubelet[2939]: E0313 00:40:57.262685 2939 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-203?timeout=10s\": dial tcp 172.31.30.203:6443: connect: connection refused" interval="200ms" Mar 13 00:40:57.263588 kubelet[2939]: I0313 00:40:57.263026 2939 reconciler.go:29] "Reconciler: start to sync state" Mar 13 00:40:57.263588 kubelet[2939]: I0313 00:40:57.263074 2939 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 13 00:40:57.264198 kubelet[2939]: I0313 00:40:57.264179 2939 factory.go:223] Registration of the systemd container factory successfully Mar 13 00:40:57.264421 kubelet[2939]: I0313 00:40:57.264403 2939 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 13 00:40:57.266528 kubelet[2939]: E0313 00:40:57.266508 2939 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 13 00:40:57.267019 kubelet[2939]: I0313 00:40:57.267002 2939 factory.go:223] Registration of the containerd container factory successfully Mar 13 00:40:57.280290 kubelet[2939]: I0313 00:40:57.280223 2939 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 13 00:40:57.282175 kubelet[2939]: I0313 00:40:57.282019 2939 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 13 00:40:57.282175 kubelet[2939]: I0313 00:40:57.282049 2939 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 13 00:40:57.282175 kubelet[2939]: I0313 00:40:57.282078 2939 kubelet.go:2501] "Starting kubelet main sync loop" Mar 13 00:40:57.282175 kubelet[2939]: E0313 00:40:57.282143 2939 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 13 00:40:57.303098 kubelet[2939]: I0313 00:40:57.303060 2939 cpu_manager.go:225] "Starting" policy="none" Mar 13 00:40:57.303098 kubelet[2939]: I0313 00:40:57.303090 2939 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 13 00:40:57.303269 kubelet[2939]: I0313 00:40:57.303112 2939 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 13 00:40:57.305670 kubelet[2939]: I0313 00:40:57.305637 2939 policy_none.go:50] "Start" Mar 13 00:40:57.305670 kubelet[2939]: I0313 00:40:57.305657 2939 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 13 00:40:57.305670 kubelet[2939]: I0313 00:40:57.305671 2939 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 13 00:40:57.307485 kubelet[2939]: I0313 00:40:57.307447 2939 policy_none.go:44] "Start" Mar 13 00:40:57.312774 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 13 00:40:57.332382 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 13 00:40:57.337747 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 13 00:40:57.345566 kubelet[2939]: E0313 00:40:57.345525 2939 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 13 00:40:57.345819 kubelet[2939]: I0313 00:40:57.345792 2939 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 13 00:40:57.345906 kubelet[2939]: I0313 00:40:57.345813 2939 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 00:40:57.346990 kubelet[2939]: I0313 00:40:57.346464 2939 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 13 00:40:57.348554 kubelet[2939]: E0313 00:40:57.348531 2939 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 13 00:40:57.348631 kubelet[2939]: E0313 00:40:57.348577 2939 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-30-203\" not found" Mar 13 00:40:57.398346 systemd[1]: Created slice kubepods-burstable-pod20277f93df21e4320c48b84bf179d424.slice - libcontainer container kubepods-burstable-pod20277f93df21e4320c48b84bf179d424.slice. Mar 13 00:40:57.412739 kubelet[2939]: E0313 00:40:57.412514 2939 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-203\" not found" node="ip-172-31-30-203" Mar 13 00:40:57.416687 systemd[1]: Created slice kubepods-burstable-podff56fcbd03dc6078ddf5a88b7cbde198.slice - libcontainer container kubepods-burstable-podff56fcbd03dc6078ddf5a88b7cbde198.slice. 
Mar 13 00:40:57.437275 kubelet[2939]: E0313 00:40:57.437244 2939 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-203\" not found" node="ip-172-31-30-203" Mar 13 00:40:57.440356 systemd[1]: Created slice kubepods-burstable-pod9a04eb4a478bd8ad44d8cdd9274da08e.slice - libcontainer container kubepods-burstable-pod9a04eb4a478bd8ad44d8cdd9274da08e.slice. Mar 13 00:40:57.442681 kubelet[2939]: E0313 00:40:57.442655 2939 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-203\" not found" node="ip-172-31-30-203" Mar 13 00:40:57.448145 kubelet[2939]: I0313 00:40:57.448107 2939 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-30-203" Mar 13 00:40:57.448653 kubelet[2939]: E0313 00:40:57.448556 2939 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.30.203:6443/api/v1/nodes\": dial tcp 172.31.30.203:6443: connect: connection refused" node="ip-172-31-30-203" Mar 13 00:40:57.464304 kubelet[2939]: E0313 00:40:57.464252 2939 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-203?timeout=10s\": dial tcp 172.31.30.203:6443: connect: connection refused" interval="400ms" Mar 13 00:40:57.564356 kubelet[2939]: I0313 00:40:57.564299 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ff56fcbd03dc6078ddf5a88b7cbde198-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-203\" (UID: \"ff56fcbd03dc6078ddf5a88b7cbde198\") " pod="kube-system/kube-controller-manager-ip-172-31-30-203" Mar 13 00:40:57.564514 kubelet[2939]: I0313 00:40:57.564367 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" 
(UniqueName: \"kubernetes.io/host-path/ff56fcbd03dc6078ddf5a88b7cbde198-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-203\" (UID: \"ff56fcbd03dc6078ddf5a88b7cbde198\") " pod="kube-system/kube-controller-manager-ip-172-31-30-203" Mar 13 00:40:57.564514 kubelet[2939]: I0313 00:40:57.564398 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ff56fcbd03dc6078ddf5a88b7cbde198-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-203\" (UID: \"ff56fcbd03dc6078ddf5a88b7cbde198\") " pod="kube-system/kube-controller-manager-ip-172-31-30-203" Mar 13 00:40:57.564514 kubelet[2939]: I0313 00:40:57.564418 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ff56fcbd03dc6078ddf5a88b7cbde198-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-203\" (UID: \"ff56fcbd03dc6078ddf5a88b7cbde198\") " pod="kube-system/kube-controller-manager-ip-172-31-30-203" Mar 13 00:40:57.564514 kubelet[2939]: I0313 00:40:57.564438 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20277f93df21e4320c48b84bf179d424-ca-certs\") pod \"kube-apiserver-ip-172-31-30-203\" (UID: \"20277f93df21e4320c48b84bf179d424\") " pod="kube-system/kube-apiserver-ip-172-31-30-203" Mar 13 00:40:57.564514 kubelet[2939]: I0313 00:40:57.564456 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20277f93df21e4320c48b84bf179d424-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-203\" (UID: \"20277f93df21e4320c48b84bf179d424\") " pod="kube-system/kube-apiserver-ip-172-31-30-203" Mar 13 00:40:57.564693 kubelet[2939]: I0313 00:40:57.564478 2939 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20277f93df21e4320c48b84bf179d424-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-203\" (UID: \"20277f93df21e4320c48b84bf179d424\") " pod="kube-system/kube-apiserver-ip-172-31-30-203" Mar 13 00:40:57.564693 kubelet[2939]: I0313 00:40:57.564500 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ff56fcbd03dc6078ddf5a88b7cbde198-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-203\" (UID: \"ff56fcbd03dc6078ddf5a88b7cbde198\") " pod="kube-system/kube-controller-manager-ip-172-31-30-203" Mar 13 00:40:57.564693 kubelet[2939]: I0313 00:40:57.564526 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9a04eb4a478bd8ad44d8cdd9274da08e-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-203\" (UID: \"9a04eb4a478bd8ad44d8cdd9274da08e\") " pod="kube-system/kube-scheduler-ip-172-31-30-203" Mar 13 00:40:57.650987 kubelet[2939]: I0313 00:40:57.650955 2939 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-30-203" Mar 13 00:40:57.651351 kubelet[2939]: E0313 00:40:57.651304 2939 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.30.203:6443/api/v1/nodes\": dial tcp 172.31.30.203:6443: connect: connection refused" node="ip-172-31-30-203" Mar 13 00:40:57.717204 containerd[1987]: time="2026-03-13T00:40:57.716322656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-203,Uid:20277f93df21e4320c48b84bf179d424,Namespace:kube-system,Attempt:0,}" Mar 13 00:40:57.740210 containerd[1987]: time="2026-03-13T00:40:57.740168069Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-203,Uid:ff56fcbd03dc6078ddf5a88b7cbde198,Namespace:kube-system,Attempt:0,}" Mar 13 00:40:57.746004 containerd[1987]: time="2026-03-13T00:40:57.745964064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-203,Uid:9a04eb4a478bd8ad44d8cdd9274da08e,Namespace:kube-system,Attempt:0,}" Mar 13 00:40:57.865481 kubelet[2939]: E0313 00:40:57.865429 2939 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-203?timeout=10s\": dial tcp 172.31.30.203:6443: connect: connection refused" interval="800ms" Mar 13 00:40:58.053701 kubelet[2939]: I0313 00:40:58.053283 2939 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-30-203" Mar 13 00:40:58.054231 kubelet[2939]: E0313 00:40:58.053787 2939 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.30.203:6443/api/v1/nodes\": dial tcp 172.31.30.203:6443: connect: connection refused" node="ip-172-31-30-203" Mar 13 00:40:58.109244 kubelet[2939]: E0313 00:40:58.109126 2939 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.30.203:6443/api/v1/namespaces/default/events\": dial tcp 172.31.30.203:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-30-203.189c3fc259e3cfbf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-203,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-203,},FirstTimestamp:2026-03-13 00:40:57.241440191 +0000 UTC m=+0.278577236,LastTimestamp:2026-03-13 00:40:57.241440191 +0000 UTC m=+0.278577236,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-203,}" Mar 13 00:40:58.298583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount18592456.mount: Deactivated successfully. Mar 13 00:40:58.306007 containerd[1987]: time="2026-03-13T00:40:58.305891073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:40:58.309669 containerd[1987]: time="2026-03-13T00:40:58.309623636Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 13 00:40:58.310539 containerd[1987]: time="2026-03-13T00:40:58.310499659Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:40:58.311655 containerd[1987]: time="2026-03-13T00:40:58.311616136Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:40:58.313511 containerd[1987]: time="2026-03-13T00:40:58.313474985Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:40:58.314406 containerd[1987]: time="2026-03-13T00:40:58.314373842Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 13 00:40:58.315283 containerd[1987]: time="2026-03-13T00:40:58.315243363Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 13 00:40:58.316457 containerd[1987]: time="2026-03-13T00:40:58.316425700Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:40:58.317290 containerd[1987]: time="2026-03-13T00:40:58.317261669Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 598.971092ms" Mar 13 00:40:58.319243 containerd[1987]: time="2026-03-13T00:40:58.319201900Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 577.283083ms" Mar 13 00:40:58.322980 containerd[1987]: time="2026-03-13T00:40:58.322937649Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 575.636908ms" Mar 13 00:40:58.373130 containerd[1987]: time="2026-03-13T00:40:58.372912616Z" level=info msg="connecting to shim 1bf2797b8ca12e566c50aa2b3d982d7cffb9f6929b86d13b3ae1ca9787d480aa" address="unix:///run/containerd/s/fd890ef73dc30ee0c35ecee6b34b52f8e2c8fbc00621a19b9ea2c9ed72cbe5a7" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:40:58.377950 containerd[1987]: time="2026-03-13T00:40:58.377886236Z" level=info msg="connecting to shim f63cd655ec4d811e3c297a4c6fba811acb472351474e0bcc9a300202f3a70486" 
address="unix:///run/containerd/s/9398e212167ad46fb92668e81b2ced3806921a838441831901c6b3c7c19de213" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:40:58.392289 containerd[1987]: time="2026-03-13T00:40:58.392211930Z" level=info msg="connecting to shim 24784a8a4b4a760113fbc935b80a97120ec0695ecaf8abc03e8c74b9922191a3" address="unix:///run/containerd/s/2c1d23340290841893d3903d4b87dd9b13d0b8ec10c309e66300616961b76ae7" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:40:58.466563 systemd[1]: Started cri-containerd-f63cd655ec4d811e3c297a4c6fba811acb472351474e0bcc9a300202f3a70486.scope - libcontainer container f63cd655ec4d811e3c297a4c6fba811acb472351474e0bcc9a300202f3a70486. Mar 13 00:40:58.480578 systemd[1]: Started cri-containerd-1bf2797b8ca12e566c50aa2b3d982d7cffb9f6929b86d13b3ae1ca9787d480aa.scope - libcontainer container 1bf2797b8ca12e566c50aa2b3d982d7cffb9f6929b86d13b3ae1ca9787d480aa. Mar 13 00:40:58.483588 systemd[1]: Started cri-containerd-24784a8a4b4a760113fbc935b80a97120ec0695ecaf8abc03e8c74b9922191a3.scope - libcontainer container 24784a8a4b4a760113fbc935b80a97120ec0695ecaf8abc03e8c74b9922191a3. 
Mar 13 00:40:58.590541 containerd[1987]: time="2026-03-13T00:40:58.590216033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-203,Uid:20277f93df21e4320c48b84bf179d424,Namespace:kube-system,Attempt:0,} returns sandbox id \"f63cd655ec4d811e3c297a4c6fba811acb472351474e0bcc9a300202f3a70486\"" Mar 13 00:40:58.601526 containerd[1987]: time="2026-03-13T00:40:58.601481549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-203,Uid:ff56fcbd03dc6078ddf5a88b7cbde198,Namespace:kube-system,Attempt:0,} returns sandbox id \"1bf2797b8ca12e566c50aa2b3d982d7cffb9f6929b86d13b3ae1ca9787d480aa\"" Mar 13 00:40:58.603704 containerd[1987]: time="2026-03-13T00:40:58.601983447Z" level=info msg="CreateContainer within sandbox \"f63cd655ec4d811e3c297a4c6fba811acb472351474e0bcc9a300202f3a70486\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 13 00:40:58.617215 containerd[1987]: time="2026-03-13T00:40:58.616587900Z" level=info msg="CreateContainer within sandbox \"1bf2797b8ca12e566c50aa2b3d982d7cffb9f6929b86d13b3ae1ca9787d480aa\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 13 00:40:58.618691 containerd[1987]: time="2026-03-13T00:40:58.618654136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-203,Uid:9a04eb4a478bd8ad44d8cdd9274da08e,Namespace:kube-system,Attempt:0,} returns sandbox id \"24784a8a4b4a760113fbc935b80a97120ec0695ecaf8abc03e8c74b9922191a3\"" Mar 13 00:40:58.623841 containerd[1987]: time="2026-03-13T00:40:58.623805955Z" level=info msg="CreateContainer within sandbox \"24784a8a4b4a760113fbc935b80a97120ec0695ecaf8abc03e8c74b9922191a3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 13 00:40:58.632642 containerd[1987]: time="2026-03-13T00:40:58.632566593Z" level=info msg="Container 2baac9d06e0df227a93b9699db910c6fe222ca3179bd2fad8a5011ab80bcdd1f: CDI devices from CRI 
Config.CDIDevices: []" Mar 13 00:40:58.635454 containerd[1987]: time="2026-03-13T00:40:58.635199644Z" level=info msg="Container 9ee6033a06226fd87ef0067b2040d126e9f1b3695e982e56a71acd703eee70b1: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:40:58.637369 containerd[1987]: time="2026-03-13T00:40:58.637311492Z" level=info msg="Container d9c524160d6cee8e9453f4d505f11ba3f82379651fd92821528cf81f090ec74e: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:40:58.642306 containerd[1987]: time="2026-03-13T00:40:58.642257652Z" level=info msg="CreateContainer within sandbox \"f63cd655ec4d811e3c297a4c6fba811acb472351474e0bcc9a300202f3a70486\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2baac9d06e0df227a93b9699db910c6fe222ca3179bd2fad8a5011ab80bcdd1f\"" Mar 13 00:40:58.643439 containerd[1987]: time="2026-03-13T00:40:58.643374006Z" level=info msg="StartContainer for \"2baac9d06e0df227a93b9699db910c6fe222ca3179bd2fad8a5011ab80bcdd1f\"" Mar 13 00:40:58.647214 containerd[1987]: time="2026-03-13T00:40:58.647165322Z" level=info msg="connecting to shim 2baac9d06e0df227a93b9699db910c6fe222ca3179bd2fad8a5011ab80bcdd1f" address="unix:///run/containerd/s/9398e212167ad46fb92668e81b2ced3806921a838441831901c6b3c7c19de213" protocol=ttrpc version=3 Mar 13 00:40:58.651402 containerd[1987]: time="2026-03-13T00:40:58.651211890Z" level=info msg="CreateContainer within sandbox \"1bf2797b8ca12e566c50aa2b3d982d7cffb9f6929b86d13b3ae1ca9787d480aa\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9ee6033a06226fd87ef0067b2040d126e9f1b3695e982e56a71acd703eee70b1\"" Mar 13 00:40:58.653500 containerd[1987]: time="2026-03-13T00:40:58.652982570Z" level=info msg="StartContainer for \"9ee6033a06226fd87ef0067b2040d126e9f1b3695e982e56a71acd703eee70b1\"" Mar 13 00:40:58.654403 containerd[1987]: time="2026-03-13T00:40:58.654364259Z" level=info msg="connecting to shim 9ee6033a06226fd87ef0067b2040d126e9f1b3695e982e56a71acd703eee70b1" 
address="unix:///run/containerd/s/fd890ef73dc30ee0c35ecee6b34b52f8e2c8fbc00621a19b9ea2c9ed72cbe5a7" protocol=ttrpc version=3 Mar 13 00:40:58.656858 containerd[1987]: time="2026-03-13T00:40:58.656687066Z" level=info msg="CreateContainer within sandbox \"24784a8a4b4a760113fbc935b80a97120ec0695ecaf8abc03e8c74b9922191a3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d9c524160d6cee8e9453f4d505f11ba3f82379651fd92821528cf81f090ec74e\"" Mar 13 00:40:58.657996 containerd[1987]: time="2026-03-13T00:40:58.657963297Z" level=info msg="StartContainer for \"d9c524160d6cee8e9453f4d505f11ba3f82379651fd92821528cf81f090ec74e\"" Mar 13 00:40:58.662100 containerd[1987]: time="2026-03-13T00:40:58.662060507Z" level=info msg="connecting to shim d9c524160d6cee8e9453f4d505f11ba3f82379651fd92821528cf81f090ec74e" address="unix:///run/containerd/s/2c1d23340290841893d3903d4b87dd9b13d0b8ec10c309e66300616961b76ae7" protocol=ttrpc version=3 Mar 13 00:40:58.667262 kubelet[2939]: E0313 00:40:58.667177 2939 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-203?timeout=10s\": dial tcp 172.31.30.203:6443: connect: connection refused" interval="1.6s" Mar 13 00:40:58.700655 systemd[1]: Started cri-containerd-d9c524160d6cee8e9453f4d505f11ba3f82379651fd92821528cf81f090ec74e.scope - libcontainer container d9c524160d6cee8e9453f4d505f11ba3f82379651fd92821528cf81f090ec74e. Mar 13 00:40:58.715137 systemd[1]: Started cri-containerd-2baac9d06e0df227a93b9699db910c6fe222ca3179bd2fad8a5011ab80bcdd1f.scope - libcontainer container 2baac9d06e0df227a93b9699db910c6fe222ca3179bd2fad8a5011ab80bcdd1f. Mar 13 00:40:58.722732 systemd[1]: Started cri-containerd-9ee6033a06226fd87ef0067b2040d126e9f1b3695e982e56a71acd703eee70b1.scope - libcontainer container 9ee6033a06226fd87ef0067b2040d126e9f1b3695e982e56a71acd703eee70b1. 
Mar 13 00:40:58.808865 containerd[1987]: time="2026-03-13T00:40:58.808142656Z" level=info msg="StartContainer for \"d9c524160d6cee8e9453f4d505f11ba3f82379651fd92821528cf81f090ec74e\" returns successfully"
Mar 13 00:40:58.856176 containerd[1987]: time="2026-03-13T00:40:58.855694020Z" level=info msg="StartContainer for \"9ee6033a06226fd87ef0067b2040d126e9f1b3695e982e56a71acd703eee70b1\" returns successfully"
Mar 13 00:40:58.857515 containerd[1987]: time="2026-03-13T00:40:58.857483371Z" level=info msg="StartContainer for \"2baac9d06e0df227a93b9699db910c6fe222ca3179bd2fad8a5011ab80bcdd1f\" returns successfully"
Mar 13 00:40:58.859726 kubelet[2939]: I0313 00:40:58.859700 2939 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-30-203"
Mar 13 00:40:58.860219 kubelet[2939]: E0313 00:40:58.860188 2939 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.30.203:6443/api/v1/nodes\": dial tcp 172.31.30.203:6443: connect: connection refused" node="ip-172-31-30-203"
Mar 13 00:40:59.320739 kubelet[2939]: E0313 00:40:59.320463 2939 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-203\" not found" node="ip-172-31-30-203"
Mar 13 00:40:59.330952 kubelet[2939]: E0313 00:40:59.330896 2939 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-203\" not found" node="ip-172-31-30-203"
Mar 13 00:40:59.333763 kubelet[2939]: E0313 00:40:59.333521 2939 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-203\" not found" node="ip-172-31-30-203"
Mar 13 00:41:00.332892 kubelet[2939]: E0313 00:41:00.332786 2939 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-203\" not found" node="ip-172-31-30-203"
Mar 13 00:41:00.333435 kubelet[2939]: E0313 00:41:00.333305 2939 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-203\" not found" node="ip-172-31-30-203"
Mar 13 00:41:00.462617 kubelet[2939]: I0313 00:41:00.462589 2939 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-30-203"
Mar 13 00:41:00.679861 kubelet[2939]: E0313 00:41:00.679817 2939 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-30-203\" not found" node="ip-172-31-30-203"
Mar 13 00:41:00.754661 kubelet[2939]: I0313 00:41:00.754626 2939 kubelet_node_status.go:77] "Successfully registered node" node="ip-172-31-30-203"
Mar 13 00:41:00.764203 kubelet[2939]: I0313 00:41:00.762252 2939 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-30-203"
Mar 13 00:41:00.825099 kubelet[2939]: E0313 00:41:00.825060 2939 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-30-203\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-30-203"
Mar 13 00:41:00.825252 kubelet[2939]: I0313 00:41:00.825140 2939 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-30-203"
Mar 13 00:41:00.830640 kubelet[2939]: E0313 00:41:00.830584 2939 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-30-203\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-30-203"
Mar 13 00:41:00.830640 kubelet[2939]: I0313 00:41:00.830639 2939 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-30-203"
Mar 13 00:41:00.832678 kubelet[2939]: E0313 00:41:00.832641 2939 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-30-203\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-30-203"
Mar 13 00:41:01.230896 kubelet[2939]: I0313 00:41:01.230853 2939 apiserver.go:52] "Watching apiserver"
Mar 13 00:41:01.264280 kubelet[2939]: I0313 00:41:01.264241 2939 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 13 00:41:01.335576 kubelet[2939]: I0313 00:41:01.335536 2939 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-30-203"
Mar 13 00:41:01.341559 kubelet[2939]: E0313 00:41:01.341444 2939 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-30-203\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-30-203"
Mar 13 00:41:04.450472 systemd[1]: Reload requested from client PID 3221 ('systemctl') (unit session-7.scope)...
Mar 13 00:41:04.450490 systemd[1]: Reloading...
Mar 13 00:41:04.597385 zram_generator::config[3266]: No configuration found.
Mar 13 00:41:04.893047 systemd[1]: Reloading finished in 442 ms.
Mar 13 00:41:04.930280 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 00:41:04.942916 systemd[1]: kubelet.service: Deactivated successfully.
Mar 13 00:41:04.943243 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:41:04.943326 systemd[1]: kubelet.service: Consumed 778ms CPU time, 119.8M memory peak.
Mar 13 00:41:04.945847 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 00:41:05.412550 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 13 00:41:05.445120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:41:05.455829 (kubelet)[3329]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 13 00:41:05.518875 kubelet[3329]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 00:41:05.530851 kubelet[3329]: I0313 00:41:05.530735 3329 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Mar 13 00:41:05.530851 kubelet[3329]: I0313 00:41:05.530785 3329 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 13 00:41:05.533362 kubelet[3329]: I0313 00:41:05.532504 3329 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 13 00:41:05.533362 kubelet[3329]: I0313 00:41:05.532533 3329 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 13 00:41:05.533362 kubelet[3329]: I0313 00:41:05.533162 3329 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 13 00:41:05.534761 kubelet[3329]: I0313 00:41:05.534741 3329 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 13 00:41:05.540445 kubelet[3329]: I0313 00:41:05.540397 3329 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 13 00:41:05.550022 kubelet[3329]: I0313 00:41:05.549996 3329 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 13 00:41:05.558073 kubelet[3329]: I0313 00:41:05.558040 3329 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 13 00:41:05.561115 kubelet[3329]: I0313 00:41:05.560093 3329 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 13 00:41:05.561115 kubelet[3329]: I0313 00:41:05.560144 3329 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-203","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 13 00:41:05.561115 kubelet[3329]: I0313 00:41:05.560394 3329 topology_manager.go:143] "Creating topology manager with none policy"
Mar 13 00:41:05.561115 kubelet[3329]: I0313 00:41:05.560407 3329 container_manager_linux.go:308] "Creating device plugin manager"
Mar 13 00:41:05.561444 kubelet[3329]: I0313 00:41:05.560441 3329 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 13 00:41:05.561444 kubelet[3329]: I0313 00:41:05.560746 3329 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 13 00:41:05.561444 kubelet[3329]: I0313 00:41:05.561042 3329 kubelet.go:482] "Attempting to sync node with API server"
Mar 13 00:41:05.561444 kubelet[3329]: I0313 00:41:05.561063 3329 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 13 00:41:05.561705 kubelet[3329]: I0313 00:41:05.561680 3329 kubelet.go:394] "Adding apiserver pod source"
Mar 13 00:41:05.561705 kubelet[3329]: I0313 00:41:05.561702 3329 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 13 00:41:05.565257 kubelet[3329]: I0313 00:41:05.565230 3329 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 13 00:41:05.569551 kubelet[3329]: I0313 00:41:05.567564 3329 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 13 00:41:05.569551 kubelet[3329]: I0313 00:41:05.567613 3329 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 13 00:41:05.578317 kubelet[3329]: I0313 00:41:05.578283 3329 server.go:1257] "Started kubelet"
Mar 13 00:41:05.581291 kubelet[3329]: I0313 00:41:05.579943 3329 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 13 00:41:05.581291 kubelet[3329]: I0313 00:41:05.580020 3329 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 13 00:41:05.581291 kubelet[3329]: I0313 00:41:05.580270 3329 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 13 00:41:05.581291 kubelet[3329]: I0313 00:41:05.580321 3329 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 13 00:41:05.582424 kubelet[3329]: I0313 00:41:05.582090 3329 server.go:317] "Adding debug handlers to kubelet server"
Mar 13 00:41:05.586041 kubelet[3329]: I0313 00:41:05.586023 3329 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 13 00:41:05.594255 kubelet[3329]: I0313 00:41:05.594224 3329 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 13 00:41:05.597776 kubelet[3329]: I0313 00:41:05.597754 3329 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 13 00:41:05.599160 kubelet[3329]: I0313 00:41:05.598996 3329 factory.go:223] Registration of the systemd container factory successfully
Mar 13 00:41:05.599160 kubelet[3329]: I0313 00:41:05.599106 3329 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 13 00:41:05.599160 kubelet[3329]: I0313 00:41:05.599120 3329 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 13 00:41:05.599452 kubelet[3329]: I0313 00:41:05.599243 3329 reconciler.go:29] "Reconciler: start to sync state"
Mar 13 00:41:05.616321 kubelet[3329]: I0313 00:41:05.615929 3329 factory.go:223] Registration of the containerd container factory successfully
Mar 13 00:41:05.634263 kubelet[3329]: I0313 00:41:05.634091 3329 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 13 00:41:05.640445 kubelet[3329]: I0313 00:41:05.638937 3329 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 13 00:41:05.640445 kubelet[3329]: I0313 00:41:05.638969 3329 status_manager.go:249] "Starting to sync pod status with apiserver"
Mar 13 00:41:05.640445 kubelet[3329]: I0313 00:41:05.639003 3329 kubelet.go:2501] "Starting kubelet main sync loop"
Mar 13 00:41:05.640445 kubelet[3329]: E0313 00:41:05.639054 3329 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 13 00:41:05.683220 kubelet[3329]: I0313 00:41:05.682186 3329 cpu_manager.go:225] "Starting" policy="none"
Mar 13 00:41:05.683424 kubelet[3329]: I0313 00:41:05.683405 3329 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 13 00:41:05.683510 kubelet[3329]: I0313 00:41:05.683500 3329 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Mar 13 00:41:05.683734 kubelet[3329]: I0313 00:41:05.683704 3329 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
Mar 13 00:41:05.683979 kubelet[3329]: I0313 00:41:05.683799 3329 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
Mar 13 00:41:05.684104 kubelet[3329]: I0313 00:41:05.684093 3329 policy_none.go:50] "Start"
Mar 13 00:41:05.684177 kubelet[3329]: I0313 00:41:05.684168 3329 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 13 00:41:05.684514 kubelet[3329]: I0313 00:41:05.684236 3329 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 13 00:41:05.684514 kubelet[3329]: I0313 00:41:05.684411 3329 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Mar 13 00:41:05.684514 kubelet[3329]: I0313 00:41:05.684422 3329 policy_none.go:44] "Start"
Mar 13 00:41:05.689793 kubelet[3329]: E0313 00:41:05.689766 3329 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 13 00:41:05.690725 kubelet[3329]: I0313 00:41:05.690642 3329 eviction_manager.go:194] "Eviction manager: starting control loop"
Mar 13 00:41:05.691275 kubelet[3329]: I0313 00:41:05.691242 3329 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 13 00:41:05.692401 kubelet[3329]: I0313 00:41:05.692323 3329 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Mar 13 00:41:05.699413 kubelet[3329]: E0313 00:41:05.697902 3329 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 13 00:41:05.741336 kubelet[3329]: I0313 00:41:05.741300 3329 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-30-203"
Mar 13 00:41:05.746171 kubelet[3329]: I0313 00:41:05.745533 3329 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-30-203"
Mar 13 00:41:05.747120 kubelet[3329]: I0313 00:41:05.745709 3329 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-30-203"
Mar 13 00:41:05.799253 kubelet[3329]: I0313 00:41:05.799228 3329 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-30-203"
Mar 13 00:41:05.800219 kubelet[3329]: I0313 00:41:05.800200 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20277f93df21e4320c48b84bf179d424-ca-certs\") pod \"kube-apiserver-ip-172-31-30-203\" (UID: \"20277f93df21e4320c48b84bf179d424\") " pod="kube-system/kube-apiserver-ip-172-31-30-203"
Mar 13 00:41:05.800317 kubelet[3329]: I0313 00:41:05.800306 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20277f93df21e4320c48b84bf179d424-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-203\" (UID: \"20277f93df21e4320c48b84bf179d424\") " pod="kube-system/kube-apiserver-ip-172-31-30-203"
Mar 13 00:41:05.800476 kubelet[3329]: I0313 00:41:05.800423 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20277f93df21e4320c48b84bf179d424-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-203\" (UID: \"20277f93df21e4320c48b84bf179d424\") " pod="kube-system/kube-apiserver-ip-172-31-30-203"
Mar 13 00:41:05.810710 kubelet[3329]: I0313 00:41:05.810645 3329 kubelet_node_status.go:123] "Node was previously registered" node="ip-172-31-30-203"
Mar 13 00:41:05.811126 kubelet[3329]: I0313 00:41:05.810947 3329 kubelet_node_status.go:77] "Successfully registered node" node="ip-172-31-30-203"
Mar 13 00:41:05.902735 kubelet[3329]: I0313 00:41:05.902613 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ff56fcbd03dc6078ddf5a88b7cbde198-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-203\" (UID: \"ff56fcbd03dc6078ddf5a88b7cbde198\") " pod="kube-system/kube-controller-manager-ip-172-31-30-203"
Mar 13 00:41:05.903365 kubelet[3329]: I0313 00:41:05.903028 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9a04eb4a478bd8ad44d8cdd9274da08e-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-203\" (UID: \"9a04eb4a478bd8ad44d8cdd9274da08e\") " pod="kube-system/kube-scheduler-ip-172-31-30-203"
Mar 13 00:41:05.903365 kubelet[3329]: I0313 00:41:05.903097 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ff56fcbd03dc6078ddf5a88b7cbde198-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-203\" (UID: \"ff56fcbd03dc6078ddf5a88b7cbde198\") " pod="kube-system/kube-controller-manager-ip-172-31-30-203"
Mar 13 00:41:05.903365 kubelet[3329]: I0313 00:41:05.903140 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ff56fcbd03dc6078ddf5a88b7cbde198-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-203\" (UID: \"ff56fcbd03dc6078ddf5a88b7cbde198\") " pod="kube-system/kube-controller-manager-ip-172-31-30-203"
Mar 13 00:41:05.903365 kubelet[3329]: I0313 00:41:05.903164 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ff56fcbd03dc6078ddf5a88b7cbde198-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-203\" (UID: \"ff56fcbd03dc6078ddf5a88b7cbde198\") " pod="kube-system/kube-controller-manager-ip-172-31-30-203"
Mar 13 00:41:05.903365 kubelet[3329]: I0313 00:41:05.903191 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ff56fcbd03dc6078ddf5a88b7cbde198-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-203\" (UID: \"ff56fcbd03dc6078ddf5a88b7cbde198\") " pod="kube-system/kube-controller-manager-ip-172-31-30-203"
Mar 13 00:41:06.565531 kubelet[3329]: I0313 00:41:06.565451 3329 apiserver.go:52] "Watching apiserver"
Mar 13 00:41:06.599929 kubelet[3329]: I0313 00:41:06.599882 3329 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 13 00:41:06.675406 kubelet[3329]: I0313 00:41:06.675377 3329 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-30-203"
Mar 13 00:41:06.688990 kubelet[3329]: E0313 00:41:06.688950 3329 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-30-203\" already exists" pod="kube-system/kube-scheduler-ip-172-31-30-203"
Mar 13 00:41:06.714591 kubelet[3329]: I0313 00:41:06.714380 3329 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-30-203" podStartSLOduration=1.714362649 podStartE2EDuration="1.714362649s" podCreationTimestamp="2026-03-13 00:41:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:41:06.713977648 +0000 UTC m=+1.250223141" watchObservedRunningTime="2026-03-13 00:41:06.714362649 +0000 UTC m=+1.250608132"
Mar 13 00:41:06.726938 kubelet[3329]: I0313 00:41:06.726875 3329 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-30-203" podStartSLOduration=1.726856062 podStartE2EDuration="1.726856062s" podCreationTimestamp="2026-03-13 00:41:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:41:06.726684912 +0000 UTC m=+1.262930405" watchObservedRunningTime="2026-03-13 00:41:06.726856062 +0000 UTC m=+1.263101549"
Mar 13 00:41:06.740661 kubelet[3329]: I0313 00:41:06.740597 3329 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-30-203" podStartSLOduration=1.740480336 podStartE2EDuration="1.740480336s" podCreationTimestamp="2026-03-13 00:41:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:41:06.738778108 +0000 UTC m=+1.275023600" watchObservedRunningTime="2026-03-13 00:41:06.740480336 +0000 UTC m=+1.276725829"
Mar 13 00:41:09.140822 kubelet[3329]: I0313 00:41:09.140789 3329 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 13 00:41:09.141650 kubelet[3329]: I0313 00:41:09.141538 3329 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 13 00:41:09.141703 containerd[1987]: time="2026-03-13T00:41:09.141282600Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 13 00:41:10.252018 systemd[1]: Created slice kubepods-besteffort-podadd67ed5_baf9_4f1a_a4eb_c50ae10e6fad.slice - libcontainer container kubepods-besteffort-podadd67ed5_baf9_4f1a_a4eb_c50ae10e6fad.slice.
Mar 13 00:41:10.333060 kubelet[3329]: I0313 00:41:10.333006 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/add67ed5-baf9-4f1a-a4eb-c50ae10e6fad-xtables-lock\") pod \"kube-proxy-xf6qq\" (UID: \"add67ed5-baf9-4f1a-a4eb-c50ae10e6fad\") " pod="kube-system/kube-proxy-xf6qq"
Mar 13 00:41:10.333583 kubelet[3329]: I0313 00:41:10.333093 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnzqx\" (UniqueName: \"kubernetes.io/projected/add67ed5-baf9-4f1a-a4eb-c50ae10e6fad-kube-api-access-vnzqx\") pod \"kube-proxy-xf6qq\" (UID: \"add67ed5-baf9-4f1a-a4eb-c50ae10e6fad\") " pod="kube-system/kube-proxy-xf6qq"
Mar 13 00:41:10.333583 kubelet[3329]: I0313 00:41:10.333130 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/add67ed5-baf9-4f1a-a4eb-c50ae10e6fad-lib-modules\") pod \"kube-proxy-xf6qq\" (UID: \"add67ed5-baf9-4f1a-a4eb-c50ae10e6fad\") " pod="kube-system/kube-proxy-xf6qq"
Mar 13 00:41:10.333583 kubelet[3329]: I0313 00:41:10.333160 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/add67ed5-baf9-4f1a-a4eb-c50ae10e6fad-kube-proxy\") pod \"kube-proxy-xf6qq\" (UID: \"add67ed5-baf9-4f1a-a4eb-c50ae10e6fad\") " pod="kube-system/kube-proxy-xf6qq"
Mar 13 00:41:10.366099 systemd[1]: Created slice kubepods-besteffort-pod47bfdf45_8201_4ea9_938d_6a281329c671.slice - libcontainer container kubepods-besteffort-pod47bfdf45_8201_4ea9_938d_6a281329c671.slice.
Mar 13 00:41:10.434097 kubelet[3329]: I0313 00:41:10.433683 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmghv\" (UniqueName: \"kubernetes.io/projected/47bfdf45-8201-4ea9-938d-6a281329c671-kube-api-access-wmghv\") pod \"tigera-operator-6cf4cccc57-gs4vv\" (UID: \"47bfdf45-8201-4ea9-938d-6a281329c671\") " pod="tigera-operator/tigera-operator-6cf4cccc57-gs4vv"
Mar 13 00:41:10.434097 kubelet[3329]: I0313 00:41:10.433772 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/47bfdf45-8201-4ea9-938d-6a281329c671-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-gs4vv\" (UID: \"47bfdf45-8201-4ea9-938d-6a281329c671\") " pod="tigera-operator/tigera-operator-6cf4cccc57-gs4vv"
Mar 13 00:41:10.565593 containerd[1987]: time="2026-03-13T00:41:10.565015264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xf6qq,Uid:add67ed5-baf9-4f1a-a4eb-c50ae10e6fad,Namespace:kube-system,Attempt:0,}"
Mar 13 00:41:10.621009 containerd[1987]: time="2026-03-13T00:41:10.620951094Z" level=info msg="connecting to shim edae530d1c9fe649eef331c78e7378d9fcfa036daf5b03aaf052b3c9a7bae6da" address="unix:///run/containerd/s/99ee850fd71d87b30e4d865032a2776bd7ebf7256fe98c5e2f33cfa4b1eca200" namespace=k8s.io protocol=ttrpc version=3
Mar 13 00:41:10.654612 systemd[1]: Started cri-containerd-edae530d1c9fe649eef331c78e7378d9fcfa036daf5b03aaf052b3c9a7bae6da.scope - libcontainer container edae530d1c9fe649eef331c78e7378d9fcfa036daf5b03aaf052b3c9a7bae6da.
Mar 13 00:41:10.676010 containerd[1987]: time="2026-03-13T00:41:10.675972278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-gs4vv,Uid:47bfdf45-8201-4ea9-938d-6a281329c671,Namespace:tigera-operator,Attempt:0,}"
Mar 13 00:41:10.689683 containerd[1987]: time="2026-03-13T00:41:10.689641159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xf6qq,Uid:add67ed5-baf9-4f1a-a4eb-c50ae10e6fad,Namespace:kube-system,Attempt:0,} returns sandbox id \"edae530d1c9fe649eef331c78e7378d9fcfa036daf5b03aaf052b3c9a7bae6da\""
Mar 13 00:41:10.702670 containerd[1987]: time="2026-03-13T00:41:10.702626077Z" level=info msg="CreateContainer within sandbox \"edae530d1c9fe649eef331c78e7378d9fcfa036daf5b03aaf052b3c9a7bae6da\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 13 00:41:10.713771 containerd[1987]: time="2026-03-13T00:41:10.713720699Z" level=info msg="connecting to shim 72edf82ed767efc1861a9698a1a41d0a2d6c43b96bd3ba63528f5edfe8e792e6" address="unix:///run/containerd/s/231e04bea9251a8928fb54f2b0b96a145a62e2cccfd7c0a4357f51b9cb89944e" namespace=k8s.io protocol=ttrpc version=3
Mar 13 00:41:10.744938 containerd[1987]: time="2026-03-13T00:41:10.744323261Z" level=info msg="Container ef75c1b3bed92d71202350df83a63e61a5aa0e66a09454fd4e27dc1c9035202c: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:41:10.754619 systemd[1]: Started cri-containerd-72edf82ed767efc1861a9698a1a41d0a2d6c43b96bd3ba63528f5edfe8e792e6.scope - libcontainer container 72edf82ed767efc1861a9698a1a41d0a2d6c43b96bd3ba63528f5edfe8e792e6.
Mar 13 00:41:10.755318 containerd[1987]: time="2026-03-13T00:41:10.755015179Z" level=info msg="CreateContainer within sandbox \"edae530d1c9fe649eef331c78e7378d9fcfa036daf5b03aaf052b3c9a7bae6da\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ef75c1b3bed92d71202350df83a63e61a5aa0e66a09454fd4e27dc1c9035202c\""
Mar 13 00:41:10.758115 containerd[1987]: time="2026-03-13T00:41:10.757691697Z" level=info msg="StartContainer for \"ef75c1b3bed92d71202350df83a63e61a5aa0e66a09454fd4e27dc1c9035202c\""
Mar 13 00:41:10.761192 containerd[1987]: time="2026-03-13T00:41:10.761101581Z" level=info msg="connecting to shim ef75c1b3bed92d71202350df83a63e61a5aa0e66a09454fd4e27dc1c9035202c" address="unix:///run/containerd/s/99ee850fd71d87b30e4d865032a2776bd7ebf7256fe98c5e2f33cfa4b1eca200" protocol=ttrpc version=3
Mar 13 00:41:10.788498 systemd[1]: Started cri-containerd-ef75c1b3bed92d71202350df83a63e61a5aa0e66a09454fd4e27dc1c9035202c.scope - libcontainer container ef75c1b3bed92d71202350df83a63e61a5aa0e66a09454fd4e27dc1c9035202c.
Mar 13 00:41:10.851128 containerd[1987]: time="2026-03-13T00:41:10.850252100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-gs4vv,Uid:47bfdf45-8201-4ea9-938d-6a281329c671,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"72edf82ed767efc1861a9698a1a41d0a2d6c43b96bd3ba63528f5edfe8e792e6\""
Mar 13 00:41:10.864791 containerd[1987]: time="2026-03-13T00:41:10.864683040Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Mar 13 00:41:10.891415 containerd[1987]: time="2026-03-13T00:41:10.891374212Z" level=info msg="StartContainer for \"ef75c1b3bed92d71202350df83a63e61a5aa0e66a09454fd4e27dc1c9035202c\" returns successfully"
Mar 13 00:41:11.735955 kubelet[3329]: I0313 00:41:11.735879 3329 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-xf6qq" podStartSLOduration=1.734577792 podStartE2EDuration="1.734577792s" podCreationTimestamp="2026-03-13 00:41:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:41:11.7125802 +0000 UTC m=+6.248825693" watchObservedRunningTime="2026-03-13 00:41:11.734577792 +0000 UTC m=+6.270823284"
Mar 13 00:41:12.250301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2675734212.mount: Deactivated successfully.
Mar 13 00:41:15.182297 containerd[1987]: time="2026-03-13T00:41:15.182243050Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:41:15.183728 containerd[1987]: time="2026-03-13T00:41:15.183505850Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Mar 13 00:41:15.185106 containerd[1987]: time="2026-03-13T00:41:15.185056268Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:41:15.188356 containerd[1987]: time="2026-03-13T00:41:15.188298577Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:41:15.189141 containerd[1987]: time="2026-03-13T00:41:15.189105576Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 4.324361652s"
Mar 13 00:41:15.189270 containerd[1987]: time="2026-03-13T00:41:15.189251458Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Mar 13 00:41:15.196277 containerd[1987]: time="2026-03-13T00:41:15.196227584Z" level=info msg="CreateContainer within sandbox \"72edf82ed767efc1861a9698a1a41d0a2d6c43b96bd3ba63528f5edfe8e792e6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 13 00:41:15.206141 containerd[1987]: time="2026-03-13T00:41:15.204831153Z" level=info msg="Container 3c0a24c994a18d82ee91f9f3566b3c739d97e93e36f2b9490d67e29bddfdf236: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:41:15.210892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4144913401.mount: Deactivated successfully.
Mar 13 00:41:15.219368 containerd[1987]: time="2026-03-13T00:41:15.218484399Z" level=info msg="CreateContainer within sandbox \"72edf82ed767efc1861a9698a1a41d0a2d6c43b96bd3ba63528f5edfe8e792e6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3c0a24c994a18d82ee91f9f3566b3c739d97e93e36f2b9490d67e29bddfdf236\""
Mar 13 00:41:15.220325 containerd[1987]: time="2026-03-13T00:41:15.220289255Z" level=info msg="StartContainer for \"3c0a24c994a18d82ee91f9f3566b3c739d97e93e36f2b9490d67e29bddfdf236\""
Mar 13 00:41:15.221594 containerd[1987]: time="2026-03-13T00:41:15.221562606Z" level=info msg="connecting to shim 3c0a24c994a18d82ee91f9f3566b3c739d97e93e36f2b9490d67e29bddfdf236" address="unix:///run/containerd/s/231e04bea9251a8928fb54f2b0b96a145a62e2cccfd7c0a4357f51b9cb89944e" protocol=ttrpc version=3
Mar 13 00:41:15.255633 systemd[1]: Started cri-containerd-3c0a24c994a18d82ee91f9f3566b3c739d97e93e36f2b9490d67e29bddfdf236.scope - libcontainer container 3c0a24c994a18d82ee91f9f3566b3c739d97e93e36f2b9490d67e29bddfdf236.
Mar 13 00:41:15.290459 containerd[1987]: time="2026-03-13T00:41:15.290316753Z" level=info msg="StartContainer for \"3c0a24c994a18d82ee91f9f3566b3c739d97e93e36f2b9490d67e29bddfdf236\" returns successfully"
Mar 13 00:41:19.300488 update_engine[1967]: I20260313 00:41:19.300413 1967 update_attempter.cc:509] Updating boot flags...
Mar 13 00:41:20.153783 kubelet[3329]: I0313 00:41:20.153721 3329 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-gs4vv" podStartSLOduration=5.825635252 podStartE2EDuration="10.15370514s" podCreationTimestamp="2026-03-13 00:41:10 +0000 UTC" firstStartedPulling="2026-03-13 00:41:10.862163546 +0000 UTC m=+5.398409018" lastFinishedPulling="2026-03-13 00:41:15.190233435 +0000 UTC m=+9.726478906" observedRunningTime="2026-03-13 00:41:15.715854959 +0000 UTC m=+10.252100453" watchObservedRunningTime="2026-03-13 00:41:20.15370514 +0000 UTC m=+14.689950634" Mar 13 00:41:23.255138 sudo[2366]: pam_unix(sudo:session): session closed for user root Mar 13 00:41:23.338849 sshd[2365]: Connection closed by 20.161.92.111 port 49840 Mar 13 00:41:23.343086 sshd-session[2362]: pam_unix(sshd:session): session closed for user core Mar 13 00:41:23.349824 systemd-logind[1966]: Session 7 logged out. Waiting for processes to exit. Mar 13 00:41:23.351210 systemd[1]: sshd@6-172.31.30.203:22-20.161.92.111:49840.service: Deactivated successfully. Mar 13 00:41:23.357232 systemd[1]: session-7.scope: Deactivated successfully. Mar 13 00:41:23.358792 systemd[1]: session-7.scope: Consumed 4.307s CPU time, 167.3M memory peak. Mar 13 00:41:23.366639 systemd-logind[1966]: Removed session 7. Mar 13 00:41:24.685400 systemd[1]: Created slice kubepods-besteffort-podab5985b6_0b69_4ae1_b09c_4545837a0e7c.slice - libcontainer container kubepods-besteffort-podab5985b6_0b69_4ae1_b09c_4545837a0e7c.slice. 
Mar 13 00:41:24.742331 kubelet[3329]: I0313 00:41:24.742289 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab5985b6-0b69-4ae1-b09c-4545837a0e7c-tigera-ca-bundle\") pod \"calico-typha-5686fdff6b-fl8s7\" (UID: \"ab5985b6-0b69-4ae1-b09c-4545837a0e7c\") " pod="calico-system/calico-typha-5686fdff6b-fl8s7" Mar 13 00:41:24.742849 kubelet[3329]: I0313 00:41:24.742364 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlkxt\" (UniqueName: \"kubernetes.io/projected/ab5985b6-0b69-4ae1-b09c-4545837a0e7c-kube-api-access-dlkxt\") pod \"calico-typha-5686fdff6b-fl8s7\" (UID: \"ab5985b6-0b69-4ae1-b09c-4545837a0e7c\") " pod="calico-system/calico-typha-5686fdff6b-fl8s7" Mar 13 00:41:24.742849 kubelet[3329]: I0313 00:41:24.742393 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ab5985b6-0b69-4ae1-b09c-4545837a0e7c-typha-certs\") pod \"calico-typha-5686fdff6b-fl8s7\" (UID: \"ab5985b6-0b69-4ae1-b09c-4545837a0e7c\") " pod="calico-system/calico-typha-5686fdff6b-fl8s7" Mar 13 00:41:24.943012 systemd[1]: Created slice kubepods-besteffort-podd3086837_97e3_4bcc_bda2_1a08fd709bfa.slice - libcontainer container kubepods-besteffort-podd3086837_97e3_4bcc_bda2_1a08fd709bfa.slice. 
Mar 13 00:41:24.999955 containerd[1987]: time="2026-03-13T00:41:24.999767748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5686fdff6b-fl8s7,Uid:ab5985b6-0b69-4ae1-b09c-4545837a0e7c,Namespace:calico-system,Attempt:0,}" Mar 13 00:41:25.045361 kubelet[3329]: I0313 00:41:25.044450 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d3086837-97e3-4bcc-bda2-1a08fd709bfa-cni-log-dir\") pod \"calico-node-t9vks\" (UID: \"d3086837-97e3-4bcc-bda2-1a08fd709bfa\") " pod="calico-system/calico-node-t9vks" Mar 13 00:41:25.045361 kubelet[3329]: I0313 00:41:25.044618 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d3086837-97e3-4bcc-bda2-1a08fd709bfa-cni-net-dir\") pod \"calico-node-t9vks\" (UID: \"d3086837-97e3-4bcc-bda2-1a08fd709bfa\") " pod="calico-system/calico-node-t9vks" Mar 13 00:41:25.045361 kubelet[3329]: I0313 00:41:25.044655 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/d3086837-97e3-4bcc-bda2-1a08fd709bfa-sys-fs\") pod \"calico-node-t9vks\" (UID: \"d3086837-97e3-4bcc-bda2-1a08fd709bfa\") " pod="calico-system/calico-node-t9vks" Mar 13 00:41:25.045361 kubelet[3329]: I0313 00:41:25.044680 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d3086837-97e3-4bcc-bda2-1a08fd709bfa-var-run-calico\") pod \"calico-node-t9vks\" (UID: \"d3086837-97e3-4bcc-bda2-1a08fd709bfa\") " pod="calico-system/calico-node-t9vks" Mar 13 00:41:25.045361 kubelet[3329]: I0313 00:41:25.044710 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: 
\"kubernetes.io/host-path/d3086837-97e3-4bcc-bda2-1a08fd709bfa-bpffs\") pod \"calico-node-t9vks\" (UID: \"d3086837-97e3-4bcc-bda2-1a08fd709bfa\") " pod="calico-system/calico-node-t9vks" Mar 13 00:41:25.045708 kubelet[3329]: I0313 00:41:25.044739 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d3086837-97e3-4bcc-bda2-1a08fd709bfa-policysync\") pod \"calico-node-t9vks\" (UID: \"d3086837-97e3-4bcc-bda2-1a08fd709bfa\") " pod="calico-system/calico-node-t9vks" Mar 13 00:41:25.045708 kubelet[3329]: I0313 00:41:25.044761 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/d3086837-97e3-4bcc-bda2-1a08fd709bfa-nodeproc\") pod \"calico-node-t9vks\" (UID: \"d3086837-97e3-4bcc-bda2-1a08fd709bfa\") " pod="calico-system/calico-node-t9vks" Mar 13 00:41:25.045708 kubelet[3329]: I0313 00:41:25.044833 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zbsc\" (UniqueName: \"kubernetes.io/projected/d3086837-97e3-4bcc-bda2-1a08fd709bfa-kube-api-access-4zbsc\") pod \"calico-node-t9vks\" (UID: \"d3086837-97e3-4bcc-bda2-1a08fd709bfa\") " pod="calico-system/calico-node-t9vks" Mar 13 00:41:25.045708 kubelet[3329]: I0313 00:41:25.044865 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3086837-97e3-4bcc-bda2-1a08fd709bfa-lib-modules\") pod \"calico-node-t9vks\" (UID: \"d3086837-97e3-4bcc-bda2-1a08fd709bfa\") " pod="calico-system/calico-node-t9vks" Mar 13 00:41:25.045708 kubelet[3329]: I0313 00:41:25.045503 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d3086837-97e3-4bcc-bda2-1a08fd709bfa-var-lib-calico\") pod 
\"calico-node-t9vks\" (UID: \"d3086837-97e3-4bcc-bda2-1a08fd709bfa\") " pod="calico-system/calico-node-t9vks" Mar 13 00:41:25.045920 kubelet[3329]: I0313 00:41:25.045552 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d3086837-97e3-4bcc-bda2-1a08fd709bfa-node-certs\") pod \"calico-node-t9vks\" (UID: \"d3086837-97e3-4bcc-bda2-1a08fd709bfa\") " pod="calico-system/calico-node-t9vks" Mar 13 00:41:25.045920 kubelet[3329]: I0313 00:41:25.045575 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d3086837-97e3-4bcc-bda2-1a08fd709bfa-cni-bin-dir\") pod \"calico-node-t9vks\" (UID: \"d3086837-97e3-4bcc-bda2-1a08fd709bfa\") " pod="calico-system/calico-node-t9vks" Mar 13 00:41:25.045920 kubelet[3329]: I0313 00:41:25.045614 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3086837-97e3-4bcc-bda2-1a08fd709bfa-tigera-ca-bundle\") pod \"calico-node-t9vks\" (UID: \"d3086837-97e3-4bcc-bda2-1a08fd709bfa\") " pod="calico-system/calico-node-t9vks" Mar 13 00:41:25.045920 kubelet[3329]: I0313 00:41:25.045644 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d3086837-97e3-4bcc-bda2-1a08fd709bfa-flexvol-driver-host\") pod \"calico-node-t9vks\" (UID: \"d3086837-97e3-4bcc-bda2-1a08fd709bfa\") " pod="calico-system/calico-node-t9vks" Mar 13 00:41:25.045920 kubelet[3329]: I0313 00:41:25.045691 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3086837-97e3-4bcc-bda2-1a08fd709bfa-xtables-lock\") pod \"calico-node-t9vks\" (UID: \"d3086837-97e3-4bcc-bda2-1a08fd709bfa\") " 
pod="calico-system/calico-node-t9vks" Mar 13 00:41:25.057530 kubelet[3329]: E0313 00:41:25.056832 3329 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s9l6b" podUID="91c42f68-cc08-44d5-8392-1334f9631936" Mar 13 00:41:25.060495 containerd[1987]: time="2026-03-13T00:41:25.060446259Z" level=info msg="connecting to shim 92717c8116534ca44fe605742e7486d12f2d88adbc5441ea62981aacd30b9956" address="unix:///run/containerd/s/a211b79ec42272ec30f65fd0f1039efd2d9ca0b99b8b99c6ed33bb597943dba4" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:41:25.122637 systemd[1]: Started cri-containerd-92717c8116534ca44fe605742e7486d12f2d88adbc5441ea62981aacd30b9956.scope - libcontainer container 92717c8116534ca44fe605742e7486d12f2d88adbc5441ea62981aacd30b9956. Mar 13 00:41:25.147363 kubelet[3329]: I0313 00:41:25.146214 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/91c42f68-cc08-44d5-8392-1334f9631936-socket-dir\") pod \"csi-node-driver-s9l6b\" (UID: \"91c42f68-cc08-44d5-8392-1334f9631936\") " pod="calico-system/csi-node-driver-s9l6b" Mar 13 00:41:25.149911 kubelet[3329]: I0313 00:41:25.148632 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/91c42f68-cc08-44d5-8392-1334f9631936-kubelet-dir\") pod \"csi-node-driver-s9l6b\" (UID: \"91c42f68-cc08-44d5-8392-1334f9631936\") " pod="calico-system/csi-node-driver-s9l6b" Mar 13 00:41:25.149911 kubelet[3329]: I0313 00:41:25.148874 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/91c42f68-cc08-44d5-8392-1334f9631936-varrun\") pod 
\"csi-node-driver-s9l6b\" (UID: \"91c42f68-cc08-44d5-8392-1334f9631936\") " pod="calico-system/csi-node-driver-s9l6b" Mar 13 00:41:25.149911 kubelet[3329]: I0313 00:41:25.149057 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp4xs\" (UniqueName: \"kubernetes.io/projected/91c42f68-cc08-44d5-8392-1334f9631936-kube-api-access-lp4xs\") pod \"csi-node-driver-s9l6b\" (UID: \"91c42f68-cc08-44d5-8392-1334f9631936\") " pod="calico-system/csi-node-driver-s9l6b" Mar 13 00:41:25.149911 kubelet[3329]: I0313 00:41:25.149125 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/91c42f68-cc08-44d5-8392-1334f9631936-registration-dir\") pod \"csi-node-driver-s9l6b\" (UID: \"91c42f68-cc08-44d5-8392-1334f9631936\") " pod="calico-system/csi-node-driver-s9l6b" Mar 13 00:41:25.171674 kubelet[3329]: E0313 00:41:25.171637 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:25.171674 kubelet[3329]: W0313 00:41:25.171671 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:25.171843 kubelet[3329]: E0313 00:41:25.171710 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:41:25.196900 kubelet[3329]: E0313 00:41:25.195447 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:25.197171 kubelet[3329]: W0313 00:41:25.197093 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:25.197171 kubelet[3329]: E0313 00:41:25.197129 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:41:25.250020 kubelet[3329]: E0313 00:41:25.249991 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:25.250020 kubelet[3329]: W0313 00:41:25.250017 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:25.250304 kubelet[3329]: E0313 00:41:25.250277 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:41:25.251112 kubelet[3329]: E0313 00:41:25.251076 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:25.251112 kubelet[3329]: W0313 00:41:25.251111 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:25.251255 kubelet[3329]: E0313 00:41:25.251129 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:41:25.251562 kubelet[3329]: E0313 00:41:25.251545 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:25.251562 kubelet[3329]: W0313 00:41:25.251561 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:25.251708 kubelet[3329]: E0313 00:41:25.251592 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:41:25.252477 kubelet[3329]: E0313 00:41:25.252401 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:25.252477 kubelet[3329]: W0313 00:41:25.252419 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:25.252477 kubelet[3329]: E0313 00:41:25.252454 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:41:25.253036 kubelet[3329]: E0313 00:41:25.253018 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:25.253036 kubelet[3329]: W0313 00:41:25.253034 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:25.253183 kubelet[3329]: E0313 00:41:25.253049 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:41:25.253819 kubelet[3329]: E0313 00:41:25.253787 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:25.253819 kubelet[3329]: W0313 00:41:25.253806 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:25.254481 kubelet[3329]: E0313 00:41:25.253821 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:41:25.254481 kubelet[3329]: E0313 00:41:25.254064 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:25.254481 kubelet[3329]: W0313 00:41:25.254074 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:25.254481 kubelet[3329]: E0313 00:41:25.254086 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:41:25.254481 kubelet[3329]: E0313 00:41:25.254286 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:25.254481 kubelet[3329]: W0313 00:41:25.254295 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:25.254481 kubelet[3329]: E0313 00:41:25.254307 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:41:25.255269 kubelet[3329]: E0313 00:41:25.254515 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:25.255269 kubelet[3329]: W0313 00:41:25.254524 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:25.255269 kubelet[3329]: E0313 00:41:25.254536 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:41:25.255269 kubelet[3329]: E0313 00:41:25.254766 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:25.255269 kubelet[3329]: W0313 00:41:25.254774 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:25.255269 kubelet[3329]: E0313 00:41:25.254786 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:41:25.255269 kubelet[3329]: E0313 00:41:25.254956 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:25.255269 kubelet[3329]: W0313 00:41:25.254965 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:25.255269 kubelet[3329]: E0313 00:41:25.254977 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:41:25.255647 containerd[1987]: time="2026-03-13T00:41:25.254672545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5686fdff6b-fl8s7,Uid:ab5985b6-0b69-4ae1-b09c-4545837a0e7c,Namespace:calico-system,Attempt:0,} returns sandbox id \"92717c8116534ca44fe605742e7486d12f2d88adbc5441ea62981aacd30b9956\"" Mar 13 00:41:25.255685 kubelet[3329]: E0313 00:41:25.255468 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:25.255685 kubelet[3329]: W0313 00:41:25.255481 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:25.255685 kubelet[3329]: E0313 00:41:25.255495 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:41:25.256894 containerd[1987]: time="2026-03-13T00:41:25.256480292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t9vks,Uid:d3086837-97e3-4bcc-bda2-1a08fd709bfa,Namespace:calico-system,Attempt:0,}" Mar 13 00:41:25.257072 kubelet[3329]: E0313 00:41:25.256753 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:25.257072 kubelet[3329]: W0313 00:41:25.256765 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:25.257072 kubelet[3329]: E0313 00:41:25.256780 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:41:25.257695 kubelet[3329]: E0313 00:41:25.257642 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:25.257695 kubelet[3329]: W0313 00:41:25.257656 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:25.257695 kubelet[3329]: E0313 00:41:25.257670 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:41:25.257918 kubelet[3329]: E0313 00:41:25.257899 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:25.257983 kubelet[3329]: W0313 00:41:25.257915 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:25.257983 kubelet[3329]: E0313 00:41:25.257955 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:41:25.258395 kubelet[3329]: E0313 00:41:25.258246 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:25.258395 kubelet[3329]: W0313 00:41:25.258278 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:25.258395 kubelet[3329]: E0313 00:41:25.258291 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:41:25.258836 kubelet[3329]: E0313 00:41:25.258564 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:25.258836 kubelet[3329]: W0313 00:41:25.258576 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:25.258836 kubelet[3329]: E0313 00:41:25.258599 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:41:25.259021 kubelet[3329]: E0313 00:41:25.258897 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:25.259021 kubelet[3329]: W0313 00:41:25.258908 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:25.259021 kubelet[3329]: E0313 00:41:25.258921 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:41:25.259287 kubelet[3329]: E0313 00:41:25.259261 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:25.259287 kubelet[3329]: W0313 00:41:25.259273 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:25.259806 kubelet[3329]: E0313 00:41:25.259286 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:41:25.259967 kubelet[3329]: E0313 00:41:25.259869 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:25.259967 kubelet[3329]: W0313 00:41:25.259887 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:25.260200 kubelet[3329]: E0313 00:41:25.259900 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:41:25.260783 kubelet[3329]: E0313 00:41:25.260763 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:25.260783 kubelet[3329]: W0313 00:41:25.260781 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:25.260887 kubelet[3329]: E0313 00:41:25.260796 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:41:25.261357 kubelet[3329]: E0313 00:41:25.261296 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:25.261357 kubelet[3329]: W0313 00:41:25.261310 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:25.261464 kubelet[3329]: E0313 00:41:25.261451 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:41:25.263253 kubelet[3329]: E0313 00:41:25.262202 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:25.263253 kubelet[3329]: W0313 00:41:25.262457 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:25.263253 kubelet[3329]: E0313 00:41:25.262477 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:41:25.263436 containerd[1987]: time="2026-03-13T00:41:25.262671030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 13 00:41:25.264220 kubelet[3329]: E0313 00:41:25.264151 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:25.264220 kubelet[3329]: W0313 00:41:25.264165 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:25.264220 kubelet[3329]: E0313 00:41:25.264180 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:41:25.264682 kubelet[3329]: E0313 00:41:25.264445 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:25.264682 kubelet[3329]: W0313 00:41:25.264454 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:25.264682 kubelet[3329]: E0313 00:41:25.264467 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:41:25.275103 kubelet[3329]: E0313 00:41:25.275073 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:25.275240 kubelet[3329]: W0313 00:41:25.275124 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:25.275240 kubelet[3329]: E0313 00:41:25.275151 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:41:25.289784 containerd[1987]: time="2026-03-13T00:41:25.289665563Z" level=info msg="connecting to shim 7b42e84c7ec77e7f65bd58802e50927d60cfd8e11cb43b7fdf236eb2127177cd" address="unix:///run/containerd/s/f77a6d347334502ce7b1b25889c456290c435e2d57826266ad095a44413c0540" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:41:25.346413 systemd[1]: Started cri-containerd-7b42e84c7ec77e7f65bd58802e50927d60cfd8e11cb43b7fdf236eb2127177cd.scope - libcontainer container 7b42e84c7ec77e7f65bd58802e50927d60cfd8e11cb43b7fdf236eb2127177cd. 
Mar 13 00:41:25.398948 containerd[1987]: time="2026-03-13T00:41:25.398897650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t9vks,Uid:d3086837-97e3-4bcc-bda2-1a08fd709bfa,Namespace:calico-system,Attempt:0,} returns sandbox id \"7b42e84c7ec77e7f65bd58802e50927d60cfd8e11cb43b7fdf236eb2127177cd\"" Mar 13 00:41:26.640014 kubelet[3329]: E0313 00:41:26.639967 3329 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s9l6b" podUID="91c42f68-cc08-44d5-8392-1334f9631936" Mar 13 00:41:26.640262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3289646297.mount: Deactivated successfully. Mar 13 00:41:27.851111 containerd[1987]: time="2026-03-13T00:41:27.851054446Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:27.852501 containerd[1987]: time="2026-03-13T00:41:27.852469141Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Mar 13 00:41:27.853365 containerd[1987]: time="2026-03-13T00:41:27.853136947Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:27.855473 containerd[1987]: time="2026-03-13T00:41:27.855422814Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:27.856356 containerd[1987]: time="2026-03-13T00:41:27.856113315Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id 
\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.593404153s" Mar 13 00:41:27.856356 containerd[1987]: time="2026-03-13T00:41:27.856153463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 13 00:41:27.858354 containerd[1987]: time="2026-03-13T00:41:27.858312607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 13 00:41:27.890028 containerd[1987]: time="2026-03-13T00:41:27.889965075Z" level=info msg="CreateContainer within sandbox \"92717c8116534ca44fe605742e7486d12f2d88adbc5441ea62981aacd30b9956\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 13 00:41:27.906367 containerd[1987]: time="2026-03-13T00:41:27.905420415Z" level=info msg="Container c51859da03f94930b9b7f14651c2ad363525a42eff02275f8976453347404b1b: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:41:27.917514 containerd[1987]: time="2026-03-13T00:41:27.917466758Z" level=info msg="CreateContainer within sandbox \"92717c8116534ca44fe605742e7486d12f2d88adbc5441ea62981aacd30b9956\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c51859da03f94930b9b7f14651c2ad363525a42eff02275f8976453347404b1b\"" Mar 13 00:41:27.920503 containerd[1987]: time="2026-03-13T00:41:27.918297325Z" level=info msg="StartContainer for \"c51859da03f94930b9b7f14651c2ad363525a42eff02275f8976453347404b1b\"" Mar 13 00:41:27.920955 containerd[1987]: time="2026-03-13T00:41:27.920917578Z" level=info msg="connecting to shim c51859da03f94930b9b7f14651c2ad363525a42eff02275f8976453347404b1b" address="unix:///run/containerd/s/a211b79ec42272ec30f65fd0f1039efd2d9ca0b99b8b99c6ed33bb597943dba4" protocol=ttrpc version=3 Mar 13 
00:41:27.948706 systemd[1]: Started cri-containerd-c51859da03f94930b9b7f14651c2ad363525a42eff02275f8976453347404b1b.scope - libcontainer container c51859da03f94930b9b7f14651c2ad363525a42eff02275f8976453347404b1b. Mar 13 00:41:28.037511 containerd[1987]: time="2026-03-13T00:41:28.037471039Z" level=info msg="StartContainer for \"c51859da03f94930b9b7f14651c2ad363525a42eff02275f8976453347404b1b\" returns successfully" Mar 13 00:41:28.640015 kubelet[3329]: E0313 00:41:28.639949 3329 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s9l6b" podUID="91c42f68-cc08-44d5-8392-1334f9631936" Mar 13 00:41:28.869583 kubelet[3329]: E0313 00:41:28.869467 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:28.869583 kubelet[3329]: W0313 00:41:28.869495 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:28.869583 kubelet[3329]: E0313 00:41:28.869522 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:41:28.870076 kubelet[3329]: E0313 00:41:28.870055 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:28.870076 kubelet[3329]: W0313 00:41:28.870070 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:28.870216 kubelet[3329]: E0313 00:41:28.870087 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:41:28.870333 kubelet[3329]: E0313 00:41:28.870318 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:28.870333 kubelet[3329]: W0313 00:41:28.870330 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:28.870467 kubelet[3329]: E0313 00:41:28.870372 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:41:28.870660 kubelet[3329]: E0313 00:41:28.870644 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:28.870660 kubelet[3329]: W0313 00:41:28.870656 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:28.870761 kubelet[3329]: E0313 00:41:28.870670 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:41:28.870925 kubelet[3329]: E0313 00:41:28.870912 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:28.871004 kubelet[3329]: W0313 00:41:28.870982 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:28.871083 kubelet[3329]: E0313 00:41:28.871003 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:41:28.871218 kubelet[3329]: E0313 00:41:28.871196 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:28.871218 kubelet[3329]: W0313 00:41:28.871209 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:28.871392 kubelet[3329]: E0313 00:41:28.871220 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:41:28.871489 kubelet[3329]: E0313 00:41:28.871437 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:28.871489 kubelet[3329]: W0313 00:41:28.871447 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:28.871489 kubelet[3329]: E0313 00:41:28.871459 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:41:28.871682 kubelet[3329]: E0313 00:41:28.871648 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:28.871682 kubelet[3329]: W0313 00:41:28.871657 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:28.871682 kubelet[3329]: E0313 00:41:28.871668 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:41:28.871899 kubelet[3329]: E0313 00:41:28.871884 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:28.871959 kubelet[3329]: W0313 00:41:28.871898 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:28.871959 kubelet[3329]: E0313 00:41:28.871921 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:41:28.872151 kubelet[3329]: E0313 00:41:28.872129 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:28.872151 kubelet[3329]: W0313 00:41:28.872143 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:28.872267 kubelet[3329]: E0313 00:41:28.872180 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:41:28.872443 kubelet[3329]: E0313 00:41:28.872425 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:28.872443 kubelet[3329]: W0313 00:41:28.872438 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:28.872655 kubelet[3329]: E0313 00:41:28.872451 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:41:28.872726 kubelet[3329]: E0313 00:41:28.872675 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:28.872726 kubelet[3329]: W0313 00:41:28.872687 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:28.872726 kubelet[3329]: E0313 00:41:28.872699 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:41:28.872911 kubelet[3329]: E0313 00:41:28.872890 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:28.872911 kubelet[3329]: W0313 00:41:28.872899 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:28.873208 kubelet[3329]: E0313 00:41:28.872911 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:41:28.873208 kubelet[3329]: E0313 00:41:28.873185 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:28.873208 kubelet[3329]: W0313 00:41:28.873195 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:28.873208 kubelet[3329]: E0313 00:41:28.873208 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:41:28.873429 kubelet[3329]: E0313 00:41:28.873409 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:28.873429 kubelet[3329]: W0313 00:41:28.873417 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:28.873530 kubelet[3329]: E0313 00:41:28.873429 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:41:28.884846 kubelet[3329]: E0313 00:41:28.884812 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:28.884846 kubelet[3329]: W0313 00:41:28.884837 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:28.885293 kubelet[3329]: E0313 00:41:28.884862 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:41:28.885293 kubelet[3329]: E0313 00:41:28.885232 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:28.885293 kubelet[3329]: W0313 00:41:28.885270 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:28.885293 kubelet[3329]: E0313 00:41:28.885286 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:41:28.885850 kubelet[3329]: E0313 00:41:28.885554 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:28.885850 kubelet[3329]: W0313 00:41:28.885569 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:28.885850 kubelet[3329]: E0313 00:41:28.885583 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:41:28.885850 kubelet[3329]: E0313 00:41:28.885836 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:28.885850 kubelet[3329]: W0313 00:41:28.885845 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:28.886179 kubelet[3329]: E0313 00:41:28.885858 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:41:28.886179 kubelet[3329]: E0313 00:41:28.886108 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:28.886179 kubelet[3329]: W0313 00:41:28.886119 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:28.886179 kubelet[3329]: E0313 00:41:28.886132 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:41:28.886400 kubelet[3329]: E0313 00:41:28.886386 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:28.886400 kubelet[3329]: W0313 00:41:28.886395 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:28.886481 kubelet[3329]: E0313 00:41:28.886407 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:41:28.886709 kubelet[3329]: E0313 00:41:28.886692 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:28.886709 kubelet[3329]: W0313 00:41:28.886706 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:28.886821 kubelet[3329]: E0313 00:41:28.886719 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:41:28.886973 kubelet[3329]: E0313 00:41:28.886958 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:28.886973 kubelet[3329]: W0313 00:41:28.886970 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:28.887068 kubelet[3329]: E0313 00:41:28.886982 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:41:28.887272 kubelet[3329]: E0313 00:41:28.887255 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:28.887272 kubelet[3329]: W0313 00:41:28.887268 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:28.887411 kubelet[3329]: E0313 00:41:28.887281 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:41:28.887626 kubelet[3329]: E0313 00:41:28.887608 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:28.888017 kubelet[3329]: W0313 00:41:28.887996 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:28.888017 kubelet[3329]: E0313 00:41:28.888016 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:41:28.888293 kubelet[3329]: E0313 00:41:28.888207 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:28.888432 kubelet[3329]: W0313 00:41:28.888293 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:28.888432 kubelet[3329]: E0313 00:41:28.888308 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:41:28.888573 kubelet[3329]: E0313 00:41:28.888521 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:28.888573 kubelet[3329]: W0313 00:41:28.888531 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:28.888573 kubelet[3329]: E0313 00:41:28.888544 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:41:28.888738 kubelet[3329]: E0313 00:41:28.888722 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:28.888738 kubelet[3329]: W0313 00:41:28.888736 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:28.888872 kubelet[3329]: E0313 00:41:28.888748 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:41:28.889171 kubelet[3329]: E0313 00:41:28.889139 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:28.889171 kubelet[3329]: W0313 00:41:28.889155 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:28.889171 kubelet[3329]: E0313 00:41:28.889171 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:41:28.889577 kubelet[3329]: E0313 00:41:28.889559 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:28.889577 kubelet[3329]: W0313 00:41:28.889574 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:28.889710 kubelet[3329]: E0313 00:41:28.889588 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:41:28.889821 kubelet[3329]: E0313 00:41:28.889802 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:28.889821 kubelet[3329]: W0313 00:41:28.889817 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:28.889920 kubelet[3329]: E0313 00:41:28.889833 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:41:28.890284 kubelet[3329]: E0313 00:41:28.890194 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:28.890284 kubelet[3329]: W0313 00:41:28.890208 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:28.890284 kubelet[3329]: E0313 00:41:28.890221 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:41:28.891527 kubelet[3329]: E0313 00:41:28.890442 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:41:28.891527 kubelet[3329]: W0313 00:41:28.890452 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:41:28.891527 kubelet[3329]: E0313 00:41:28.890463 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:41:29.314960 containerd[1987]: time="2026-03-13T00:41:29.314830083Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:29.316300 containerd[1987]: time="2026-03-13T00:41:29.316134167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 13 00:41:29.317411 containerd[1987]: time="2026-03-13T00:41:29.317375094Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:29.319837 containerd[1987]: time="2026-03-13T00:41:29.319738741Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:29.320601 containerd[1987]: time="2026-03-13T00:41:29.320539084Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.462174003s" Mar 13 00:41:29.320601 containerd[1987]: time="2026-03-13T00:41:29.320578600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 13 00:41:29.327987 containerd[1987]: time="2026-03-13T00:41:29.327941310Z" level=info msg="CreateContainer within sandbox \"7b42e84c7ec77e7f65bd58802e50927d60cfd8e11cb43b7fdf236eb2127177cd\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 13 00:41:29.349359 containerd[1987]: time="2026-03-13T00:41:29.347922459Z" level=info msg="Container 6688647dfd793751e4129cbb0cea1ed120e210002cf925816286ca01a431cd68: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:41:29.360951 containerd[1987]: time="2026-03-13T00:41:29.360900098Z" level=info msg="CreateContainer within sandbox \"7b42e84c7ec77e7f65bd58802e50927d60cfd8e11cb43b7fdf236eb2127177cd\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6688647dfd793751e4129cbb0cea1ed120e210002cf925816286ca01a431cd68\"" Mar 13 00:41:29.363789 containerd[1987]: time="2026-03-13T00:41:29.362481826Z" level=info msg="StartContainer for \"6688647dfd793751e4129cbb0cea1ed120e210002cf925816286ca01a431cd68\"" Mar 13 00:41:29.364649 containerd[1987]: time="2026-03-13T00:41:29.364609366Z" level=info msg="connecting to shim 6688647dfd793751e4129cbb0cea1ed120e210002cf925816286ca01a431cd68" address="unix:///run/containerd/s/f77a6d347334502ce7b1b25889c456290c435e2d57826266ad095a44413c0540" protocol=ttrpc version=3 Mar 13 00:41:29.407609 systemd[1]: Started cri-containerd-6688647dfd793751e4129cbb0cea1ed120e210002cf925816286ca01a431cd68.scope - libcontainer container 6688647dfd793751e4129cbb0cea1ed120e210002cf925816286ca01a431cd68. Mar 13 00:41:29.483165 containerd[1987]: time="2026-03-13T00:41:29.483111764Z" level=info msg="StartContainer for \"6688647dfd793751e4129cbb0cea1ed120e210002cf925816286ca01a431cd68\" returns successfully" Mar 13 00:41:29.496058 systemd[1]: cri-containerd-6688647dfd793751e4129cbb0cea1ed120e210002cf925816286ca01a431cd68.scope: Deactivated successfully. 
Mar 13 00:41:29.530264 containerd[1987]: time="2026-03-13T00:41:29.530188070Z" level=info msg="received container exit event container_id:\"6688647dfd793751e4129cbb0cea1ed120e210002cf925816286ca01a431cd68\" id:\"6688647dfd793751e4129cbb0cea1ed120e210002cf925816286ca01a431cd68\" pid:4228 exited_at:{seconds:1773362489 nanos:499020556}" Mar 13 00:41:29.561573 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6688647dfd793751e4129cbb0cea1ed120e210002cf925816286ca01a431cd68-rootfs.mount: Deactivated successfully. Mar 13 00:41:29.774973 kubelet[3329]: I0313 00:41:29.774905 3329 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:41:29.776929 containerd[1987]: time="2026-03-13T00:41:29.776895068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 13 00:41:29.800045 kubelet[3329]: I0313 00:41:29.797543 3329 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-5686fdff6b-fl8s7" podStartSLOduration=3.201238635 podStartE2EDuration="5.797525264s" podCreationTimestamp="2026-03-13 00:41:24 +0000 UTC" firstStartedPulling="2026-03-13 00:41:25.260845246 +0000 UTC m=+19.797090724" lastFinishedPulling="2026-03-13 00:41:27.857131882 +0000 UTC m=+22.393377353" observedRunningTime="2026-03-13 00:41:28.795309163 +0000 UTC m=+23.331554655" watchObservedRunningTime="2026-03-13 00:41:29.797525264 +0000 UTC m=+24.333770758" Mar 13 00:41:30.640036 kubelet[3329]: E0313 00:41:30.639985 3329 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s9l6b" podUID="91c42f68-cc08-44d5-8392-1334f9631936" Mar 13 00:41:32.647587 kubelet[3329]: E0313 00:41:32.647401 3329 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s9l6b" podUID="91c42f68-cc08-44d5-8392-1334f9631936" Mar 13 00:41:34.640364 kubelet[3329]: E0313 00:41:34.640299 3329 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s9l6b" podUID="91c42f68-cc08-44d5-8392-1334f9631936" Mar 13 00:41:36.640244 kubelet[3329]: E0313 00:41:36.639836 3329 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s9l6b" podUID="91c42f68-cc08-44d5-8392-1334f9631936" Mar 13 00:41:38.639574 kubelet[3329]: E0313 00:41:38.639509 3329 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s9l6b" podUID="91c42f68-cc08-44d5-8392-1334f9631936" Mar 13 00:41:40.639611 kubelet[3329]: E0313 00:41:40.639550 3329 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s9l6b" podUID="91c42f68-cc08-44d5-8392-1334f9631936" Mar 13 00:41:41.470939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1081595997.mount: Deactivated successfully. 
Mar 13 00:41:41.556366 containerd[1987]: time="2026-03-13T00:41:41.534219144Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:41.557001 containerd[1987]: time="2026-03-13T00:41:41.555952094Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 13 00:41:41.563881 containerd[1987]: time="2026-03-13T00:41:41.563826878Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:41.568127 containerd[1987]: time="2026-03-13T00:41:41.567328540Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:41.568127 containerd[1987]: time="2026-03-13T00:41:41.567993237Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 11.791053764s" Mar 13 00:41:41.568127 containerd[1987]: time="2026-03-13T00:41:41.568025857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 13 00:41:41.574204 containerd[1987]: time="2026-03-13T00:41:41.574140624Z" level=info msg="CreateContainer within sandbox \"7b42e84c7ec77e7f65bd58802e50927d60cfd8e11cb43b7fdf236eb2127177cd\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 13 00:41:41.599498 containerd[1987]: time="2026-03-13T00:41:41.599452326Z" level=info 
msg="Container 6226555baf7ffb7699fb7156293bced86d4e65efc321f07c90acde82ceeb615b: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:41:41.603197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2483645929.mount: Deactivated successfully. Mar 13 00:41:41.638488 containerd[1987]: time="2026-03-13T00:41:41.638441614Z" level=info msg="CreateContainer within sandbox \"7b42e84c7ec77e7f65bd58802e50927d60cfd8e11cb43b7fdf236eb2127177cd\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"6226555baf7ffb7699fb7156293bced86d4e65efc321f07c90acde82ceeb615b\"" Mar 13 00:41:41.639530 containerd[1987]: time="2026-03-13T00:41:41.639423232Z" level=info msg="StartContainer for \"6226555baf7ffb7699fb7156293bced86d4e65efc321f07c90acde82ceeb615b\"" Mar 13 00:41:41.643581 containerd[1987]: time="2026-03-13T00:41:41.643538858Z" level=info msg="connecting to shim 6226555baf7ffb7699fb7156293bced86d4e65efc321f07c90acde82ceeb615b" address="unix:///run/containerd/s/f77a6d347334502ce7b1b25889c456290c435e2d57826266ad095a44413c0540" protocol=ttrpc version=3 Mar 13 00:41:41.722533 systemd[1]: Started cri-containerd-6226555baf7ffb7699fb7156293bced86d4e65efc321f07c90acde82ceeb615b.scope - libcontainer container 6226555baf7ffb7699fb7156293bced86d4e65efc321f07c90acde82ceeb615b. Mar 13 00:41:41.819384 containerd[1987]: time="2026-03-13T00:41:41.819282953Z" level=info msg="StartContainer for \"6226555baf7ffb7699fb7156293bced86d4e65efc321f07c90acde82ceeb615b\" returns successfully" Mar 13 00:41:42.044474 systemd[1]: cri-containerd-6226555baf7ffb7699fb7156293bced86d4e65efc321f07c90acde82ceeb615b.scope: Deactivated successfully. Mar 13 00:41:42.044860 systemd[1]: cri-containerd-6226555baf7ffb7699fb7156293bced86d4e65efc321f07c90acde82ceeb615b.scope: Consumed 95ms CPU time, 39.4M memory peak, 19M read from disk. 
Mar 13 00:41:42.082401 containerd[1987]: time="2026-03-13T00:41:42.082313593Z" level=info msg="received container exit event container_id:\"6226555baf7ffb7699fb7156293bced86d4e65efc321f07c90acde82ceeb615b\" id:\"6226555baf7ffb7699fb7156293bced86d4e65efc321f07c90acde82ceeb615b\" pid:4294 exited_at:{seconds:1773362502 nanos:81910802}" Mar 13 00:41:42.470990 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6226555baf7ffb7699fb7156293bced86d4e65efc321f07c90acde82ceeb615b-rootfs.mount: Deactivated successfully. Mar 13 00:41:42.640059 kubelet[3329]: E0313 00:41:42.640007 3329 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s9l6b" podUID="91c42f68-cc08-44d5-8392-1334f9631936" Mar 13 00:41:42.831192 containerd[1987]: time="2026-03-13T00:41:42.831038830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 13 00:41:44.640496 kubelet[3329]: E0313 00:41:44.640369 3329 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s9l6b" podUID="91c42f68-cc08-44d5-8392-1334f9631936" Mar 13 00:41:45.832150 containerd[1987]: time="2026-03-13T00:41:45.832074181Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:45.833428 containerd[1987]: time="2026-03-13T00:41:45.833389526Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 13 00:41:45.835267 containerd[1987]: time="2026-03-13T00:41:45.834714338Z" level=info msg="ImageCreate event 
name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:45.838411 containerd[1987]: time="2026-03-13T00:41:45.838380675Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:45.839390 containerd[1987]: time="2026-03-13T00:41:45.839355079Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.008237746s" Mar 13 00:41:45.839493 containerd[1987]: time="2026-03-13T00:41:45.839393409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 13 00:41:45.847045 containerd[1987]: time="2026-03-13T00:41:45.846997046Z" level=info msg="CreateContainer within sandbox \"7b42e84c7ec77e7f65bd58802e50927d60cfd8e11cb43b7fdf236eb2127177cd\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 13 00:41:45.874592 containerd[1987]: time="2026-03-13T00:41:45.874549973Z" level=info msg="Container 78a2c0adfce64f42019ccd8f8a7f8d2c2d880004df86d40ac027ea2b8002df39: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:41:45.892186 containerd[1987]: time="2026-03-13T00:41:45.892047770Z" level=info msg="CreateContainer within sandbox \"7b42e84c7ec77e7f65bd58802e50927d60cfd8e11cb43b7fdf236eb2127177cd\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"78a2c0adfce64f42019ccd8f8a7f8d2c2d880004df86d40ac027ea2b8002df39\"" Mar 13 00:41:45.893291 containerd[1987]: time="2026-03-13T00:41:45.893070744Z" 
level=info msg="StartContainer for \"78a2c0adfce64f42019ccd8f8a7f8d2c2d880004df86d40ac027ea2b8002df39\"" Mar 13 00:41:45.895493 containerd[1987]: time="2026-03-13T00:41:45.895457522Z" level=info msg="connecting to shim 78a2c0adfce64f42019ccd8f8a7f8d2c2d880004df86d40ac027ea2b8002df39" address="unix:///run/containerd/s/f77a6d347334502ce7b1b25889c456290c435e2d57826266ad095a44413c0540" protocol=ttrpc version=3 Mar 13 00:41:45.927583 systemd[1]: Started cri-containerd-78a2c0adfce64f42019ccd8f8a7f8d2c2d880004df86d40ac027ea2b8002df39.scope - libcontainer container 78a2c0adfce64f42019ccd8f8a7f8d2c2d880004df86d40ac027ea2b8002df39. Mar 13 00:41:46.027358 containerd[1987]: time="2026-03-13T00:41:46.027255451Z" level=info msg="StartContainer for \"78a2c0adfce64f42019ccd8f8a7f8d2c2d880004df86d40ac027ea2b8002df39\" returns successfully" Mar 13 00:41:46.639448 kubelet[3329]: E0313 00:41:46.639395 3329 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s9l6b" podUID="91c42f68-cc08-44d5-8392-1334f9631936" Mar 13 00:41:46.813857 systemd[1]: cri-containerd-78a2c0adfce64f42019ccd8f8a7f8d2c2d880004df86d40ac027ea2b8002df39.scope: Deactivated successfully. Mar 13 00:41:46.814229 systemd[1]: cri-containerd-78a2c0adfce64f42019ccd8f8a7f8d2c2d880004df86d40ac027ea2b8002df39.scope: Consumed 601ms CPU time, 171.3M memory peak, 6.2M read from disk, 177M written to disk. 
Mar 13 00:41:46.821169 containerd[1987]: time="2026-03-13T00:41:46.821098252Z" level=info msg="received container exit event container_id:\"78a2c0adfce64f42019ccd8f8a7f8d2c2d880004df86d40ac027ea2b8002df39\" id:\"78a2c0adfce64f42019ccd8f8a7f8d2c2d880004df86d40ac027ea2b8002df39\" pid:4354 exited_at:{seconds:1773362506 nanos:819204033}" Mar 13 00:41:46.915162 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78a2c0adfce64f42019ccd8f8a7f8d2c2d880004df86d40ac027ea2b8002df39-rootfs.mount: Deactivated successfully. Mar 13 00:41:46.934045 kubelet[3329]: I0313 00:41:46.933737 3329 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Mar 13 00:41:47.242757 systemd[1]: Created slice kubepods-burstable-podf539e182_c005_4e5f_b423_9765e83f57bd.slice - libcontainer container kubepods-burstable-podf539e182_c005_4e5f_b423_9765e83f57bd.slice. Mar 13 00:41:47.261556 systemd[1]: Created slice kubepods-besteffort-pod76c51aee_1eb6_4a51_9e52_4fb871cbe059.slice - libcontainer container kubepods-besteffort-pod76c51aee_1eb6_4a51_9e52_4fb871cbe059.slice. Mar 13 00:41:47.272642 systemd[1]: Created slice kubepods-burstable-pod1d4c1b77_472e_498a_9a86_012ddbcbd0a3.slice - libcontainer container kubepods-burstable-pod1d4c1b77_472e_498a_9a86_012ddbcbd0a3.slice. Mar 13 00:41:47.281734 systemd[1]: Created slice kubepods-besteffort-poda572913e_ac86_4e32_ab1d_883c425c261c.slice - libcontainer container kubepods-besteffort-poda572913e_ac86_4e32_ab1d_883c425c261c.slice. 
Mar 13 00:41:47.324747 kubelet[3329]: I0313 00:41:47.324680 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d4c1b77-472e-498a-9a86-012ddbcbd0a3-config-volume\") pod \"coredns-7d764666f9-xn5k5\" (UID: \"1d4c1b77-472e-498a-9a86-012ddbcbd0a3\") " pod="kube-system/coredns-7d764666f9-xn5k5" Mar 13 00:41:47.324747 kubelet[3329]: I0313 00:41:47.324731 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a572913e-ac86-4e32-ab1d-883c425c261c-calico-apiserver-certs\") pod \"calico-apiserver-7dcd7cbd55-8kvp5\" (UID: \"a572913e-ac86-4e32-ab1d-883c425c261c\") " pod="calico-system/calico-apiserver-7dcd7cbd55-8kvp5" Mar 13 00:41:47.324975 kubelet[3329]: I0313 00:41:47.324758 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqb7v\" (UniqueName: \"kubernetes.io/projected/f539e182-c005-4e5f-b423-9765e83f57bd-kube-api-access-cqb7v\") pod \"coredns-7d764666f9-gb929\" (UID: \"f539e182-c005-4e5f-b423-9765e83f57bd\") " pod="kube-system/coredns-7d764666f9-gb929" Mar 13 00:41:47.324975 kubelet[3329]: I0313 00:41:47.324782 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76c51aee-1eb6-4a51-9e52-4fb871cbe059-tigera-ca-bundle\") pod \"calico-kube-controllers-59466cf6fd-dg9jn\" (UID: \"76c51aee-1eb6-4a51-9e52-4fb871cbe059\") " pod="calico-system/calico-kube-controllers-59466cf6fd-dg9jn" Mar 13 00:41:47.324975 kubelet[3329]: I0313 00:41:47.324801 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cd2s\" (UniqueName: \"kubernetes.io/projected/a572913e-ac86-4e32-ab1d-883c425c261c-kube-api-access-8cd2s\") pod 
\"calico-apiserver-7dcd7cbd55-8kvp5\" (UID: \"a572913e-ac86-4e32-ab1d-883c425c261c\") " pod="calico-system/calico-apiserver-7dcd7cbd55-8kvp5" Mar 13 00:41:47.324975 kubelet[3329]: I0313 00:41:47.324820 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f539e182-c005-4e5f-b423-9765e83f57bd-config-volume\") pod \"coredns-7d764666f9-gb929\" (UID: \"f539e182-c005-4e5f-b423-9765e83f57bd\") " pod="kube-system/coredns-7d764666f9-gb929" Mar 13 00:41:47.324975 kubelet[3329]: I0313 00:41:47.324842 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fhxq\" (UniqueName: \"kubernetes.io/projected/76c51aee-1eb6-4a51-9e52-4fb871cbe059-kube-api-access-4fhxq\") pod \"calico-kube-controllers-59466cf6fd-dg9jn\" (UID: \"76c51aee-1eb6-4a51-9e52-4fb871cbe059\") " pod="calico-system/calico-kube-controllers-59466cf6fd-dg9jn" Mar 13 00:41:47.325793 kubelet[3329]: I0313 00:41:47.324875 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfw7q\" (UniqueName: \"kubernetes.io/projected/1d4c1b77-472e-498a-9a86-012ddbcbd0a3-kube-api-access-mfw7q\") pod \"coredns-7d764666f9-xn5k5\" (UID: \"1d4c1b77-472e-498a-9a86-012ddbcbd0a3\") " pod="kube-system/coredns-7d764666f9-xn5k5" Mar 13 00:41:47.344038 systemd[1]: Created slice kubepods-besteffort-podca7d0db0_4f56_4491_a352_bfabf2912a53.slice - libcontainer container kubepods-besteffort-podca7d0db0_4f56_4491_a352_bfabf2912a53.slice. Mar 13 00:41:47.355260 systemd[1]: Created slice kubepods-besteffort-pod924e8261_afd4_493f_949c_c57b7c9fdec4.slice - libcontainer container kubepods-besteffort-pod924e8261_afd4_493f_949c_c57b7c9fdec4.slice. 
Mar 13 00:41:47.369639 systemd[1]: Created slice kubepods-besteffort-pod5bf3d678_62d5_4720_9956_5f123a7e4d24.slice - libcontainer container kubepods-besteffort-pod5bf3d678_62d5_4720_9956_5f123a7e4d24.slice. Mar 13 00:41:47.426401 kubelet[3329]: I0313 00:41:47.425539 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bf3d678-62d5-4720-9956-5f123a7e4d24-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-52fsp\" (UID: \"5bf3d678-62d5-4720-9956-5f123a7e4d24\") " pod="calico-system/goldmane-9f7667bb8-52fsp" Mar 13 00:41:47.426401 kubelet[3329]: I0313 00:41:47.425592 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/924e8261-afd4-493f-949c-c57b7c9fdec4-nginx-config\") pod \"whisker-5c769f8d47-hq28p\" (UID: \"924e8261-afd4-493f-949c-c57b7c9fdec4\") " pod="calico-system/whisker-5c769f8d47-hq28p" Mar 13 00:41:47.426401 kubelet[3329]: I0313 00:41:47.425616 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/924e8261-afd4-493f-949c-c57b7c9fdec4-whisker-backend-key-pair\") pod \"whisker-5c769f8d47-hq28p\" (UID: \"924e8261-afd4-493f-949c-c57b7c9fdec4\") " pod="calico-system/whisker-5c769f8d47-hq28p" Mar 13 00:41:47.426401 kubelet[3329]: I0313 00:41:47.425701 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ca7d0db0-4f56-4491-a352-bfabf2912a53-calico-apiserver-certs\") pod \"calico-apiserver-7dcd7cbd55-7k8xf\" (UID: \"ca7d0db0-4f56-4491-a352-bfabf2912a53\") " pod="calico-system/calico-apiserver-7dcd7cbd55-7k8xf" Mar 13 00:41:47.426401 kubelet[3329]: I0313 00:41:47.425747 3329 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q7c5\" (UniqueName: \"kubernetes.io/projected/ca7d0db0-4f56-4491-a352-bfabf2912a53-kube-api-access-5q7c5\") pod \"calico-apiserver-7dcd7cbd55-7k8xf\" (UID: \"ca7d0db0-4f56-4491-a352-bfabf2912a53\") " pod="calico-system/calico-apiserver-7dcd7cbd55-7k8xf" Mar 13 00:41:47.426738 kubelet[3329]: I0313 00:41:47.425772 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bf3d678-62d5-4720-9956-5f123a7e4d24-config\") pod \"goldmane-9f7667bb8-52fsp\" (UID: \"5bf3d678-62d5-4720-9956-5f123a7e4d24\") " pod="calico-system/goldmane-9f7667bb8-52fsp" Mar 13 00:41:47.426738 kubelet[3329]: I0313 00:41:47.425794 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nfrn\" (UniqueName: \"kubernetes.io/projected/924e8261-afd4-493f-949c-c57b7c9fdec4-kube-api-access-7nfrn\") pod \"whisker-5c769f8d47-hq28p\" (UID: \"924e8261-afd4-493f-949c-c57b7c9fdec4\") " pod="calico-system/whisker-5c769f8d47-hq28p" Mar 13 00:41:47.426738 kubelet[3329]: I0313 00:41:47.425818 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/5bf3d678-62d5-4720-9956-5f123a7e4d24-goldmane-key-pair\") pod \"goldmane-9f7667bb8-52fsp\" (UID: \"5bf3d678-62d5-4720-9956-5f123a7e4d24\") " pod="calico-system/goldmane-9f7667bb8-52fsp" Mar 13 00:41:47.426738 kubelet[3329]: I0313 00:41:47.425843 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7tnf\" (UniqueName: \"kubernetes.io/projected/5bf3d678-62d5-4720-9956-5f123a7e4d24-kube-api-access-p7tnf\") pod \"goldmane-9f7667bb8-52fsp\" (UID: \"5bf3d678-62d5-4720-9956-5f123a7e4d24\") " pod="calico-system/goldmane-9f7667bb8-52fsp" Mar 13 00:41:47.426738 kubelet[3329]: 
I0313 00:41:47.425867 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/924e8261-afd4-493f-949c-c57b7c9fdec4-whisker-ca-bundle\") pod \"whisker-5c769f8d47-hq28p\" (UID: \"924e8261-afd4-493f-949c-c57b7c9fdec4\") " pod="calico-system/whisker-5c769f8d47-hq28p" Mar 13 00:41:47.558304 containerd[1987]: time="2026-03-13T00:41:47.557306599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-gb929,Uid:f539e182-c005-4e5f-b423-9765e83f57bd,Namespace:kube-system,Attempt:0,}" Mar 13 00:41:47.569682 containerd[1987]: time="2026-03-13T00:41:47.569638545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59466cf6fd-dg9jn,Uid:76c51aee-1eb6-4a51-9e52-4fb871cbe059,Namespace:calico-system,Attempt:0,}" Mar 13 00:41:47.586473 containerd[1987]: time="2026-03-13T00:41:47.586432031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-xn5k5,Uid:1d4c1b77-472e-498a-9a86-012ddbcbd0a3,Namespace:kube-system,Attempt:0,}" Mar 13 00:41:47.588104 containerd[1987]: time="2026-03-13T00:41:47.587891436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dcd7cbd55-8kvp5,Uid:a572913e-ac86-4e32-ab1d-883c425c261c,Namespace:calico-system,Attempt:0,}" Mar 13 00:41:47.654113 containerd[1987]: time="2026-03-13T00:41:47.654069776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dcd7cbd55-7k8xf,Uid:ca7d0db0-4f56-4491-a352-bfabf2912a53,Namespace:calico-system,Attempt:0,}" Mar 13 00:41:47.668328 containerd[1987]: time="2026-03-13T00:41:47.667893136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c769f8d47-hq28p,Uid:924e8261-afd4-493f-949c-c57b7c9fdec4,Namespace:calico-system,Attempt:0,}" Mar 13 00:41:47.677769 containerd[1987]: time="2026-03-13T00:41:47.677738841Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-9f7667bb8-52fsp,Uid:5bf3d678-62d5-4720-9956-5f123a7e4d24,Namespace:calico-system,Attempt:0,}" Mar 13 00:41:47.983940 containerd[1987]: time="2026-03-13T00:41:47.983896507Z" level=info msg="CreateContainer within sandbox \"7b42e84c7ec77e7f65bd58802e50927d60cfd8e11cb43b7fdf236eb2127177cd\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 13 00:41:48.009406 containerd[1987]: time="2026-03-13T00:41:48.007591687Z" level=info msg="Container e077187e42596e4991eb89e4014bfa04e11e1a7e65b80250fbf204a63170df5f: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:41:48.018148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount160767099.mount: Deactivated successfully. Mar 13 00:41:48.043500 containerd[1987]: time="2026-03-13T00:41:48.043439741Z" level=info msg="CreateContainer within sandbox \"7b42e84c7ec77e7f65bd58802e50927d60cfd8e11cb43b7fdf236eb2127177cd\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e077187e42596e4991eb89e4014bfa04e11e1a7e65b80250fbf204a63170df5f\"" Mar 13 00:41:48.047677 containerd[1987]: time="2026-03-13T00:41:48.047636381Z" level=info msg="StartContainer for \"e077187e42596e4991eb89e4014bfa04e11e1a7e65b80250fbf204a63170df5f\"" Mar 13 00:41:48.055774 containerd[1987]: time="2026-03-13T00:41:48.055154034Z" level=info msg="connecting to shim e077187e42596e4991eb89e4014bfa04e11e1a7e65b80250fbf204a63170df5f" address="unix:///run/containerd/s/f77a6d347334502ce7b1b25889c456290c435e2d57826266ad095a44413c0540" protocol=ttrpc version=3 Mar 13 00:41:48.146866 systemd[1]: Started cri-containerd-e077187e42596e4991eb89e4014bfa04e11e1a7e65b80250fbf204a63170df5f.scope - libcontainer container e077187e42596e4991eb89e4014bfa04e11e1a7e65b80250fbf204a63170df5f. 
Mar 13 00:41:48.196420 containerd[1987]: time="2026-03-13T00:41:48.196361498Z" level=error msg="Failed to destroy network for sandbox \"43b24c35c8801a2442ee676bc9ae16ae36e8f4420e15cf6b17357d035587fca9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:41:48.199142 containerd[1987]: time="2026-03-13T00:41:48.199023702Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59466cf6fd-dg9jn,Uid:76c51aee-1eb6-4a51-9e52-4fb871cbe059,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"43b24c35c8801a2442ee676bc9ae16ae36e8f4420e15cf6b17357d035587fca9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:41:48.202179 kubelet[3329]: E0313 00:41:48.201763 3329 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43b24c35c8801a2442ee676bc9ae16ae36e8f4420e15cf6b17357d035587fca9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:41:48.202179 kubelet[3329]: E0313 00:41:48.201853 3329 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43b24c35c8801a2442ee676bc9ae16ae36e8f4420e15cf6b17357d035587fca9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59466cf6fd-dg9jn" Mar 13 00:41:48.202179 kubelet[3329]: E0313 00:41:48.201880 
3329 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43b24c35c8801a2442ee676bc9ae16ae36e8f4420e15cf6b17357d035587fca9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59466cf6fd-dg9jn" Mar 13 00:41:48.203374 kubelet[3329]: E0313 00:41:48.201967 3329 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-59466cf6fd-dg9jn_calico-system(76c51aee-1eb6-4a51-9e52-4fb871cbe059)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-59466cf6fd-dg9jn_calico-system(76c51aee-1eb6-4a51-9e52-4fb871cbe059)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43b24c35c8801a2442ee676bc9ae16ae36e8f4420e15cf6b17357d035587fca9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59466cf6fd-dg9jn" podUID="76c51aee-1eb6-4a51-9e52-4fb871cbe059" Mar 13 00:41:48.240998 containerd[1987]: time="2026-03-13T00:41:48.240218964Z" level=error msg="Failed to destroy network for sandbox \"08dc20a72ae8afe915e9307749ce4627d03a97317a5236209bd6053c1d77ae6f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:41:48.243405 containerd[1987]: time="2026-03-13T00:41:48.242696873Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c769f8d47-hq28p,Uid:924e8261-afd4-493f-949c-c57b7c9fdec4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"08dc20a72ae8afe915e9307749ce4627d03a97317a5236209bd6053c1d77ae6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:41:48.243894 kubelet[3329]: E0313 00:41:48.243841 3329 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08dc20a72ae8afe915e9307749ce4627d03a97317a5236209bd6053c1d77ae6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:41:48.243983 kubelet[3329]: E0313 00:41:48.243910 3329 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08dc20a72ae8afe915e9307749ce4627d03a97317a5236209bd6053c1d77ae6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c769f8d47-hq28p" Mar 13 00:41:48.243983 kubelet[3329]: E0313 00:41:48.243935 3329 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08dc20a72ae8afe915e9307749ce4627d03a97317a5236209bd6053c1d77ae6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c769f8d47-hq28p" Mar 13 00:41:48.244067 kubelet[3329]: E0313 00:41:48.244000 3329 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5c769f8d47-hq28p_calico-system(924e8261-afd4-493f-949c-c57b7c9fdec4)\" with CreatePodSandboxError: \"Failed to create sandbox for 
pod \\\"whisker-5c769f8d47-hq28p_calico-system(924e8261-afd4-493f-949c-c57b7c9fdec4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"08dc20a72ae8afe915e9307749ce4627d03a97317a5236209bd6053c1d77ae6f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5c769f8d47-hq28p" podUID="924e8261-afd4-493f-949c-c57b7c9fdec4" Mar 13 00:41:48.273376 containerd[1987]: time="2026-03-13T00:41:48.273276041Z" level=error msg="Failed to destroy network for sandbox \"ffc9a8b8e3abc3bb6fcb3ce10288c75475a07f03110243a801b2c2b7204170a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:41:48.274753 containerd[1987]: time="2026-03-13T00:41:48.274703180Z" level=error msg="Failed to destroy network for sandbox \"153eefa4b984e06c1ad0cb3f18ca240d9a404243f517fb72018c5d8833742bd0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:41:48.275986 containerd[1987]: time="2026-03-13T00:41:48.274898400Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-52fsp,Uid:5bf3d678-62d5-4720-9956-5f123a7e4d24,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffc9a8b8e3abc3bb6fcb3ce10288c75475a07f03110243a801b2c2b7204170a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:41:48.276253 kubelet[3329]: E0313 00:41:48.276109 3329 log.go:32] "RunPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"ffc9a8b8e3abc3bb6fcb3ce10288c75475a07f03110243a801b2c2b7204170a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:41:48.276253 kubelet[3329]: E0313 00:41:48.276166 3329 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffc9a8b8e3abc3bb6fcb3ce10288c75475a07f03110243a801b2c2b7204170a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-52fsp" Mar 13 00:41:48.276253 kubelet[3329]: E0313 00:41:48.276190 3329 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffc9a8b8e3abc3bb6fcb3ce10288c75475a07f03110243a801b2c2b7204170a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-52fsp" Mar 13 00:41:48.276862 kubelet[3329]: E0313 00:41:48.276252 3329 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-52fsp_calico-system(5bf3d678-62d5-4720-9956-5f123a7e4d24)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-52fsp_calico-system(5bf3d678-62d5-4720-9956-5f123a7e4d24)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ffc9a8b8e3abc3bb6fcb3ce10288c75475a07f03110243a801b2c2b7204170a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-52fsp" podUID="5bf3d678-62d5-4720-9956-5f123a7e4d24" Mar 13 00:41:48.277077 containerd[1987]: time="2026-03-13T00:41:48.276627294Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dcd7cbd55-8kvp5,Uid:a572913e-ac86-4e32-ab1d-883c425c261c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"153eefa4b984e06c1ad0cb3f18ca240d9a404243f517fb72018c5d8833742bd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:41:48.277775 kubelet[3329]: E0313 00:41:48.277080 3329 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"153eefa4b984e06c1ad0cb3f18ca240d9a404243f517fb72018c5d8833742bd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:41:48.277775 kubelet[3329]: E0313 00:41:48.277153 3329 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"153eefa4b984e06c1ad0cb3f18ca240d9a404243f517fb72018c5d8833742bd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7dcd7cbd55-8kvp5" Mar 13 00:41:48.278621 kubelet[3329]: E0313 00:41:48.278259 3329 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"153eefa4b984e06c1ad0cb3f18ca240d9a404243f517fb72018c5d8833742bd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7dcd7cbd55-8kvp5" Mar 13 00:41:48.278860 kubelet[3329]: E0313 00:41:48.278596 3329 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7dcd7cbd55-8kvp5_calico-system(a572913e-ac86-4e32-ab1d-883c425c261c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7dcd7cbd55-8kvp5_calico-system(a572913e-ac86-4e32-ab1d-883c425c261c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"153eefa4b984e06c1ad0cb3f18ca240d9a404243f517fb72018c5d8833742bd0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7dcd7cbd55-8kvp5" podUID="a572913e-ac86-4e32-ab1d-883c425c261c" Mar 13 00:41:48.279689 containerd[1987]: time="2026-03-13T00:41:48.279472125Z" level=error msg="Failed to destroy network for sandbox \"acf4b5f47dad814b78dc72876d6a93431718fea28a683da40c5994486d89b9ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:41:48.281865 containerd[1987]: time="2026-03-13T00:41:48.281817428Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-gb929,Uid:f539e182-c005-4e5f-b423-9765e83f57bd,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"acf4b5f47dad814b78dc72876d6a93431718fea28a683da40c5994486d89b9ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:41:48.282101 kubelet[3329]: E0313 
00:41:48.282064 3329 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acf4b5f47dad814b78dc72876d6a93431718fea28a683da40c5994486d89b9ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:41:48.282170 kubelet[3329]: E0313 00:41:48.282123 3329 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acf4b5f47dad814b78dc72876d6a93431718fea28a683da40c5994486d89b9ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-gb929" Mar 13 00:41:48.282170 kubelet[3329]: E0313 00:41:48.282148 3329 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acf4b5f47dad814b78dc72876d6a93431718fea28a683da40c5994486d89b9ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-gb929" Mar 13 00:41:48.282267 kubelet[3329]: E0313 00:41:48.282217 3329 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-gb929_kube-system(f539e182-c005-4e5f-b423-9765e83f57bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-gb929_kube-system(f539e182-c005-4e5f-b423-9765e83f57bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"acf4b5f47dad814b78dc72876d6a93431718fea28a683da40c5994486d89b9ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-gb929" podUID="f539e182-c005-4e5f-b423-9765e83f57bd" Mar 13 00:41:48.289005 containerd[1987]: time="2026-03-13T00:41:48.288951592Z" level=error msg="Failed to destroy network for sandbox \"4f8a7e1ef034f9de93f524574f8032b7e281a4725bd6b95ae8bfe417d689d750\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:41:48.293503 containerd[1987]: time="2026-03-13T00:41:48.293457552Z" level=error msg="Failed to destroy network for sandbox \"b735908b4c1353bd9b0471094ba773ad56dac70c34677eff7b280d10c9baa33e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:41:48.294781 containerd[1987]: time="2026-03-13T00:41:48.294737511Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-xn5k5,Uid:1d4c1b77-472e-498a-9a86-012ddbcbd0a3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f8a7e1ef034f9de93f524574f8032b7e281a4725bd6b95ae8bfe417d689d750\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:41:48.295390 kubelet[3329]: E0313 00:41:48.295134 3329 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f8a7e1ef034f9de93f524574f8032b7e281a4725bd6b95ae8bfe417d689d750\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 
00:41:48.295390 kubelet[3329]: E0313 00:41:48.295208 3329 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f8a7e1ef034f9de93f524574f8032b7e281a4725bd6b95ae8bfe417d689d750\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-xn5k5" Mar 13 00:41:48.295390 kubelet[3329]: E0313 00:41:48.295232 3329 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f8a7e1ef034f9de93f524574f8032b7e281a4725bd6b95ae8bfe417d689d750\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-xn5k5" Mar 13 00:41:48.295681 containerd[1987]: time="2026-03-13T00:41:48.295607022Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dcd7cbd55-7k8xf,Uid:ca7d0db0-4f56-4491-a352-bfabf2912a53,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b735908b4c1353bd9b0471094ba773ad56dac70c34677eff7b280d10c9baa33e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:41:48.295846 kubelet[3329]: E0313 00:41:48.295776 3329 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b735908b4c1353bd9b0471094ba773ad56dac70c34677eff7b280d10c9baa33e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Mar 13 00:41:48.295846 kubelet[3329]: E0313 00:41:48.295815 3329 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b735908b4c1353bd9b0471094ba773ad56dac70c34677eff7b280d10c9baa33e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7dcd7cbd55-7k8xf" Mar 13 00:41:48.295846 kubelet[3329]: E0313 00:41:48.295837 3329 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b735908b4c1353bd9b0471094ba773ad56dac70c34677eff7b280d10c9baa33e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7dcd7cbd55-7k8xf" Mar 13 00:41:48.295985 kubelet[3329]: E0313 00:41:48.295894 3329 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7dcd7cbd55-7k8xf_calico-system(ca7d0db0-4f56-4491-a352-bfabf2912a53)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7dcd7cbd55-7k8xf_calico-system(ca7d0db0-4f56-4491-a352-bfabf2912a53)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b735908b4c1353bd9b0471094ba773ad56dac70c34677eff7b280d10c9baa33e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7dcd7cbd55-7k8xf" podUID="ca7d0db0-4f56-4491-a352-bfabf2912a53" Mar 13 00:41:48.296225 kubelet[3329]: E0313 00:41:48.296160 3329 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-7d764666f9-xn5k5_kube-system(1d4c1b77-472e-498a-9a86-012ddbcbd0a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-xn5k5_kube-system(1d4c1b77-472e-498a-9a86-012ddbcbd0a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f8a7e1ef034f9de93f524574f8032b7e281a4725bd6b95ae8bfe417d689d750\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-xn5k5" podUID="1d4c1b77-472e-498a-9a86-012ddbcbd0a3" Mar 13 00:41:48.308411 containerd[1987]: time="2026-03-13T00:41:48.308374534Z" level=info msg="StartContainer for \"e077187e42596e4991eb89e4014bfa04e11e1a7e65b80250fbf204a63170df5f\" returns successfully" Mar 13 00:41:48.645401 systemd[1]: Created slice kubepods-besteffort-pod91c42f68_cc08_44d5_8392_1334f9631936.slice - libcontainer container kubepods-besteffort-pod91c42f68_cc08_44d5_8392_1334f9631936.slice. 
Mar 13 00:41:48.650566 containerd[1987]: time="2026-03-13T00:41:48.650522468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s9l6b,Uid:91c42f68-cc08-44d5-8392-1334f9631936,Namespace:calico-system,Attempt:0,}" Mar 13 00:41:48.734394 containerd[1987]: time="2026-03-13T00:41:48.733170975Z" level=error msg="Failed to destroy network for sandbox \"d36bd19a225e7b26efb4c7ed2cd56f688e624bdc53e6f0c05357ae8c0b624719\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:41:48.738539 containerd[1987]: time="2026-03-13T00:41:48.737472562Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s9l6b,Uid:91c42f68-cc08-44d5-8392-1334f9631936,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d36bd19a225e7b26efb4c7ed2cd56f688e624bdc53e6f0c05357ae8c0b624719\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:41:48.739085 kubelet[3329]: E0313 00:41:48.738975 3329 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d36bd19a225e7b26efb4c7ed2cd56f688e624bdc53e6f0c05357ae8c0b624719\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:41:48.739853 kubelet[3329]: E0313 00:41:48.739054 3329 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d36bd19a225e7b26efb4c7ed2cd56f688e624bdc53e6f0c05357ae8c0b624719\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s9l6b" Mar 13 00:41:48.739853 kubelet[3329]: E0313 00:41:48.739363 3329 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d36bd19a225e7b26efb4c7ed2cd56f688e624bdc53e6f0c05357ae8c0b624719\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s9l6b" Mar 13 00:41:48.739853 kubelet[3329]: E0313 00:41:48.739552 3329 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-s9l6b_calico-system(91c42f68-cc08-44d5-8392-1334f9631936)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-s9l6b_calico-system(91c42f68-cc08-44d5-8392-1334f9631936)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d36bd19a225e7b26efb4c7ed2cd56f688e624bdc53e6f0c05357ae8c0b624719\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s9l6b" podUID="91c42f68-cc08-44d5-8392-1334f9631936" Mar 13 00:41:48.921277 systemd[1]: run-netns-cni\x2ddb8a3478\x2d8e89\x2de6cc\x2d1ebc\x2dd39686fe787f.mount: Deactivated successfully. Mar 13 00:41:48.923134 systemd[1]: run-netns-cni\x2d59082bbc\x2da20e\x2d6f3b\x2d98b9\x2d2ee3dcfcaf9b.mount: Deactivated successfully. Mar 13 00:41:48.923243 systemd[1]: run-netns-cni\x2d3fbb8213\x2d58bd\x2d9815\x2dce1d\x2d8e20d498f483.mount: Deactivated successfully. Mar 13 00:41:48.923317 systemd[1]: run-netns-cni\x2deb141d3c\x2d27bd\x2d90c1\x2dc95f\x2d41ccc9c087cd.mount: Deactivated successfully. 
Mar 13 00:41:48.923410 systemd[1]: run-netns-cni\x2dcc8bc257\x2d3a5a\x2d493d\x2d9d5b\x2d501edbc02dc1.mount: Deactivated successfully. Mar 13 00:41:48.923481 systemd[1]: run-netns-cni\x2d44a9892b\x2d0f53\x2d9647\x2d20ac\x2d40131ec21308.mount: Deactivated successfully. Mar 13 00:41:48.923549 systemd[1]: run-netns-cni\x2d861afecf\x2da0dd\x2d81e1\x2d6810\x2dc7fa3a6f0ab9.mount: Deactivated successfully. Mar 13 00:41:48.937416 kubelet[3329]: I0313 00:41:48.936585 3329 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:41:48.991609 kubelet[3329]: I0313 00:41:48.989508 3329 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-t9vks" podStartSLOduration=2.478189418 podStartE2EDuration="24.989486957s" podCreationTimestamp="2026-03-13 00:41:24 +0000 UTC" firstStartedPulling="2026-03-13 00:41:25.401273113 +0000 UTC m=+19.937518584" lastFinishedPulling="2026-03-13 00:41:47.912570632 +0000 UTC m=+42.448816123" observedRunningTime="2026-03-13 00:41:48.947518951 +0000 UTC m=+43.483764466" watchObservedRunningTime="2026-03-13 00:41:48.989486957 +0000 UTC m=+43.525732452" Mar 13 00:41:49.852671 kubelet[3329]: I0313 00:41:49.852612 3329 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/924e8261-afd4-493f-949c-c57b7c9fdec4-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/924e8261-afd4-493f-949c-c57b7c9fdec4-whisker-ca-bundle\") pod \"924e8261-afd4-493f-949c-c57b7c9fdec4\" (UID: \"924e8261-afd4-493f-949c-c57b7c9fdec4\") " Mar 13 00:41:49.852671 kubelet[3329]: I0313 00:41:49.852673 3329 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/924e8261-afd4-493f-949c-c57b7c9fdec4-nginx-config\" (UniqueName: \"kubernetes.io/configmap/924e8261-afd4-493f-949c-c57b7c9fdec4-nginx-config\") pod \"924e8261-afd4-493f-949c-c57b7c9fdec4\" (UID: \"924e8261-afd4-493f-949c-c57b7c9fdec4\") " Mar 13 00:41:49.853819 
kubelet[3329]: I0313 00:41:49.852699 3329 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/924e8261-afd4-493f-949c-c57b7c9fdec4-kube-api-access-7nfrn\" (UniqueName: \"kubernetes.io/projected/924e8261-afd4-493f-949c-c57b7c9fdec4-kube-api-access-7nfrn\") pod \"924e8261-afd4-493f-949c-c57b7c9fdec4\" (UID: \"924e8261-afd4-493f-949c-c57b7c9fdec4\") " Mar 13 00:41:49.853819 kubelet[3329]: I0313 00:41:49.852734 3329 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/924e8261-afd4-493f-949c-c57b7c9fdec4-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/924e8261-afd4-493f-949c-c57b7c9fdec4-whisker-backend-key-pair\") pod \"924e8261-afd4-493f-949c-c57b7c9fdec4\" (UID: \"924e8261-afd4-493f-949c-c57b7c9fdec4\") " Mar 13 00:41:49.855894 kubelet[3329]: I0313 00:41:49.855722 3329 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/924e8261-afd4-493f-949c-c57b7c9fdec4-nginx-config" pod "924e8261-afd4-493f-949c-c57b7c9fdec4" (UID: "924e8261-afd4-493f-949c-c57b7c9fdec4"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 13 00:41:49.856191 kubelet[3329]: I0313 00:41:49.856156 3329 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/924e8261-afd4-493f-949c-c57b7c9fdec4-whisker-ca-bundle" pod "924e8261-afd4-493f-949c-c57b7c9fdec4" (UID: "924e8261-afd4-493f-949c-c57b7c9fdec4"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 13 00:41:49.860627 kubelet[3329]: I0313 00:41:49.860540 3329 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/924e8261-afd4-493f-949c-c57b7c9fdec4-kube-api-access-7nfrn" pod "924e8261-afd4-493f-949c-c57b7c9fdec4" (UID: "924e8261-afd4-493f-949c-c57b7c9fdec4"). InnerVolumeSpecName "kube-api-access-7nfrn". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 13 00:41:49.862395 systemd[1]: var-lib-kubelet-pods-924e8261\x2dafd4\x2d493f\x2d949c\x2dc57b7c9fdec4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7nfrn.mount: Deactivated successfully. Mar 13 00:41:49.863898 kubelet[3329]: I0313 00:41:49.863741 3329 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/924e8261-afd4-493f-949c-c57b7c9fdec4-whisker-backend-key-pair" pod "924e8261-afd4-493f-949c-c57b7c9fdec4" (UID: "924e8261-afd4-493f-949c-c57b7c9fdec4"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 13 00:41:49.867254 systemd[1]: var-lib-kubelet-pods-924e8261\x2dafd4\x2d493f\x2d949c\x2dc57b7c9fdec4-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 13 00:41:49.920588 systemd[1]: Removed slice kubepods-besteffort-pod924e8261_afd4_493f_949c_c57b7c9fdec4.slice - libcontainer container kubepods-besteffort-pod924e8261_afd4_493f_949c_c57b7c9fdec4.slice. 
Mar 13 00:41:49.953614 kubelet[3329]: I0313 00:41:49.953573 3329 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/924e8261-afd4-493f-949c-c57b7c9fdec4-nginx-config\") on node \"ip-172-31-30-203\" DevicePath \"\"" Mar 13 00:41:49.953614 kubelet[3329]: I0313 00:41:49.953613 3329 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7nfrn\" (UniqueName: \"kubernetes.io/projected/924e8261-afd4-493f-949c-c57b7c9fdec4-kube-api-access-7nfrn\") on node \"ip-172-31-30-203\" DevicePath \"\"" Mar 13 00:41:49.953614 kubelet[3329]: I0313 00:41:49.953626 3329 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/924e8261-afd4-493f-949c-c57b7c9fdec4-whisker-backend-key-pair\") on node \"ip-172-31-30-203\" DevicePath \"\"" Mar 13 00:41:49.954807 kubelet[3329]: I0313 00:41:49.953639 3329 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/924e8261-afd4-493f-949c-c57b7c9fdec4-whisker-ca-bundle\") on node \"ip-172-31-30-203\" DevicePath \"\"" Mar 13 00:41:50.065592 systemd[1]: Created slice kubepods-besteffort-pod78068bbd_982a_45b8_a610_1bc6e0b58e53.slice - libcontainer container kubepods-besteffort-pod78068bbd_982a_45b8_a610_1bc6e0b58e53.slice. 
Mar 13 00:41:50.154797 kubelet[3329]: I0313 00:41:50.154745 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/78068bbd-982a-45b8-a610-1bc6e0b58e53-whisker-backend-key-pair\") pod \"whisker-b648bd75b-zbw9k\" (UID: \"78068bbd-982a-45b8-a610-1bc6e0b58e53\") " pod="calico-system/whisker-b648bd75b-zbw9k" Mar 13 00:41:50.154797 kubelet[3329]: I0313 00:41:50.154798 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78068bbd-982a-45b8-a610-1bc6e0b58e53-whisker-ca-bundle\") pod \"whisker-b648bd75b-zbw9k\" (UID: \"78068bbd-982a-45b8-a610-1bc6e0b58e53\") " pod="calico-system/whisker-b648bd75b-zbw9k" Mar 13 00:41:50.155037 kubelet[3329]: I0313 00:41:50.154837 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bfkb\" (UniqueName: \"kubernetes.io/projected/78068bbd-982a-45b8-a610-1bc6e0b58e53-kube-api-access-7bfkb\") pod \"whisker-b648bd75b-zbw9k\" (UID: \"78068bbd-982a-45b8-a610-1bc6e0b58e53\") " pod="calico-system/whisker-b648bd75b-zbw9k" Mar 13 00:41:50.155037 kubelet[3329]: I0313 00:41:50.154878 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/78068bbd-982a-45b8-a610-1bc6e0b58e53-nginx-config\") pod \"whisker-b648bd75b-zbw9k\" (UID: \"78068bbd-982a-45b8-a610-1bc6e0b58e53\") " pod="calico-system/whisker-b648bd75b-zbw9k" Mar 13 00:41:50.373384 containerd[1987]: time="2026-03-13T00:41:50.373301162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b648bd75b-zbw9k,Uid:78068bbd-982a-45b8-a610-1bc6e0b58e53,Namespace:calico-system,Attempt:0,}" Mar 13 00:41:50.636795 systemd-networkd[1838]: cali763c52279c3: Link UP Mar 13 00:41:50.641334 systemd-networkd[1838]: 
cali763c52279c3: Gained carrier Mar 13 00:41:50.652953 (udev-worker)[4728]: Network interface NamePolicy= disabled on kernel command line. Mar 13 00:41:50.655278 containerd[1987]: 2026-03-13 00:41:50.404 [ERROR][4707] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 13 00:41:50.655278 containerd[1987]: 2026-03-13 00:41:50.500 [INFO][4707] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--203-k8s-whisker--b648bd75b--zbw9k-eth0 whisker-b648bd75b- calico-system 78068bbd-982a-45b8-a610-1bc6e0b58e53 904 0 2026-03-13 00:41:49 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:b648bd75b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-30-203 whisker-b648bd75b-zbw9k eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali763c52279c3 [] [] }} ContainerID="3ef5dfbecd3b11d76a86a9536c0ae3cbb1eccce2204759c8bffe9ae569098c2a" Namespace="calico-system" Pod="whisker-b648bd75b-zbw9k" WorkloadEndpoint="ip--172--31--30--203-k8s-whisker--b648bd75b--zbw9k-" Mar 13 00:41:50.655278 containerd[1987]: 2026-03-13 00:41:50.500 [INFO][4707] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3ef5dfbecd3b11d76a86a9536c0ae3cbb1eccce2204759c8bffe9ae569098c2a" Namespace="calico-system" Pod="whisker-b648bd75b-zbw9k" WorkloadEndpoint="ip--172--31--30--203-k8s-whisker--b648bd75b--zbw9k-eth0" Mar 13 00:41:50.655278 containerd[1987]: 2026-03-13 00:41:50.551 [INFO][4718] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ef5dfbecd3b11d76a86a9536c0ae3cbb1eccce2204759c8bffe9ae569098c2a" HandleID="k8s-pod-network.3ef5dfbecd3b11d76a86a9536c0ae3cbb1eccce2204759c8bffe9ae569098c2a" 
Workload="ip--172--31--30--203-k8s-whisker--b648bd75b--zbw9k-eth0" Mar 13 00:41:50.655523 containerd[1987]: 2026-03-13 00:41:50.559 [INFO][4718] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3ef5dfbecd3b11d76a86a9536c0ae3cbb1eccce2204759c8bffe9ae569098c2a" HandleID="k8s-pod-network.3ef5dfbecd3b11d76a86a9536c0ae3cbb1eccce2204759c8bffe9ae569098c2a" Workload="ip--172--31--30--203-k8s-whisker--b648bd75b--zbw9k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef5c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-203", "pod":"whisker-b648bd75b-zbw9k", "timestamp":"2026-03-13 00:41:50.551370759 +0000 UTC"}, Hostname:"ip-172-31-30-203", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003a8f20)} Mar 13 00:41:50.655523 containerd[1987]: 2026-03-13 00:41:50.560 [INFO][4718] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:41:50.655523 containerd[1987]: 2026-03-13 00:41:50.560 [INFO][4718] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 00:41:50.655523 containerd[1987]: 2026-03-13 00:41:50.560 [INFO][4718] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-203' Mar 13 00:41:50.655523 containerd[1987]: 2026-03-13 00:41:50.562 [INFO][4718] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3ef5dfbecd3b11d76a86a9536c0ae3cbb1eccce2204759c8bffe9ae569098c2a" host="ip-172-31-30-203" Mar 13 00:41:50.655523 containerd[1987]: 2026-03-13 00:41:50.567 [INFO][4718] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-30-203" Mar 13 00:41:50.655523 containerd[1987]: 2026-03-13 00:41:50.572 [INFO][4718] ipam/ipam.go 526: Trying affinity for 192.168.43.128/26 host="ip-172-31-30-203" Mar 13 00:41:50.655523 containerd[1987]: 2026-03-13 00:41:50.575 [INFO][4718] ipam/ipam.go 160: Attempting to load block cidr=192.168.43.128/26 host="ip-172-31-30-203" Mar 13 00:41:50.655523 containerd[1987]: 2026-03-13 00:41:50.577 [INFO][4718] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.43.128/26 host="ip-172-31-30-203" Mar 13 00:41:50.655847 containerd[1987]: 2026-03-13 00:41:50.577 [INFO][4718] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.43.128/26 handle="k8s-pod-network.3ef5dfbecd3b11d76a86a9536c0ae3cbb1eccce2204759c8bffe9ae569098c2a" host="ip-172-31-30-203" Mar 13 00:41:50.655847 containerd[1987]: 2026-03-13 00:41:50.579 [INFO][4718] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3ef5dfbecd3b11d76a86a9536c0ae3cbb1eccce2204759c8bffe9ae569098c2a Mar 13 00:41:50.655847 containerd[1987]: 2026-03-13 00:41:50.586 [INFO][4718] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.43.128/26 handle="k8s-pod-network.3ef5dfbecd3b11d76a86a9536c0ae3cbb1eccce2204759c8bffe9ae569098c2a" host="ip-172-31-30-203" Mar 13 00:41:50.655847 containerd[1987]: 2026-03-13 00:41:50.594 [INFO][4718] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.43.129/26] block=192.168.43.128/26 
handle="k8s-pod-network.3ef5dfbecd3b11d76a86a9536c0ae3cbb1eccce2204759c8bffe9ae569098c2a" host="ip-172-31-30-203" Mar 13 00:41:50.655847 containerd[1987]: 2026-03-13 00:41:50.594 [INFO][4718] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.43.129/26] handle="k8s-pod-network.3ef5dfbecd3b11d76a86a9536c0ae3cbb1eccce2204759c8bffe9ae569098c2a" host="ip-172-31-30-203" Mar 13 00:41:50.655847 containerd[1987]: 2026-03-13 00:41:50.595 [INFO][4718] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:41:50.655847 containerd[1987]: 2026-03-13 00:41:50.595 [INFO][4718] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.43.129/26] IPv6=[] ContainerID="3ef5dfbecd3b11d76a86a9536c0ae3cbb1eccce2204759c8bffe9ae569098c2a" HandleID="k8s-pod-network.3ef5dfbecd3b11d76a86a9536c0ae3cbb1eccce2204759c8bffe9ae569098c2a" Workload="ip--172--31--30--203-k8s-whisker--b648bd75b--zbw9k-eth0" Mar 13 00:41:50.656088 containerd[1987]: 2026-03-13 00:41:50.599 [INFO][4707] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3ef5dfbecd3b11d76a86a9536c0ae3cbb1eccce2204759c8bffe9ae569098c2a" Namespace="calico-system" Pod="whisker-b648bd75b-zbw9k" WorkloadEndpoint="ip--172--31--30--203-k8s-whisker--b648bd75b--zbw9k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--203-k8s-whisker--b648bd75b--zbw9k-eth0", GenerateName:"whisker-b648bd75b-", Namespace:"calico-system", SelfLink:"", UID:"78068bbd-982a-45b8-a610-1bc6e0b58e53", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 41, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"b648bd75b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-203", ContainerID:"", Pod:"whisker-b648bd75b-zbw9k", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.43.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali763c52279c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:41:50.656088 containerd[1987]: 2026-03-13 00:41:50.599 [INFO][4707] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.129/32] ContainerID="3ef5dfbecd3b11d76a86a9536c0ae3cbb1eccce2204759c8bffe9ae569098c2a" Namespace="calico-system" Pod="whisker-b648bd75b-zbw9k" WorkloadEndpoint="ip--172--31--30--203-k8s-whisker--b648bd75b--zbw9k-eth0" Mar 13 00:41:50.656218 containerd[1987]: 2026-03-13 00:41:50.599 [INFO][4707] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali763c52279c3 ContainerID="3ef5dfbecd3b11d76a86a9536c0ae3cbb1eccce2204759c8bffe9ae569098c2a" Namespace="calico-system" Pod="whisker-b648bd75b-zbw9k" WorkloadEndpoint="ip--172--31--30--203-k8s-whisker--b648bd75b--zbw9k-eth0" Mar 13 00:41:50.656218 containerd[1987]: 2026-03-13 00:41:50.625 [INFO][4707] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ef5dfbecd3b11d76a86a9536c0ae3cbb1eccce2204759c8bffe9ae569098c2a" Namespace="calico-system" Pod="whisker-b648bd75b-zbw9k" WorkloadEndpoint="ip--172--31--30--203-k8s-whisker--b648bd75b--zbw9k-eth0" Mar 13 00:41:50.656292 containerd[1987]: 2026-03-13 00:41:50.626 [INFO][4707] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3ef5dfbecd3b11d76a86a9536c0ae3cbb1eccce2204759c8bffe9ae569098c2a" Namespace="calico-system" 
Pod="whisker-b648bd75b-zbw9k" WorkloadEndpoint="ip--172--31--30--203-k8s-whisker--b648bd75b--zbw9k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--203-k8s-whisker--b648bd75b--zbw9k-eth0", GenerateName:"whisker-b648bd75b-", Namespace:"calico-system", SelfLink:"", UID:"78068bbd-982a-45b8-a610-1bc6e0b58e53", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 41, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"b648bd75b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-203", ContainerID:"3ef5dfbecd3b11d76a86a9536c0ae3cbb1eccce2204759c8bffe9ae569098c2a", Pod:"whisker-b648bd75b-zbw9k", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.43.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali763c52279c3", MAC:"02:7e:2d:ab:7e:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:41:50.657354 containerd[1987]: 2026-03-13 00:41:50.649 [INFO][4707] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3ef5dfbecd3b11d76a86a9536c0ae3cbb1eccce2204759c8bffe9ae569098c2a" Namespace="calico-system" Pod="whisker-b648bd75b-zbw9k" WorkloadEndpoint="ip--172--31--30--203-k8s-whisker--b648bd75b--zbw9k-eth0" Mar 13 00:41:50.743075 containerd[1987]: 
time="2026-03-13T00:41:50.742999815Z" level=info msg="connecting to shim 3ef5dfbecd3b11d76a86a9536c0ae3cbb1eccce2204759c8bffe9ae569098c2a" address="unix:///run/containerd/s/7ba6418611f8f6db233178bba211a6cdc2543d4ee55954004e0dbe27897a73b6" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:41:50.788550 systemd[1]: Started cri-containerd-3ef5dfbecd3b11d76a86a9536c0ae3cbb1eccce2204759c8bffe9ae569098c2a.scope - libcontainer container 3ef5dfbecd3b11d76a86a9536c0ae3cbb1eccce2204759c8bffe9ae569098c2a. Mar 13 00:41:50.847460 containerd[1987]: time="2026-03-13T00:41:50.847403615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b648bd75b-zbw9k,Uid:78068bbd-982a-45b8-a610-1bc6e0b58e53,Namespace:calico-system,Attempt:0,} returns sandbox id \"3ef5dfbecd3b11d76a86a9536c0ae3cbb1eccce2204759c8bffe9ae569098c2a\"" Mar 13 00:41:50.851513 containerd[1987]: time="2026-03-13T00:41:50.850767121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 13 00:41:51.646151 kubelet[3329]: I0313 00:41:51.646109 3329 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="924e8261-afd4-493f-949c-c57b7c9fdec4" path="/var/lib/kubelet/pods/924e8261-afd4-493f-949c-c57b7c9fdec4/volumes" Mar 13 00:41:51.693681 systemd-networkd[1838]: cali763c52279c3: Gained IPv6LL Mar 13 00:41:53.126651 systemd-networkd[1838]: vxlan.calico: Link UP Mar 13 00:41:53.126661 systemd-networkd[1838]: vxlan.calico: Gained carrier Mar 13 00:41:53.284281 (udev-worker)[4727]: Network interface NamePolicy= disabled on kernel command line. 
Mar 13 00:41:53.480044 containerd[1987]: time="2026-03-13T00:41:53.479688618Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 13 00:41:53.539449 containerd[1987]: time="2026-03-13T00:41:53.539405070Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:53.553373 containerd[1987]: time="2026-03-13T00:41:53.552849264Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:53.553803 containerd[1987]: time="2026-03-13T00:41:53.553762528Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:53.561177 containerd[1987]: time="2026-03-13T00:41:53.561098106Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 2.699005927s" Mar 13 00:41:53.561177 containerd[1987]: time="2026-03-13T00:41:53.561184113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 13 00:41:53.709577 containerd[1987]: time="2026-03-13T00:41:53.708832136Z" level=info msg="CreateContainer within sandbox \"3ef5dfbecd3b11d76a86a9536c0ae3cbb1eccce2204759c8bffe9ae569098c2a\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 13 00:41:53.733398 containerd[1987]: time="2026-03-13T00:41:53.731501523Z" level=info 
msg="Container cf955dd50e9e87dbe94d61790168b217d06c4082176bade5b5abcb5013a7599f: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:41:53.793599 containerd[1987]: time="2026-03-13T00:41:53.793509798Z" level=info msg="CreateContainer within sandbox \"3ef5dfbecd3b11d76a86a9536c0ae3cbb1eccce2204759c8bffe9ae569098c2a\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"cf955dd50e9e87dbe94d61790168b217d06c4082176bade5b5abcb5013a7599f\"" Mar 13 00:41:53.794356 containerd[1987]: time="2026-03-13T00:41:53.794239455Z" level=info msg="StartContainer for \"cf955dd50e9e87dbe94d61790168b217d06c4082176bade5b5abcb5013a7599f\"" Mar 13 00:41:53.799365 containerd[1987]: time="2026-03-13T00:41:53.798097953Z" level=info msg="connecting to shim cf955dd50e9e87dbe94d61790168b217d06c4082176bade5b5abcb5013a7599f" address="unix:///run/containerd/s/7ba6418611f8f6db233178bba211a6cdc2543d4ee55954004e0dbe27897a73b6" protocol=ttrpc version=3 Mar 13 00:41:53.838903 systemd[1]: Started cri-containerd-cf955dd50e9e87dbe94d61790168b217d06c4082176bade5b5abcb5013a7599f.scope - libcontainer container cf955dd50e9e87dbe94d61790168b217d06c4082176bade5b5abcb5013a7599f. Mar 13 00:41:53.965223 containerd[1987]: time="2026-03-13T00:41:53.965156557Z" level=info msg="StartContainer for \"cf955dd50e9e87dbe94d61790168b217d06c4082176bade5b5abcb5013a7599f\" returns successfully" Mar 13 00:41:53.984781 containerd[1987]: time="2026-03-13T00:41:53.984648553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 13 00:41:54.310719 systemd-networkd[1838]: vxlan.calico: Gained IPv6LL Mar 13 00:41:56.095808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2779428553.mount: Deactivated successfully. 
Mar 13 00:41:56.114431 containerd[1987]: time="2026-03-13T00:41:56.114372002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:56.115696 containerd[1987]: time="2026-03-13T00:41:56.115517754Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 13 00:41:56.116904 containerd[1987]: time="2026-03-13T00:41:56.116865149Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:56.138140 containerd[1987]: time="2026-03-13T00:41:56.138032070Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:56.142237 containerd[1987]: time="2026-03-13T00:41:56.141720080Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.157005142s" Mar 13 00:41:56.142237 containerd[1987]: time="2026-03-13T00:41:56.141769107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 13 00:41:56.151635 containerd[1987]: time="2026-03-13T00:41:56.151582195Z" level=info msg="CreateContainer within sandbox \"3ef5dfbecd3b11d76a86a9536c0ae3cbb1eccce2204759c8bffe9ae569098c2a\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 13 00:41:56.165450 
containerd[1987]: time="2026-03-13T00:41:56.164488503Z" level=info msg="Container 7ffcdc31d172f637187c18369f95169051be38aa03a34b0c04892d4bbd471fb2: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:41:56.184927 containerd[1987]: time="2026-03-13T00:41:56.184867693Z" level=info msg="CreateContainer within sandbox \"3ef5dfbecd3b11d76a86a9536c0ae3cbb1eccce2204759c8bffe9ae569098c2a\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"7ffcdc31d172f637187c18369f95169051be38aa03a34b0c04892d4bbd471fb2\"" Mar 13 00:41:56.186440 containerd[1987]: time="2026-03-13T00:41:56.185848503Z" level=info msg="StartContainer for \"7ffcdc31d172f637187c18369f95169051be38aa03a34b0c04892d4bbd471fb2\"" Mar 13 00:41:56.187899 containerd[1987]: time="2026-03-13T00:41:56.187858945Z" level=info msg="connecting to shim 7ffcdc31d172f637187c18369f95169051be38aa03a34b0c04892d4bbd471fb2" address="unix:///run/containerd/s/7ba6418611f8f6db233178bba211a6cdc2543d4ee55954004e0dbe27897a73b6" protocol=ttrpc version=3 Mar 13 00:41:56.215580 systemd[1]: Started cri-containerd-7ffcdc31d172f637187c18369f95169051be38aa03a34b0c04892d4bbd471fb2.scope - libcontainer container 7ffcdc31d172f637187c18369f95169051be38aa03a34b0c04892d4bbd471fb2. 
Mar 13 00:41:56.275357 containerd[1987]: time="2026-03-13T00:41:56.275298998Z" level=info msg="StartContainer for \"7ffcdc31d172f637187c18369f95169051be38aa03a34b0c04892d4bbd471fb2\" returns successfully" Mar 13 00:41:56.930190 ntpd[2236]: Listen normally on 6 vxlan.calico 192.168.43.128:123 Mar 13 00:41:56.930296 ntpd[2236]: Listen normally on 7 cali763c52279c3 [fe80::ecee:eeff:feee:eeee%4]:123 Mar 13 00:41:56.931420 ntpd[2236]: 13 Mar 00:41:56 ntpd[2236]: Listen normally on 6 vxlan.calico 192.168.43.128:123 Mar 13 00:41:56.931420 ntpd[2236]: 13 Mar 00:41:56 ntpd[2236]: Listen normally on 7 cali763c52279c3 [fe80::ecee:eeff:feee:eeee%4]:123 Mar 13 00:41:56.931420 ntpd[2236]: 13 Mar 00:41:56 ntpd[2236]: Listen normally on 8 vxlan.calico [fe80::64ba:8ff:fe3d:1da9%5]:123 Mar 13 00:41:56.930327 ntpd[2236]: Listen normally on 8 vxlan.calico [fe80::64ba:8ff:fe3d:1da9%5]:123 Mar 13 00:41:59.645029 containerd[1987]: time="2026-03-13T00:41:59.644969569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dcd7cbd55-7k8xf,Uid:ca7d0db0-4f56-4491-a352-bfabf2912a53,Namespace:calico-system,Attempt:0,}" Mar 13 00:41:59.964917 systemd-networkd[1838]: cali273fd46e37e: Link UP Mar 13 00:41:59.965149 systemd-networkd[1838]: cali273fd46e37e: Gained carrier Mar 13 00:41:59.971518 (udev-worker)[5107]: Network interface NamePolicy= disabled on kernel command line. 
Mar 13 00:41:59.989653 containerd[1987]: 2026-03-13 00:41:59.778 [INFO][5089] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--203-k8s-calico--apiserver--7dcd7cbd55--7k8xf-eth0 calico-apiserver-7dcd7cbd55- calico-system ca7d0db0-4f56-4491-a352-bfabf2912a53 839 0 2026-03-13 00:41:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7dcd7cbd55 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-30-203 calico-apiserver-7dcd7cbd55-7k8xf eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali273fd46e37e [] [] }} ContainerID="cd765525426c118da02250e4f95dffbe273ded791a3ae95dd81864694178038b" Namespace="calico-system" Pod="calico-apiserver-7dcd7cbd55-7k8xf" WorkloadEndpoint="ip--172--31--30--203-k8s-calico--apiserver--7dcd7cbd55--7k8xf-" Mar 13 00:41:59.989653 containerd[1987]: 2026-03-13 00:41:59.779 [INFO][5089] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cd765525426c118da02250e4f95dffbe273ded791a3ae95dd81864694178038b" Namespace="calico-system" Pod="calico-apiserver-7dcd7cbd55-7k8xf" WorkloadEndpoint="ip--172--31--30--203-k8s-calico--apiserver--7dcd7cbd55--7k8xf-eth0" Mar 13 00:41:59.989653 containerd[1987]: 2026-03-13 00:41:59.908 [INFO][5100] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd765525426c118da02250e4f95dffbe273ded791a3ae95dd81864694178038b" HandleID="k8s-pod-network.cd765525426c118da02250e4f95dffbe273ded791a3ae95dd81864694178038b" Workload="ip--172--31--30--203-k8s-calico--apiserver--7dcd7cbd55--7k8xf-eth0" Mar 13 00:41:59.991230 containerd[1987]: 2026-03-13 00:41:59.918 [INFO][5100] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="cd765525426c118da02250e4f95dffbe273ded791a3ae95dd81864694178038b" 
HandleID="k8s-pod-network.cd765525426c118da02250e4f95dffbe273ded791a3ae95dd81864694178038b" Workload="ip--172--31--30--203-k8s-calico--apiserver--7dcd7cbd55--7k8xf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011c270), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-203", "pod":"calico-apiserver-7dcd7cbd55-7k8xf", "timestamp":"2026-03-13 00:41:59.908012883 +0000 UTC"}, Hostname:"ip-172-31-30-203", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00039a420)} Mar 13 00:41:59.991230 containerd[1987]: 2026-03-13 00:41:59.918 [INFO][5100] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:41:59.991230 containerd[1987]: 2026-03-13 00:41:59.919 [INFO][5100] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:41:59.991230 containerd[1987]: 2026-03-13 00:41:59.919 [INFO][5100] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-203' Mar 13 00:41:59.991230 containerd[1987]: 2026-03-13 00:41:59.923 [INFO][5100] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.cd765525426c118da02250e4f95dffbe273ded791a3ae95dd81864694178038b" host="ip-172-31-30-203" Mar 13 00:41:59.991230 containerd[1987]: 2026-03-13 00:41:59.930 [INFO][5100] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-30-203" Mar 13 00:41:59.991230 containerd[1987]: 2026-03-13 00:41:59.937 [INFO][5100] ipam/ipam.go 526: Trying affinity for 192.168.43.128/26 host="ip-172-31-30-203" Mar 13 00:41:59.991230 containerd[1987]: 2026-03-13 00:41:59.939 [INFO][5100] ipam/ipam.go 160: Attempting to load block cidr=192.168.43.128/26 host="ip-172-31-30-203" Mar 13 00:41:59.991230 containerd[1987]: 2026-03-13 00:41:59.941 [INFO][5100] ipam/ipam.go 237: Affinity is confirmed and block has 
been loaded cidr=192.168.43.128/26 host="ip-172-31-30-203" Mar 13 00:41:59.991844 kubelet[3329]: I0313 00:41:59.987824 3329 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-b648bd75b-zbw9k" podStartSLOduration=5.690953445 podStartE2EDuration="10.985619859s" podCreationTimestamp="2026-03-13 00:41:49 +0000 UTC" firstStartedPulling="2026-03-13 00:41:50.849601914 +0000 UTC m=+45.385847384" lastFinishedPulling="2026-03-13 00:41:56.144268314 +0000 UTC m=+50.680513798" observedRunningTime="2026-03-13 00:41:57.084906328 +0000 UTC m=+51.621151822" watchObservedRunningTime="2026-03-13 00:41:59.985619859 +0000 UTC m=+54.521865352" Mar 13 00:41:59.992285 containerd[1987]: 2026-03-13 00:41:59.941 [INFO][5100] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.43.128/26 handle="k8s-pod-network.cd765525426c118da02250e4f95dffbe273ded791a3ae95dd81864694178038b" host="ip-172-31-30-203" Mar 13 00:41:59.992285 containerd[1987]: 2026-03-13 00:41:59.943 [INFO][5100] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.cd765525426c118da02250e4f95dffbe273ded791a3ae95dd81864694178038b Mar 13 00:41:59.992285 containerd[1987]: 2026-03-13 00:41:59.950 [INFO][5100] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.43.128/26 handle="k8s-pod-network.cd765525426c118da02250e4f95dffbe273ded791a3ae95dd81864694178038b" host="ip-172-31-30-203" Mar 13 00:41:59.992285 containerd[1987]: 2026-03-13 00:41:59.956 [INFO][5100] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.43.130/26] block=192.168.43.128/26 handle="k8s-pod-network.cd765525426c118da02250e4f95dffbe273ded791a3ae95dd81864694178038b" host="ip-172-31-30-203" Mar 13 00:41:59.992285 containerd[1987]: 2026-03-13 00:41:59.956 [INFO][5100] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.43.130/26] handle="k8s-pod-network.cd765525426c118da02250e4f95dffbe273ded791a3ae95dd81864694178038b" host="ip-172-31-30-203" Mar 13 00:41:59.992285 
containerd[1987]: 2026-03-13 00:41:59.956 [INFO][5100] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:41:59.992285 containerd[1987]: 2026-03-13 00:41:59.956 [INFO][5100] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.43.130/26] IPv6=[] ContainerID="cd765525426c118da02250e4f95dffbe273ded791a3ae95dd81864694178038b" HandleID="k8s-pod-network.cd765525426c118da02250e4f95dffbe273ded791a3ae95dd81864694178038b" Workload="ip--172--31--30--203-k8s-calico--apiserver--7dcd7cbd55--7k8xf-eth0" Mar 13 00:41:59.994540 containerd[1987]: 2026-03-13 00:41:59.959 [INFO][5089] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cd765525426c118da02250e4f95dffbe273ded791a3ae95dd81864694178038b" Namespace="calico-system" Pod="calico-apiserver-7dcd7cbd55-7k8xf" WorkloadEndpoint="ip--172--31--30--203-k8s-calico--apiserver--7dcd7cbd55--7k8xf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--203-k8s-calico--apiserver--7dcd7cbd55--7k8xf-eth0", GenerateName:"calico-apiserver-7dcd7cbd55-", Namespace:"calico-system", SelfLink:"", UID:"ca7d0db0-4f56-4491-a352-bfabf2912a53", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 41, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dcd7cbd55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-203", ContainerID:"", Pod:"calico-apiserver-7dcd7cbd55-7k8xf", 
Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali273fd46e37e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:41:59.994691 containerd[1987]: 2026-03-13 00:41:59.959 [INFO][5089] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.130/32] ContainerID="cd765525426c118da02250e4f95dffbe273ded791a3ae95dd81864694178038b" Namespace="calico-system" Pod="calico-apiserver-7dcd7cbd55-7k8xf" WorkloadEndpoint="ip--172--31--30--203-k8s-calico--apiserver--7dcd7cbd55--7k8xf-eth0" Mar 13 00:41:59.994691 containerd[1987]: 2026-03-13 00:41:59.959 [INFO][5089] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali273fd46e37e ContainerID="cd765525426c118da02250e4f95dffbe273ded791a3ae95dd81864694178038b" Namespace="calico-system" Pod="calico-apiserver-7dcd7cbd55-7k8xf" WorkloadEndpoint="ip--172--31--30--203-k8s-calico--apiserver--7dcd7cbd55--7k8xf-eth0" Mar 13 00:41:59.994691 containerd[1987]: 2026-03-13 00:41:59.966 [INFO][5089] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cd765525426c118da02250e4f95dffbe273ded791a3ae95dd81864694178038b" Namespace="calico-system" Pod="calico-apiserver-7dcd7cbd55-7k8xf" WorkloadEndpoint="ip--172--31--30--203-k8s-calico--apiserver--7dcd7cbd55--7k8xf-eth0" Mar 13 00:41:59.994982 containerd[1987]: 2026-03-13 00:41:59.966 [INFO][5089] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cd765525426c118da02250e4f95dffbe273ded791a3ae95dd81864694178038b" Namespace="calico-system" Pod="calico-apiserver-7dcd7cbd55-7k8xf" WorkloadEndpoint="ip--172--31--30--203-k8s-calico--apiserver--7dcd7cbd55--7k8xf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--203-k8s-calico--apiserver--7dcd7cbd55--7k8xf-eth0", GenerateName:"calico-apiserver-7dcd7cbd55-", Namespace:"calico-system", SelfLink:"", UID:"ca7d0db0-4f56-4491-a352-bfabf2912a53", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 41, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dcd7cbd55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-203", ContainerID:"cd765525426c118da02250e4f95dffbe273ded791a3ae95dd81864694178038b", Pod:"calico-apiserver-7dcd7cbd55-7k8xf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali273fd46e37e", MAC:"fe:25:17:3d:ac:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:41:59.995119 containerd[1987]: 2026-03-13 00:41:59.984 [INFO][5089] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cd765525426c118da02250e4f95dffbe273ded791a3ae95dd81864694178038b" Namespace="calico-system" Pod="calico-apiserver-7dcd7cbd55-7k8xf" WorkloadEndpoint="ip--172--31--30--203-k8s-calico--apiserver--7dcd7cbd55--7k8xf-eth0" Mar 13 00:42:00.182264 containerd[1987]: time="2026-03-13T00:42:00.182101384Z" level=info msg="connecting to shim 
cd765525426c118da02250e4f95dffbe273ded791a3ae95dd81864694178038b" address="unix:///run/containerd/s/34bc2bfc0b3574e761e3d8776ba197452b3b0f21a846d9296d0d034e3a55fff5" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:42:00.226603 systemd[1]: Started cri-containerd-cd765525426c118da02250e4f95dffbe273ded791a3ae95dd81864694178038b.scope - libcontainer container cd765525426c118da02250e4f95dffbe273ded791a3ae95dd81864694178038b. Mar 13 00:42:00.291132 containerd[1987]: time="2026-03-13T00:42:00.291084197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dcd7cbd55-7k8xf,Uid:ca7d0db0-4f56-4491-a352-bfabf2912a53,Namespace:calico-system,Attempt:0,} returns sandbox id \"cd765525426c118da02250e4f95dffbe273ded791a3ae95dd81864694178038b\"" Mar 13 00:42:00.293636 containerd[1987]: time="2026-03-13T00:42:00.293407320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 13 00:42:00.654706 containerd[1987]: time="2026-03-13T00:42:00.654663200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s9l6b,Uid:91c42f68-cc08-44d5-8392-1334f9631936,Namespace:calico-system,Attempt:0,}" Mar 13 00:42:00.814991 systemd[1]: Started sshd@7-172.31.30.203:22-20.161.92.111:37024.service - OpenSSH per-connection server daemon (20.161.92.111:37024). 
Mar 13 00:42:00.921143 systemd-networkd[1838]: cali1d1ce3421a2: Link UP Mar 13 00:42:00.924114 systemd-networkd[1838]: cali1d1ce3421a2: Gained carrier Mar 13 00:42:00.952452 containerd[1987]: 2026-03-13 00:42:00.756 [INFO][5185] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--203-k8s-csi--node--driver--s9l6b-eth0 csi-node-driver- calico-system 91c42f68-cc08-44d5-8392-1334f9631936 687 0 2026-03-13 00:41:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-30-203 csi-node-driver-s9l6b eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1d1ce3421a2 [] [] }} ContainerID="575da387dc7d2fc9025e67e5c1402c28c966dad63d6809708be054ce7eea671f" Namespace="calico-system" Pod="csi-node-driver-s9l6b" WorkloadEndpoint="ip--172--31--30--203-k8s-csi--node--driver--s9l6b-" Mar 13 00:42:00.952452 containerd[1987]: 2026-03-13 00:42:00.756 [INFO][5185] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="575da387dc7d2fc9025e67e5c1402c28c966dad63d6809708be054ce7eea671f" Namespace="calico-system" Pod="csi-node-driver-s9l6b" WorkloadEndpoint="ip--172--31--30--203-k8s-csi--node--driver--s9l6b-eth0" Mar 13 00:42:00.952452 containerd[1987]: 2026-03-13 00:42:00.818 [INFO][5200] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="575da387dc7d2fc9025e67e5c1402c28c966dad63d6809708be054ce7eea671f" HandleID="k8s-pod-network.575da387dc7d2fc9025e67e5c1402c28c966dad63d6809708be054ce7eea671f" Workload="ip--172--31--30--203-k8s-csi--node--driver--s9l6b-eth0" Mar 13 00:42:00.952743 containerd[1987]: 2026-03-13 00:42:00.843 [INFO][5200] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="575da387dc7d2fc9025e67e5c1402c28c966dad63d6809708be054ce7eea671f" HandleID="k8s-pod-network.575da387dc7d2fc9025e67e5c1402c28c966dad63d6809708be054ce7eea671f" Workload="ip--172--31--30--203-k8s-csi--node--driver--s9l6b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277520), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-203", "pod":"csi-node-driver-s9l6b", "timestamp":"2026-03-13 00:42:00.818060048 +0000 UTC"}, Hostname:"ip-172-31-30-203", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003151e0)} Mar 13 00:42:00.952743 containerd[1987]: 2026-03-13 00:42:00.844 [INFO][5200] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:42:00.952743 containerd[1987]: 2026-03-13 00:42:00.844 [INFO][5200] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 00:42:00.952743 containerd[1987]: 2026-03-13 00:42:00.844 [INFO][5200] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-203' Mar 13 00:42:00.952743 containerd[1987]: 2026-03-13 00:42:00.853 [INFO][5200] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.575da387dc7d2fc9025e67e5c1402c28c966dad63d6809708be054ce7eea671f" host="ip-172-31-30-203" Mar 13 00:42:00.952743 containerd[1987]: 2026-03-13 00:42:00.859 [INFO][5200] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-30-203" Mar 13 00:42:00.952743 containerd[1987]: 2026-03-13 00:42:00.868 [INFO][5200] ipam/ipam.go 526: Trying affinity for 192.168.43.128/26 host="ip-172-31-30-203" Mar 13 00:42:00.952743 containerd[1987]: 2026-03-13 00:42:00.875 [INFO][5200] ipam/ipam.go 160: Attempting to load block cidr=192.168.43.128/26 host="ip-172-31-30-203" Mar 13 00:42:00.952743 containerd[1987]: 2026-03-13 00:42:00.878 [INFO][5200] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.43.128/26 host="ip-172-31-30-203" Mar 13 00:42:00.954359 containerd[1987]: 2026-03-13 00:42:00.879 [INFO][5200] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.43.128/26 handle="k8s-pod-network.575da387dc7d2fc9025e67e5c1402c28c966dad63d6809708be054ce7eea671f" host="ip-172-31-30-203" Mar 13 00:42:00.954359 containerd[1987]: 2026-03-13 00:42:00.881 [INFO][5200] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.575da387dc7d2fc9025e67e5c1402c28c966dad63d6809708be054ce7eea671f Mar 13 00:42:00.954359 containerd[1987]: 2026-03-13 00:42:00.894 [INFO][5200] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.43.128/26 handle="k8s-pod-network.575da387dc7d2fc9025e67e5c1402c28c966dad63d6809708be054ce7eea671f" host="ip-172-31-30-203" Mar 13 00:42:00.954359 containerd[1987]: 2026-03-13 00:42:00.905 [INFO][5200] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.43.131/26] block=192.168.43.128/26 
handle="k8s-pod-network.575da387dc7d2fc9025e67e5c1402c28c966dad63d6809708be054ce7eea671f" host="ip-172-31-30-203" Mar 13 00:42:00.954359 containerd[1987]: 2026-03-13 00:42:00.906 [INFO][5200] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.43.131/26] handle="k8s-pod-network.575da387dc7d2fc9025e67e5c1402c28c966dad63d6809708be054ce7eea671f" host="ip-172-31-30-203" Mar 13 00:42:00.954359 containerd[1987]: 2026-03-13 00:42:00.906 [INFO][5200] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:42:00.954359 containerd[1987]: 2026-03-13 00:42:00.906 [INFO][5200] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.43.131/26] IPv6=[] ContainerID="575da387dc7d2fc9025e67e5c1402c28c966dad63d6809708be054ce7eea671f" HandleID="k8s-pod-network.575da387dc7d2fc9025e67e5c1402c28c966dad63d6809708be054ce7eea671f" Workload="ip--172--31--30--203-k8s-csi--node--driver--s9l6b-eth0" Mar 13 00:42:00.954674 containerd[1987]: 2026-03-13 00:42:00.915 [INFO][5185] cni-plugin/k8s.go 418: Populated endpoint ContainerID="575da387dc7d2fc9025e67e5c1402c28c966dad63d6809708be054ce7eea671f" Namespace="calico-system" Pod="csi-node-driver-s9l6b" WorkloadEndpoint="ip--172--31--30--203-k8s-csi--node--driver--s9l6b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--203-k8s-csi--node--driver--s9l6b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"91c42f68-cc08-44d5-8392-1334f9631936", ResourceVersion:"687", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 41, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-203", ContainerID:"", Pod:"csi-node-driver-s9l6b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.43.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1d1ce3421a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:42:00.954783 containerd[1987]: 2026-03-13 00:42:00.915 [INFO][5185] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.131/32] ContainerID="575da387dc7d2fc9025e67e5c1402c28c966dad63d6809708be054ce7eea671f" Namespace="calico-system" Pod="csi-node-driver-s9l6b" WorkloadEndpoint="ip--172--31--30--203-k8s-csi--node--driver--s9l6b-eth0" Mar 13 00:42:00.954783 containerd[1987]: 2026-03-13 00:42:00.915 [INFO][5185] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1d1ce3421a2 ContainerID="575da387dc7d2fc9025e67e5c1402c28c966dad63d6809708be054ce7eea671f" Namespace="calico-system" Pod="csi-node-driver-s9l6b" WorkloadEndpoint="ip--172--31--30--203-k8s-csi--node--driver--s9l6b-eth0" Mar 13 00:42:00.954783 containerd[1987]: 2026-03-13 00:42:00.924 [INFO][5185] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="575da387dc7d2fc9025e67e5c1402c28c966dad63d6809708be054ce7eea671f" Namespace="calico-system" Pod="csi-node-driver-s9l6b" WorkloadEndpoint="ip--172--31--30--203-k8s-csi--node--driver--s9l6b-eth0" Mar 13 00:42:00.954906 containerd[1987]: 2026-03-13 00:42:00.926 [INFO][5185] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="575da387dc7d2fc9025e67e5c1402c28c966dad63d6809708be054ce7eea671f" Namespace="calico-system" Pod="csi-node-driver-s9l6b" WorkloadEndpoint="ip--172--31--30--203-k8s-csi--node--driver--s9l6b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--203-k8s-csi--node--driver--s9l6b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"91c42f68-cc08-44d5-8392-1334f9631936", ResourceVersion:"687", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 41, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-203", ContainerID:"575da387dc7d2fc9025e67e5c1402c28c966dad63d6809708be054ce7eea671f", Pod:"csi-node-driver-s9l6b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.43.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1d1ce3421a2", MAC:"6e:90:c6:fa:08:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:42:00.955007 containerd[1987]: 2026-03-13 00:42:00.947 [INFO][5185] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="575da387dc7d2fc9025e67e5c1402c28c966dad63d6809708be054ce7eea671f" Namespace="calico-system" Pod="csi-node-driver-s9l6b" WorkloadEndpoint="ip--172--31--30--203-k8s-csi--node--driver--s9l6b-eth0" Mar 13 00:42:01.001262 containerd[1987]: time="2026-03-13T00:42:01.001122674Z" level=info msg="connecting to shim 575da387dc7d2fc9025e67e5c1402c28c966dad63d6809708be054ce7eea671f" address="unix:///run/containerd/s/a18df1a94c373076f12a67cc8ced6ed8d6f5343407357d2d443efda985016b19" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:42:01.047620 systemd[1]: Started cri-containerd-575da387dc7d2fc9025e67e5c1402c28c966dad63d6809708be054ce7eea671f.scope - libcontainer container 575da387dc7d2fc9025e67e5c1402c28c966dad63d6809708be054ce7eea671f. Mar 13 00:42:01.091487 containerd[1987]: time="2026-03-13T00:42:01.091450401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s9l6b,Uid:91c42f68-cc08-44d5-8392-1334f9631936,Namespace:calico-system,Attempt:0,} returns sandbox id \"575da387dc7d2fc9025e67e5c1402c28c966dad63d6809708be054ce7eea671f\"" Mar 13 00:42:01.383010 sshd[5208]: Accepted publickey for core from 20.161.92.111 port 37024 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs Mar 13 00:42:01.389364 sshd-session[5208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:42:01.402081 systemd-logind[1966]: New session 8 of user core. Mar 13 00:42:01.414771 systemd[1]: Started session-8.scope - Session 8 of User core. 
Mar 13 00:42:01.685156 containerd[1987]: time="2026-03-13T00:42:01.678910929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59466cf6fd-dg9jn,Uid:76c51aee-1eb6-4a51-9e52-4fb871cbe059,Namespace:calico-system,Attempt:0,}" Mar 13 00:42:01.928092 systemd-networkd[1838]: cali273fd46e37e: Gained IPv6LL Mar 13 00:42:02.438632 systemd-networkd[1838]: cali1d1ce3421a2: Gained IPv6LL Mar 13 00:42:02.655314 containerd[1987]: time="2026-03-13T00:42:02.655266206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-52fsp,Uid:5bf3d678-62d5-4720-9956-5f123a7e4d24,Namespace:calico-system,Attempt:0,}" Mar 13 00:42:02.698890 containerd[1987]: time="2026-03-13T00:42:02.698729042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dcd7cbd55-8kvp5,Uid:a572913e-ac86-4e32-ab1d-883c425c261c,Namespace:calico-system,Attempt:0,}" Mar 13 00:42:02.701919 containerd[1987]: time="2026-03-13T00:42:02.701806958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-gb929,Uid:f539e182-c005-4e5f-b423-9765e83f57bd,Namespace:kube-system,Attempt:0,}" Mar 13 00:42:03.016776 systemd-networkd[1838]: calicb9a0bdd0a5: Link UP Mar 13 00:42:03.021042 systemd-networkd[1838]: calicb9a0bdd0a5: Gained carrier Mar 13 00:42:03.135875 containerd[1987]: 2026-03-13 00:42:02.099 [INFO][5281] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--203-k8s-calico--kube--controllers--59466cf6fd--dg9jn-eth0 calico-kube-controllers-59466cf6fd- calico-system 76c51aee-1eb6-4a51-9e52-4fb871cbe059 833 0 2026-03-13 00:41:25 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:59466cf6fd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-30-203 
calico-kube-controllers-59466cf6fd-dg9jn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicb9a0bdd0a5 [] [] }} ContainerID="4890da6492c71792fadc5364f56d2ed37b556b38cb3fcf9361c8008e985617dd" Namespace="calico-system" Pod="calico-kube-controllers-59466cf6fd-dg9jn" WorkloadEndpoint="ip--172--31--30--203-k8s-calico--kube--controllers--59466cf6fd--dg9jn-" Mar 13 00:42:03.135875 containerd[1987]: 2026-03-13 00:42:02.102 [INFO][5281] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4890da6492c71792fadc5364f56d2ed37b556b38cb3fcf9361c8008e985617dd" Namespace="calico-system" Pod="calico-kube-controllers-59466cf6fd-dg9jn" WorkloadEndpoint="ip--172--31--30--203-k8s-calico--kube--controllers--59466cf6fd--dg9jn-eth0" Mar 13 00:42:03.135875 containerd[1987]: 2026-03-13 00:42:02.574 [INFO][5297] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4890da6492c71792fadc5364f56d2ed37b556b38cb3fcf9361c8008e985617dd" HandleID="k8s-pod-network.4890da6492c71792fadc5364f56d2ed37b556b38cb3fcf9361c8008e985617dd" Workload="ip--172--31--30--203-k8s-calico--kube--controllers--59466cf6fd--dg9jn-eth0" Mar 13 00:42:03.149722 containerd[1987]: 2026-03-13 00:42:02.678 [INFO][5297] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4890da6492c71792fadc5364f56d2ed37b556b38cb3fcf9361c8008e985617dd" HandleID="k8s-pod-network.4890da6492c71792fadc5364f56d2ed37b556b38cb3fcf9361c8008e985617dd" Workload="ip--172--31--30--203-k8s-calico--kube--controllers--59466cf6fd--dg9jn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a5350), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-203", "pod":"calico-kube-controllers-59466cf6fd-dg9jn", "timestamp":"2026-03-13 00:42:02.574310821 +0000 UTC"}, Hostname:"ip-172-31-30-203", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000550b00)} Mar 13 00:42:03.149722 containerd[1987]: 2026-03-13 00:42:02.678 [INFO][5297] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:42:03.149722 containerd[1987]: 2026-03-13 00:42:02.678 [INFO][5297] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:42:03.149722 containerd[1987]: 2026-03-13 00:42:02.678 [INFO][5297] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-203' Mar 13 00:42:03.149722 containerd[1987]: 2026-03-13 00:42:02.746 [INFO][5297] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4890da6492c71792fadc5364f56d2ed37b556b38cb3fcf9361c8008e985617dd" host="ip-172-31-30-203" Mar 13 00:42:03.149722 containerd[1987]: 2026-03-13 00:42:02.805 [INFO][5297] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-30-203" Mar 13 00:42:03.149722 containerd[1987]: 2026-03-13 00:42:02.845 [INFO][5297] ipam/ipam.go 526: Trying affinity for 192.168.43.128/26 host="ip-172-31-30-203" Mar 13 00:42:03.149722 containerd[1987]: 2026-03-13 00:42:02.858 [INFO][5297] ipam/ipam.go 160: Attempting to load block cidr=192.168.43.128/26 host="ip-172-31-30-203" Mar 13 00:42:03.149722 containerd[1987]: 2026-03-13 00:42:02.919 [INFO][5297] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.43.128/26 host="ip-172-31-30-203" Mar 13 00:42:03.150192 containerd[1987]: 2026-03-13 00:42:02.921 [INFO][5297] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.43.128/26 handle="k8s-pod-network.4890da6492c71792fadc5364f56d2ed37b556b38cb3fcf9361c8008e985617dd" host="ip-172-31-30-203" Mar 13 00:42:03.150192 containerd[1987]: 2026-03-13 00:42:02.925 [INFO][5297] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4890da6492c71792fadc5364f56d2ed37b556b38cb3fcf9361c8008e985617dd Mar 13 00:42:03.150192 containerd[1987]: 
2026-03-13 00:42:02.959 [INFO][5297] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.43.128/26 handle="k8s-pod-network.4890da6492c71792fadc5364f56d2ed37b556b38cb3fcf9361c8008e985617dd" host="ip-172-31-30-203" Mar 13 00:42:03.150192 containerd[1987]: 2026-03-13 00:42:03.001 [INFO][5297] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.43.132/26] block=192.168.43.128/26 handle="k8s-pod-network.4890da6492c71792fadc5364f56d2ed37b556b38cb3fcf9361c8008e985617dd" host="ip-172-31-30-203" Mar 13 00:42:03.150192 containerd[1987]: 2026-03-13 00:42:03.001 [INFO][5297] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.43.132/26] handle="k8s-pod-network.4890da6492c71792fadc5364f56d2ed37b556b38cb3fcf9361c8008e985617dd" host="ip-172-31-30-203" Mar 13 00:42:03.150192 containerd[1987]: 2026-03-13 00:42:03.001 [INFO][5297] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:42:03.150192 containerd[1987]: 2026-03-13 00:42:03.001 [INFO][5297] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.43.132/26] IPv6=[] ContainerID="4890da6492c71792fadc5364f56d2ed37b556b38cb3fcf9361c8008e985617dd" HandleID="k8s-pod-network.4890da6492c71792fadc5364f56d2ed37b556b38cb3fcf9361c8008e985617dd" Workload="ip--172--31--30--203-k8s-calico--kube--controllers--59466cf6fd--dg9jn-eth0" Mar 13 00:42:03.151014 containerd[1987]: 2026-03-13 00:42:03.009 [INFO][5281] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4890da6492c71792fadc5364f56d2ed37b556b38cb3fcf9361c8008e985617dd" Namespace="calico-system" Pod="calico-kube-controllers-59466cf6fd-dg9jn" WorkloadEndpoint="ip--172--31--30--203-k8s-calico--kube--controllers--59466cf6fd--dg9jn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--203-k8s-calico--kube--controllers--59466cf6fd--dg9jn-eth0", GenerateName:"calico-kube-controllers-59466cf6fd-", Namespace:"calico-system", 
SelfLink:"", UID:"76c51aee-1eb6-4a51-9e52-4fb871cbe059", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 41, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59466cf6fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-203", ContainerID:"", Pod:"calico-kube-controllers-59466cf6fd-dg9jn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.43.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicb9a0bdd0a5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:42:03.160743 containerd[1987]: 2026-03-13 00:42:03.011 [INFO][5281] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.132/32] ContainerID="4890da6492c71792fadc5364f56d2ed37b556b38cb3fcf9361c8008e985617dd" Namespace="calico-system" Pod="calico-kube-controllers-59466cf6fd-dg9jn" WorkloadEndpoint="ip--172--31--30--203-k8s-calico--kube--controllers--59466cf6fd--dg9jn-eth0" Mar 13 00:42:03.160743 containerd[1987]: 2026-03-13 00:42:03.011 [INFO][5281] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicb9a0bdd0a5 ContainerID="4890da6492c71792fadc5364f56d2ed37b556b38cb3fcf9361c8008e985617dd" Namespace="calico-system" Pod="calico-kube-controllers-59466cf6fd-dg9jn" 
WorkloadEndpoint="ip--172--31--30--203-k8s-calico--kube--controllers--59466cf6fd--dg9jn-eth0" Mar 13 00:42:03.160743 containerd[1987]: 2026-03-13 00:42:03.019 [INFO][5281] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4890da6492c71792fadc5364f56d2ed37b556b38cb3fcf9361c8008e985617dd" Namespace="calico-system" Pod="calico-kube-controllers-59466cf6fd-dg9jn" WorkloadEndpoint="ip--172--31--30--203-k8s-calico--kube--controllers--59466cf6fd--dg9jn-eth0" Mar 13 00:42:03.172072 containerd[1987]: 2026-03-13 00:42:03.025 [INFO][5281] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4890da6492c71792fadc5364f56d2ed37b556b38cb3fcf9361c8008e985617dd" Namespace="calico-system" Pod="calico-kube-controllers-59466cf6fd-dg9jn" WorkloadEndpoint="ip--172--31--30--203-k8s-calico--kube--controllers--59466cf6fd--dg9jn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--203-k8s-calico--kube--controllers--59466cf6fd--dg9jn-eth0", GenerateName:"calico-kube-controllers-59466cf6fd-", Namespace:"calico-system", SelfLink:"", UID:"76c51aee-1eb6-4a51-9e52-4fb871cbe059", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 41, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59466cf6fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-203", 
ContainerID:"4890da6492c71792fadc5364f56d2ed37b556b38cb3fcf9361c8008e985617dd", Pod:"calico-kube-controllers-59466cf6fd-dg9jn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.43.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicb9a0bdd0a5", MAC:"be:86:68:64:35:7f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:42:03.172983 containerd[1987]: 2026-03-13 00:42:03.103 [INFO][5281] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4890da6492c71792fadc5364f56d2ed37b556b38cb3fcf9361c8008e985617dd" Namespace="calico-system" Pod="calico-kube-controllers-59466cf6fd-dg9jn" WorkloadEndpoint="ip--172--31--30--203-k8s-calico--kube--controllers--59466cf6fd--dg9jn-eth0" Mar 13 00:42:03.479765 containerd[1987]: time="2026-03-13T00:42:03.479651062Z" level=info msg="connecting to shim 4890da6492c71792fadc5364f56d2ed37b556b38cb3fcf9361c8008e985617dd" address="unix:///run/containerd/s/946a3671b7496fa8f7455ff16d6703c7bda4ea8c4af1c0f7861f4996a4287c65" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:42:03.664294 containerd[1987]: time="2026-03-13T00:42:03.664185103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-xn5k5,Uid:1d4c1b77-472e-498a-9a86-012ddbcbd0a3,Namespace:kube-system,Attempt:0,}" Mar 13 00:42:03.712728 systemd[1]: Started cri-containerd-4890da6492c71792fadc5364f56d2ed37b556b38cb3fcf9361c8008e985617dd.scope - libcontainer container 4890da6492c71792fadc5364f56d2ed37b556b38cb3fcf9361c8008e985617dd. 
Mar 13 00:42:03.767765 systemd-networkd[1838]: cali9bfcaa0b1f0: Link UP Mar 13 00:42:03.772864 systemd-networkd[1838]: cali9bfcaa0b1f0: Gained carrier Mar 13 00:42:03.875719 containerd[1987]: 2026-03-13 00:42:03.036 [INFO][5328] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--203-k8s-calico--apiserver--7dcd7cbd55--8kvp5-eth0 calico-apiserver-7dcd7cbd55- calico-system a572913e-ac86-4e32-ab1d-883c425c261c 832 0 2026-03-13 00:41:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7dcd7cbd55 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-30-203 calico-apiserver-7dcd7cbd55-8kvp5 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali9bfcaa0b1f0 [] [] }} ContainerID="d64d93b5954a13653ac65f74eeb0d147391288910bbfc7a8f99c3e74cc6d25ff" Namespace="calico-system" Pod="calico-apiserver-7dcd7cbd55-8kvp5" WorkloadEndpoint="ip--172--31--30--203-k8s-calico--apiserver--7dcd7cbd55--8kvp5-" Mar 13 00:42:03.875719 containerd[1987]: 2026-03-13 00:42:03.040 [INFO][5328] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d64d93b5954a13653ac65f74eeb0d147391288910bbfc7a8f99c3e74cc6d25ff" Namespace="calico-system" Pod="calico-apiserver-7dcd7cbd55-8kvp5" WorkloadEndpoint="ip--172--31--30--203-k8s-calico--apiserver--7dcd7cbd55--8kvp5-eth0" Mar 13 00:42:03.875719 containerd[1987]: 2026-03-13 00:42:03.466 [INFO][5366] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d64d93b5954a13653ac65f74eeb0d147391288910bbfc7a8f99c3e74cc6d25ff" HandleID="k8s-pod-network.d64d93b5954a13653ac65f74eeb0d147391288910bbfc7a8f99c3e74cc6d25ff" Workload="ip--172--31--30--203-k8s-calico--apiserver--7dcd7cbd55--8kvp5-eth0" Mar 13 00:42:03.876759 containerd[1987]: 2026-03-13 00:42:03.541 
[INFO][5366] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d64d93b5954a13653ac65f74eeb0d147391288910bbfc7a8f99c3e74cc6d25ff" HandleID="k8s-pod-network.d64d93b5954a13653ac65f74eeb0d147391288910bbfc7a8f99c3e74cc6d25ff" Workload="ip--172--31--30--203-k8s-calico--apiserver--7dcd7cbd55--8kvp5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a5690), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-203", "pod":"calico-apiserver-7dcd7cbd55-8kvp5", "timestamp":"2026-03-13 00:42:03.466310494 +0000 UTC"}, Hostname:"ip-172-31-30-203", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003a8160)} Mar 13 00:42:03.876759 containerd[1987]: 2026-03-13 00:42:03.541 [INFO][5366] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:42:03.876759 containerd[1987]: 2026-03-13 00:42:03.541 [INFO][5366] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 00:42:03.876759 containerd[1987]: 2026-03-13 00:42:03.541 [INFO][5366] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-203' Mar 13 00:42:03.876759 containerd[1987]: 2026-03-13 00:42:03.560 [INFO][5366] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d64d93b5954a13653ac65f74eeb0d147391288910bbfc7a8f99c3e74cc6d25ff" host="ip-172-31-30-203" Mar 13 00:42:03.876759 containerd[1987]: 2026-03-13 00:42:03.576 [INFO][5366] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-30-203" Mar 13 00:42:03.876759 containerd[1987]: 2026-03-13 00:42:03.620 [INFO][5366] ipam/ipam.go 526: Trying affinity for 192.168.43.128/26 host="ip-172-31-30-203" Mar 13 00:42:03.876759 containerd[1987]: 2026-03-13 00:42:03.633 [INFO][5366] ipam/ipam.go 160: Attempting to load block cidr=192.168.43.128/26 host="ip-172-31-30-203" Mar 13 00:42:03.876759 containerd[1987]: 2026-03-13 00:42:03.654 [INFO][5366] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.43.128/26 host="ip-172-31-30-203" Mar 13 00:42:03.877144 containerd[1987]: 2026-03-13 00:42:03.655 [INFO][5366] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.43.128/26 handle="k8s-pod-network.d64d93b5954a13653ac65f74eeb0d147391288910bbfc7a8f99c3e74cc6d25ff" host="ip-172-31-30-203" Mar 13 00:42:03.877144 containerd[1987]: 2026-03-13 00:42:03.660 [INFO][5366] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d64d93b5954a13653ac65f74eeb0d147391288910bbfc7a8f99c3e74cc6d25ff Mar 13 00:42:03.877144 containerd[1987]: 2026-03-13 00:42:03.682 [INFO][5366] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.43.128/26 handle="k8s-pod-network.d64d93b5954a13653ac65f74eeb0d147391288910bbfc7a8f99c3e74cc6d25ff" host="ip-172-31-30-203" Mar 13 00:42:03.877144 containerd[1987]: 2026-03-13 00:42:03.720 [INFO][5366] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.43.133/26] block=192.168.43.128/26 
handle="k8s-pod-network.d64d93b5954a13653ac65f74eeb0d147391288910bbfc7a8f99c3e74cc6d25ff" host="ip-172-31-30-203" Mar 13 00:42:03.877144 containerd[1987]: 2026-03-13 00:42:03.722 [INFO][5366] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.43.133/26] handle="k8s-pod-network.d64d93b5954a13653ac65f74eeb0d147391288910bbfc7a8f99c3e74cc6d25ff" host="ip-172-31-30-203" Mar 13 00:42:03.877144 containerd[1987]: 2026-03-13 00:42:03.723 [INFO][5366] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:42:03.877144 containerd[1987]: 2026-03-13 00:42:03.724 [INFO][5366] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.43.133/26] IPv6=[] ContainerID="d64d93b5954a13653ac65f74eeb0d147391288910bbfc7a8f99c3e74cc6d25ff" HandleID="k8s-pod-network.d64d93b5954a13653ac65f74eeb0d147391288910bbfc7a8f99c3e74cc6d25ff" Workload="ip--172--31--30--203-k8s-calico--apiserver--7dcd7cbd55--8kvp5-eth0" Mar 13 00:42:03.877443 containerd[1987]: 2026-03-13 00:42:03.741 [INFO][5328] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d64d93b5954a13653ac65f74eeb0d147391288910bbfc7a8f99c3e74cc6d25ff" Namespace="calico-system" Pod="calico-apiserver-7dcd7cbd55-8kvp5" WorkloadEndpoint="ip--172--31--30--203-k8s-calico--apiserver--7dcd7cbd55--8kvp5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--203-k8s-calico--apiserver--7dcd7cbd55--8kvp5-eth0", GenerateName:"calico-apiserver-7dcd7cbd55-", Namespace:"calico-system", SelfLink:"", UID:"a572913e-ac86-4e32-ab1d-883c425c261c", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 41, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dcd7cbd55", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-203", ContainerID:"", Pod:"calico-apiserver-7dcd7cbd55-8kvp5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9bfcaa0b1f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:42:03.877544 containerd[1987]: 2026-03-13 00:42:03.741 [INFO][5328] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.133/32] ContainerID="d64d93b5954a13653ac65f74eeb0d147391288910bbfc7a8f99c3e74cc6d25ff" Namespace="calico-system" Pod="calico-apiserver-7dcd7cbd55-8kvp5" WorkloadEndpoint="ip--172--31--30--203-k8s-calico--apiserver--7dcd7cbd55--8kvp5-eth0" Mar 13 00:42:03.877544 containerd[1987]: 2026-03-13 00:42:03.741 [INFO][5328] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9bfcaa0b1f0 ContainerID="d64d93b5954a13653ac65f74eeb0d147391288910bbfc7a8f99c3e74cc6d25ff" Namespace="calico-system" Pod="calico-apiserver-7dcd7cbd55-8kvp5" WorkloadEndpoint="ip--172--31--30--203-k8s-calico--apiserver--7dcd7cbd55--8kvp5-eth0" Mar 13 00:42:03.877544 containerd[1987]: 2026-03-13 00:42:03.775 [INFO][5328] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d64d93b5954a13653ac65f74eeb0d147391288910bbfc7a8f99c3e74cc6d25ff" Namespace="calico-system" Pod="calico-apiserver-7dcd7cbd55-8kvp5" WorkloadEndpoint="ip--172--31--30--203-k8s-calico--apiserver--7dcd7cbd55--8kvp5-eth0" Mar 13 00:42:03.877662 containerd[1987]: 2026-03-13 00:42:03.787 [INFO][5328] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d64d93b5954a13653ac65f74eeb0d147391288910bbfc7a8f99c3e74cc6d25ff" Namespace="calico-system" Pod="calico-apiserver-7dcd7cbd55-8kvp5" WorkloadEndpoint="ip--172--31--30--203-k8s-calico--apiserver--7dcd7cbd55--8kvp5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--203-k8s-calico--apiserver--7dcd7cbd55--8kvp5-eth0", GenerateName:"calico-apiserver-7dcd7cbd55-", Namespace:"calico-system", SelfLink:"", UID:"a572913e-ac86-4e32-ab1d-883c425c261c", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 41, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dcd7cbd55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-203", ContainerID:"d64d93b5954a13653ac65f74eeb0d147391288910bbfc7a8f99c3e74cc6d25ff", Pod:"calico-apiserver-7dcd7cbd55-8kvp5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9bfcaa0b1f0", MAC:"e6:51:96:cc:ab:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:42:03.877764 containerd[1987]: 2026-03-13 00:42:03.826 [INFO][5328] cni-plugin/k8s.go 532: 
Wrote updated endpoint to datastore ContainerID="d64d93b5954a13653ac65f74eeb0d147391288910bbfc7a8f99c3e74cc6d25ff" Namespace="calico-system" Pod="calico-apiserver-7dcd7cbd55-8kvp5" WorkloadEndpoint="ip--172--31--30--203-k8s-calico--apiserver--7dcd7cbd55--8kvp5-eth0" Mar 13 00:42:04.035691 systemd-networkd[1838]: cali952a4952cde: Link UP Mar 13 00:42:04.048515 systemd-networkd[1838]: cali952a4952cde: Gained carrier Mar 13 00:42:04.068775 containerd[1987]: time="2026-03-13T00:42:04.068624439Z" level=info msg="connecting to shim d64d93b5954a13653ac65f74eeb0d147391288910bbfc7a8f99c3e74cc6d25ff" address="unix:///run/containerd/s/7fe4dda779f9cfffd9a7ef19b995fd7e033dbdd7199aded0f72efb6e00472d3c" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:42:04.146692 containerd[1987]: 2026-03-13 00:42:03.077 [INFO][5320] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--203-k8s-coredns--7d764666f9--gb929-eth0 coredns-7d764666f9- kube-system f539e182-c005-4e5f-b423-9765e83f57bd 830 0 2026-03-13 00:41:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-30-203 coredns-7d764666f9-gb929 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali952a4952cde [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="f381bbacedd047f47eda62416d978e74ba49e4c98f1443f03de60d59abb59f0e" Namespace="kube-system" Pod="coredns-7d764666f9-gb929" WorkloadEndpoint="ip--172--31--30--203-k8s-coredns--7d764666f9--gb929-" Mar 13 00:42:04.146692 containerd[1987]: 2026-03-13 00:42:03.077 [INFO][5320] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f381bbacedd047f47eda62416d978e74ba49e4c98f1443f03de60d59abb59f0e" Namespace="kube-system" Pod="coredns-7d764666f9-gb929" 
WorkloadEndpoint="ip--172--31--30--203-k8s-coredns--7d764666f9--gb929-eth0" Mar 13 00:42:04.146692 containerd[1987]: 2026-03-13 00:42:03.480 [INFO][5365] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f381bbacedd047f47eda62416d978e74ba49e4c98f1443f03de60d59abb59f0e" HandleID="k8s-pod-network.f381bbacedd047f47eda62416d978e74ba49e4c98f1443f03de60d59abb59f0e" Workload="ip--172--31--30--203-k8s-coredns--7d764666f9--gb929-eth0" Mar 13 00:42:04.147167 containerd[1987]: 2026-03-13 00:42:03.554 [INFO][5365] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f381bbacedd047f47eda62416d978e74ba49e4c98f1443f03de60d59abb59f0e" HandleID="k8s-pod-network.f381bbacedd047f47eda62416d978e74ba49e4c98f1443f03de60d59abb59f0e" Workload="ip--172--31--30--203-k8s-coredns--7d764666f9--gb929-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f220), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-30-203", "pod":"coredns-7d764666f9-gb929", "timestamp":"2026-03-13 00:42:03.480210301 +0000 UTC"}, Hostname:"ip-172-31-30-203", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003f5a20)} Mar 13 00:42:04.147167 containerd[1987]: 2026-03-13 00:42:03.554 [INFO][5365] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:42:04.147167 containerd[1987]: 2026-03-13 00:42:03.725 [INFO][5365] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 00:42:04.147167 containerd[1987]: 2026-03-13 00:42:03.725 [INFO][5365] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-203' Mar 13 00:42:04.147167 containerd[1987]: 2026-03-13 00:42:03.759 [INFO][5365] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f381bbacedd047f47eda62416d978e74ba49e4c98f1443f03de60d59abb59f0e" host="ip-172-31-30-203" Mar 13 00:42:04.147167 containerd[1987]: 2026-03-13 00:42:03.789 [INFO][5365] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-30-203" Mar 13 00:42:04.147167 containerd[1987]: 2026-03-13 00:42:03.828 [INFO][5365] ipam/ipam.go 526: Trying affinity for 192.168.43.128/26 host="ip-172-31-30-203" Mar 13 00:42:04.147167 containerd[1987]: 2026-03-13 00:42:03.856 [INFO][5365] ipam/ipam.go 160: Attempting to load block cidr=192.168.43.128/26 host="ip-172-31-30-203" Mar 13 00:42:04.147167 containerd[1987]: 2026-03-13 00:42:03.882 [INFO][5365] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.43.128/26 host="ip-172-31-30-203" Mar 13 00:42:04.147758 containerd[1987]: 2026-03-13 00:42:03.882 [INFO][5365] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.43.128/26 handle="k8s-pod-network.f381bbacedd047f47eda62416d978e74ba49e4c98f1443f03de60d59abb59f0e" host="ip-172-31-30-203" Mar 13 00:42:04.147758 containerd[1987]: 2026-03-13 00:42:03.894 [INFO][5365] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f381bbacedd047f47eda62416d978e74ba49e4c98f1443f03de60d59abb59f0e Mar 13 00:42:04.147758 containerd[1987]: 2026-03-13 00:42:03.920 [INFO][5365] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.43.128/26 handle="k8s-pod-network.f381bbacedd047f47eda62416d978e74ba49e4c98f1443f03de60d59abb59f0e" host="ip-172-31-30-203" Mar 13 00:42:04.147758 containerd[1987]: 2026-03-13 00:42:03.973 [INFO][5365] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.43.134/26] block=192.168.43.128/26 
handle="k8s-pod-network.f381bbacedd047f47eda62416d978e74ba49e4c98f1443f03de60d59abb59f0e" host="ip-172-31-30-203" Mar 13 00:42:04.147758 containerd[1987]: 2026-03-13 00:42:03.973 [INFO][5365] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.43.134/26] handle="k8s-pod-network.f381bbacedd047f47eda62416d978e74ba49e4c98f1443f03de60d59abb59f0e" host="ip-172-31-30-203" Mar 13 00:42:04.147758 containerd[1987]: 2026-03-13 00:42:03.978 [INFO][5365] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:42:04.147758 containerd[1987]: 2026-03-13 00:42:03.979 [INFO][5365] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.43.134/26] IPv6=[] ContainerID="f381bbacedd047f47eda62416d978e74ba49e4c98f1443f03de60d59abb59f0e" HandleID="k8s-pod-network.f381bbacedd047f47eda62416d978e74ba49e4c98f1443f03de60d59abb59f0e" Workload="ip--172--31--30--203-k8s-coredns--7d764666f9--gb929-eth0" Mar 13 00:42:04.148307 containerd[1987]: 2026-03-13 00:42:04.002 [INFO][5320] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f381bbacedd047f47eda62416d978e74ba49e4c98f1443f03de60d59abb59f0e" Namespace="kube-system" Pod="coredns-7d764666f9-gb929" WorkloadEndpoint="ip--172--31--30--203-k8s-coredns--7d764666f9--gb929-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--203-k8s-coredns--7d764666f9--gb929-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"f539e182-c005-4e5f-b423-9765e83f57bd", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 41, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-203", ContainerID:"", Pod:"coredns-7d764666f9-gb929", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali952a4952cde", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:42:04.148307 containerd[1987]: 2026-03-13 00:42:04.003 [INFO][5320] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.134/32] ContainerID="f381bbacedd047f47eda62416d978e74ba49e4c98f1443f03de60d59abb59f0e" Namespace="kube-system" Pod="coredns-7d764666f9-gb929" WorkloadEndpoint="ip--172--31--30--203-k8s-coredns--7d764666f9--gb929-eth0" Mar 13 00:42:04.148307 containerd[1987]: 2026-03-13 00:42:04.003 [INFO][5320] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali952a4952cde ContainerID="f381bbacedd047f47eda62416d978e74ba49e4c98f1443f03de60d59abb59f0e" Namespace="kube-system" Pod="coredns-7d764666f9-gb929" 
WorkloadEndpoint="ip--172--31--30--203-k8s-coredns--7d764666f9--gb929-eth0" Mar 13 00:42:04.148307 containerd[1987]: 2026-03-13 00:42:04.050 [INFO][5320] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f381bbacedd047f47eda62416d978e74ba49e4c98f1443f03de60d59abb59f0e" Namespace="kube-system" Pod="coredns-7d764666f9-gb929" WorkloadEndpoint="ip--172--31--30--203-k8s-coredns--7d764666f9--gb929-eth0" Mar 13 00:42:04.148307 containerd[1987]: 2026-03-13 00:42:04.052 [INFO][5320] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f381bbacedd047f47eda62416d978e74ba49e4c98f1443f03de60d59abb59f0e" Namespace="kube-system" Pod="coredns-7d764666f9-gb929" WorkloadEndpoint="ip--172--31--30--203-k8s-coredns--7d764666f9--gb929-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--203-k8s-coredns--7d764666f9--gb929-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"f539e182-c005-4e5f-b423-9765e83f57bd", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 41, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-203", ContainerID:"f381bbacedd047f47eda62416d978e74ba49e4c98f1443f03de60d59abb59f0e", Pod:"coredns-7d764666f9-gb929", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali952a4952cde", MAC:"82:c8:0b:49:55:66", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:42:04.148307 containerd[1987]: 2026-03-13 00:42:04.107 [INFO][5320] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f381bbacedd047f47eda62416d978e74ba49e4c98f1443f03de60d59abb59f0e" Namespace="kube-system" Pod="coredns-7d764666f9-gb929" WorkloadEndpoint="ip--172--31--30--203-k8s-coredns--7d764666f9--gb929-eth0" Mar 13 00:42:04.243842 systemd-networkd[1838]: cali0e36613158f: Link UP Mar 13 00:42:04.244584 systemd-networkd[1838]: cali0e36613158f: Gained carrier Mar 13 00:42:04.283607 systemd[1]: Started cri-containerd-d64d93b5954a13653ac65f74eeb0d147391288910bbfc7a8f99c3e74cc6d25ff.scope - libcontainer container d64d93b5954a13653ac65f74eeb0d147391288910bbfc7a8f99c3e74cc6d25ff. 
Mar 13 00:42:04.338628 containerd[1987]: 2026-03-13 00:42:03.071 [INFO][5308] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--203-k8s-goldmane--9f7667bb8--52fsp-eth0 goldmane-9f7667bb8- calico-system 5bf3d678-62d5-4720-9956-5f123a7e4d24 842 0 2026-03-13 00:41:23 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-30-203 goldmane-9f7667bb8-52fsp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0e36613158f [] [] }} ContainerID="6d82c14f6d774138ad6153a293286fa1af689816986978eab98763eeec587437" Namespace="calico-system" Pod="goldmane-9f7667bb8-52fsp" WorkloadEndpoint="ip--172--31--30--203-k8s-goldmane--9f7667bb8--52fsp-" Mar 13 00:42:04.338628 containerd[1987]: 2026-03-13 00:42:03.071 [INFO][5308] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6d82c14f6d774138ad6153a293286fa1af689816986978eab98763eeec587437" Namespace="calico-system" Pod="goldmane-9f7667bb8-52fsp" WorkloadEndpoint="ip--172--31--30--203-k8s-goldmane--9f7667bb8--52fsp-eth0" Mar 13 00:42:04.338628 containerd[1987]: 2026-03-13 00:42:03.511 [INFO][5354] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d82c14f6d774138ad6153a293286fa1af689816986978eab98763eeec587437" HandleID="k8s-pod-network.6d82c14f6d774138ad6153a293286fa1af689816986978eab98763eeec587437" Workload="ip--172--31--30--203-k8s-goldmane--9f7667bb8--52fsp-eth0" Mar 13 00:42:04.338628 containerd[1987]: 2026-03-13 00:42:03.559 [INFO][5354] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6d82c14f6d774138ad6153a293286fa1af689816986978eab98763eeec587437" HandleID="k8s-pod-network.6d82c14f6d774138ad6153a293286fa1af689816986978eab98763eeec587437" 
Workload="ip--172--31--30--203-k8s-goldmane--9f7667bb8--52fsp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f7b20), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-203", "pod":"goldmane-9f7667bb8-52fsp", "timestamp":"2026-03-13 00:42:03.511009406 +0000 UTC"}, Hostname:"ip-172-31-30-203", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000332420)} Mar 13 00:42:04.338628 containerd[1987]: 2026-03-13 00:42:03.559 [INFO][5354] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:42:04.338628 containerd[1987]: 2026-03-13 00:42:03.974 [INFO][5354] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:42:04.338628 containerd[1987]: 2026-03-13 00:42:03.974 [INFO][5354] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-203' Mar 13 00:42:04.338628 containerd[1987]: 2026-03-13 00:42:03.980 [INFO][5354] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6d82c14f6d774138ad6153a293286fa1af689816986978eab98763eeec587437" host="ip-172-31-30-203" Mar 13 00:42:04.338628 containerd[1987]: 2026-03-13 00:42:04.001 [INFO][5354] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-30-203" Mar 13 00:42:04.338628 containerd[1987]: 2026-03-13 00:42:04.053 [INFO][5354] ipam/ipam.go 526: Trying affinity for 192.168.43.128/26 host="ip-172-31-30-203" Mar 13 00:42:04.338628 containerd[1987]: 2026-03-13 00:42:04.064 [INFO][5354] ipam/ipam.go 160: Attempting to load block cidr=192.168.43.128/26 host="ip-172-31-30-203" Mar 13 00:42:04.338628 containerd[1987]: 2026-03-13 00:42:04.100 [INFO][5354] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.43.128/26 host="ip-172-31-30-203" Mar 13 00:42:04.338628 containerd[1987]: 2026-03-13 
00:42:04.104 [INFO][5354] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.43.128/26 handle="k8s-pod-network.6d82c14f6d774138ad6153a293286fa1af689816986978eab98763eeec587437" host="ip-172-31-30-203" Mar 13 00:42:04.338628 containerd[1987]: 2026-03-13 00:42:04.120 [INFO][5354] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6d82c14f6d774138ad6153a293286fa1af689816986978eab98763eeec587437 Mar 13 00:42:04.338628 containerd[1987]: 2026-03-13 00:42:04.135 [INFO][5354] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.43.128/26 handle="k8s-pod-network.6d82c14f6d774138ad6153a293286fa1af689816986978eab98763eeec587437" host="ip-172-31-30-203" Mar 13 00:42:04.338628 containerd[1987]: 2026-03-13 00:42:04.188 [INFO][5354] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.43.135/26] block=192.168.43.128/26 handle="k8s-pod-network.6d82c14f6d774138ad6153a293286fa1af689816986978eab98763eeec587437" host="ip-172-31-30-203" Mar 13 00:42:04.338628 containerd[1987]: 2026-03-13 00:42:04.188 [INFO][5354] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.43.135/26] handle="k8s-pod-network.6d82c14f6d774138ad6153a293286fa1af689816986978eab98763eeec587437" host="ip-172-31-30-203" Mar 13 00:42:04.338628 containerd[1987]: 2026-03-13 00:42:04.188 [INFO][5354] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 13 00:42:04.338628 containerd[1987]: 2026-03-13 00:42:04.188 [INFO][5354] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.43.135/26] IPv6=[] ContainerID="6d82c14f6d774138ad6153a293286fa1af689816986978eab98763eeec587437" HandleID="k8s-pod-network.6d82c14f6d774138ad6153a293286fa1af689816986978eab98763eeec587437" Workload="ip--172--31--30--203-k8s-goldmane--9f7667bb8--52fsp-eth0" Mar 13 00:42:04.343029 containerd[1987]: 2026-03-13 00:42:04.210 [INFO][5308] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6d82c14f6d774138ad6153a293286fa1af689816986978eab98763eeec587437" Namespace="calico-system" Pod="goldmane-9f7667bb8-52fsp" WorkloadEndpoint="ip--172--31--30--203-k8s-goldmane--9f7667bb8--52fsp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--203-k8s-goldmane--9f7667bb8--52fsp-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"5bf3d678-62d5-4720-9956-5f123a7e4d24", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 41, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-203", ContainerID:"", Pod:"goldmane-9f7667bb8-52fsp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.43.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali0e36613158f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:42:04.343029 containerd[1987]: 2026-03-13 00:42:04.210 [INFO][5308] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.135/32] ContainerID="6d82c14f6d774138ad6153a293286fa1af689816986978eab98763eeec587437" Namespace="calico-system" Pod="goldmane-9f7667bb8-52fsp" WorkloadEndpoint="ip--172--31--30--203-k8s-goldmane--9f7667bb8--52fsp-eth0" Mar 13 00:42:04.343029 containerd[1987]: 2026-03-13 00:42:04.210 [INFO][5308] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0e36613158f ContainerID="6d82c14f6d774138ad6153a293286fa1af689816986978eab98763eeec587437" Namespace="calico-system" Pod="goldmane-9f7667bb8-52fsp" WorkloadEndpoint="ip--172--31--30--203-k8s-goldmane--9f7667bb8--52fsp-eth0" Mar 13 00:42:04.343029 containerd[1987]: 2026-03-13 00:42:04.255 [INFO][5308] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d82c14f6d774138ad6153a293286fa1af689816986978eab98763eeec587437" Namespace="calico-system" Pod="goldmane-9f7667bb8-52fsp" WorkloadEndpoint="ip--172--31--30--203-k8s-goldmane--9f7667bb8--52fsp-eth0" Mar 13 00:42:04.343029 containerd[1987]: 2026-03-13 00:42:04.265 [INFO][5308] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6d82c14f6d774138ad6153a293286fa1af689816986978eab98763eeec587437" Namespace="calico-system" Pod="goldmane-9f7667bb8-52fsp" WorkloadEndpoint="ip--172--31--30--203-k8s-goldmane--9f7667bb8--52fsp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--203-k8s-goldmane--9f7667bb8--52fsp-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"5bf3d678-62d5-4720-9956-5f123a7e4d24", ResourceVersion:"842", Generation:0, 
CreationTimestamp:time.Date(2026, time.March, 13, 0, 41, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-203", ContainerID:"6d82c14f6d774138ad6153a293286fa1af689816986978eab98763eeec587437", Pod:"goldmane-9f7667bb8-52fsp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.43.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0e36613158f", MAC:"ba:35:86:58:2e:cb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:42:04.343029 containerd[1987]: 2026-03-13 00:42:04.304 [INFO][5308] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6d82c14f6d774138ad6153a293286fa1af689816986978eab98763eeec587437" Namespace="calico-system" Pod="goldmane-9f7667bb8-52fsp" WorkloadEndpoint="ip--172--31--30--203-k8s-goldmane--9f7667bb8--52fsp-eth0" Mar 13 00:42:04.340031 sshd-session[5208]: pam_unix(sshd:session): session closed for user core Mar 13 00:42:04.346627 sshd[5276]: Connection closed by 20.161.92.111 port 37024 Mar 13 00:42:04.359773 systemd-logind[1966]: Session 8 logged out. Waiting for processes to exit. Mar 13 00:42:04.359923 systemd[1]: sshd@7-172.31.30.203:22-20.161.92.111:37024.service: Deactivated successfully. Mar 13 00:42:04.367722 systemd[1]: session-8.scope: Deactivated successfully. 
Mar 13 00:42:04.378706 containerd[1987]: time="2026-03-13T00:42:04.377460293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59466cf6fd-dg9jn,Uid:76c51aee-1eb6-4a51-9e52-4fb871cbe059,Namespace:calico-system,Attempt:0,} returns sandbox id \"4890da6492c71792fadc5364f56d2ed37b556b38cb3fcf9361c8008e985617dd\"" Mar 13 00:42:04.377697 systemd-logind[1966]: Removed session 8. Mar 13 00:42:04.421791 containerd[1987]: time="2026-03-13T00:42:04.421738016Z" level=info msg="connecting to shim f381bbacedd047f47eda62416d978e74ba49e4c98f1443f03de60d59abb59f0e" address="unix:///run/containerd/s/49d22a08521d4db3156e3a66711d4f7d62c5813996603231e14f520abd013daf" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:42:04.478159 containerd[1987]: time="2026-03-13T00:42:04.477728839Z" level=info msg="connecting to shim 6d82c14f6d774138ad6153a293286fa1af689816986978eab98763eeec587437" address="unix:///run/containerd/s/7b155aac3007e4a90f63a461ca0c0f3acf8fc5c1ea120d5d01dc1e085b5f3985" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:42:04.570583 systemd[1]: Started cri-containerd-f381bbacedd047f47eda62416d978e74ba49e4c98f1443f03de60d59abb59f0e.scope - libcontainer container f381bbacedd047f47eda62416d978e74ba49e4c98f1443f03de60d59abb59f0e. Mar 13 00:42:04.659956 systemd[1]: Started cri-containerd-6d82c14f6d774138ad6153a293286fa1af689816986978eab98763eeec587437.scope - libcontainer container 6d82c14f6d774138ad6153a293286fa1af689816986978eab98763eeec587437. 
Mar 13 00:42:04.679272 containerd[1987]: time="2026-03-13T00:42:04.679226829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dcd7cbd55-8kvp5,Uid:a572913e-ac86-4e32-ab1d-883c425c261c,Namespace:calico-system,Attempt:0,} returns sandbox id \"d64d93b5954a13653ac65f74eeb0d147391288910bbfc7a8f99c3e74cc6d25ff\"" Mar 13 00:42:04.752569 systemd-networkd[1838]: calif52b5f00261: Link UP Mar 13 00:42:04.754161 systemd-networkd[1838]: calif52b5f00261: Gained carrier Mar 13 00:42:04.765103 containerd[1987]: time="2026-03-13T00:42:04.765061181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-gb929,Uid:f539e182-c005-4e5f-b423-9765e83f57bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"f381bbacedd047f47eda62416d978e74ba49e4c98f1443f03de60d59abb59f0e\"" Mar 13 00:42:04.777642 containerd[1987]: time="2026-03-13T00:42:04.777578631Z" level=info msg="CreateContainer within sandbox \"f381bbacedd047f47eda62416d978e74ba49e4c98f1443f03de60d59abb59f0e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 13 00:42:04.807877 containerd[1987]: 2026-03-13 00:42:04.147 [INFO][5423] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--203-k8s-coredns--7d764666f9--xn5k5-eth0 coredns-7d764666f9- kube-system 1d4c1b77-472e-498a-9a86-012ddbcbd0a3 831 0 2026-03-13 00:41:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-30-203 coredns-7d764666f9-xn5k5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif52b5f00261 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="bbdbaba93725efacf31696a38ca28ff8a56341d0fd090f1201dc9d932433bac6" Namespace="kube-system" Pod="coredns-7d764666f9-xn5k5" 
WorkloadEndpoint="ip--172--31--30--203-k8s-coredns--7d764666f9--xn5k5-" Mar 13 00:42:04.807877 containerd[1987]: 2026-03-13 00:42:04.151 [INFO][5423] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bbdbaba93725efacf31696a38ca28ff8a56341d0fd090f1201dc9d932433bac6" Namespace="kube-system" Pod="coredns-7d764666f9-xn5k5" WorkloadEndpoint="ip--172--31--30--203-k8s-coredns--7d764666f9--xn5k5-eth0" Mar 13 00:42:04.807877 containerd[1987]: 2026-03-13 00:42:04.524 [INFO][5499] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bbdbaba93725efacf31696a38ca28ff8a56341d0fd090f1201dc9d932433bac6" HandleID="k8s-pod-network.bbdbaba93725efacf31696a38ca28ff8a56341d0fd090f1201dc9d932433bac6" Workload="ip--172--31--30--203-k8s-coredns--7d764666f9--xn5k5-eth0" Mar 13 00:42:04.807877 containerd[1987]: 2026-03-13 00:42:04.573 [INFO][5499] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="bbdbaba93725efacf31696a38ca28ff8a56341d0fd090f1201dc9d932433bac6" HandleID="k8s-pod-network.bbdbaba93725efacf31696a38ca28ff8a56341d0fd090f1201dc9d932433bac6" Workload="ip--172--31--30--203-k8s-coredns--7d764666f9--xn5k5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103e80), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-30-203", "pod":"coredns-7d764666f9-xn5k5", "timestamp":"2026-03-13 00:42:04.52472756 +0000 UTC"}, Hostname:"ip-172-31-30-203", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003f4dc0)} Mar 13 00:42:04.807877 containerd[1987]: 2026-03-13 00:42:04.573 [INFO][5499] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:42:04.807877 containerd[1987]: 2026-03-13 00:42:04.573 [INFO][5499] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 00:42:04.807877 containerd[1987]: 2026-03-13 00:42:04.573 [INFO][5499] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-203' Mar 13 00:42:04.807877 containerd[1987]: 2026-03-13 00:42:04.587 [INFO][5499] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.bbdbaba93725efacf31696a38ca28ff8a56341d0fd090f1201dc9d932433bac6" host="ip-172-31-30-203" Mar 13 00:42:04.807877 containerd[1987]: 2026-03-13 00:42:04.607 [INFO][5499] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-30-203" Mar 13 00:42:04.807877 containerd[1987]: 2026-03-13 00:42:04.640 [INFO][5499] ipam/ipam.go 526: Trying affinity for 192.168.43.128/26 host="ip-172-31-30-203" Mar 13 00:42:04.807877 containerd[1987]: 2026-03-13 00:42:04.647 [INFO][5499] ipam/ipam.go 160: Attempting to load block cidr=192.168.43.128/26 host="ip-172-31-30-203" Mar 13 00:42:04.807877 containerd[1987]: 2026-03-13 00:42:04.661 [INFO][5499] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.43.128/26 host="ip-172-31-30-203" Mar 13 00:42:04.807877 containerd[1987]: 2026-03-13 00:42:04.661 [INFO][5499] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.43.128/26 handle="k8s-pod-network.bbdbaba93725efacf31696a38ca28ff8a56341d0fd090f1201dc9d932433bac6" host="ip-172-31-30-203" Mar 13 00:42:04.807877 containerd[1987]: 2026-03-13 00:42:04.666 [INFO][5499] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.bbdbaba93725efacf31696a38ca28ff8a56341d0fd090f1201dc9d932433bac6 Mar 13 00:42:04.807877 containerd[1987]: 2026-03-13 00:42:04.677 [INFO][5499] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.43.128/26 handle="k8s-pod-network.bbdbaba93725efacf31696a38ca28ff8a56341d0fd090f1201dc9d932433bac6" host="ip-172-31-30-203" Mar 13 00:42:04.807877 containerd[1987]: 2026-03-13 00:42:04.711 [INFO][5499] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.43.136/26] block=192.168.43.128/26 
handle="k8s-pod-network.bbdbaba93725efacf31696a38ca28ff8a56341d0fd090f1201dc9d932433bac6" host="ip-172-31-30-203" Mar 13 00:42:04.807877 containerd[1987]: 2026-03-13 00:42:04.711 [INFO][5499] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.43.136/26] handle="k8s-pod-network.bbdbaba93725efacf31696a38ca28ff8a56341d0fd090f1201dc9d932433bac6" host="ip-172-31-30-203" Mar 13 00:42:04.807877 containerd[1987]: 2026-03-13 00:42:04.711 [INFO][5499] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:42:04.807877 containerd[1987]: 2026-03-13 00:42:04.712 [INFO][5499] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.43.136/26] IPv6=[] ContainerID="bbdbaba93725efacf31696a38ca28ff8a56341d0fd090f1201dc9d932433bac6" HandleID="k8s-pod-network.bbdbaba93725efacf31696a38ca28ff8a56341d0fd090f1201dc9d932433bac6" Workload="ip--172--31--30--203-k8s-coredns--7d764666f9--xn5k5-eth0" Mar 13 00:42:04.808775 containerd[1987]: 2026-03-13 00:42:04.737 [INFO][5423] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bbdbaba93725efacf31696a38ca28ff8a56341d0fd090f1201dc9d932433bac6" Namespace="kube-system" Pod="coredns-7d764666f9-xn5k5" WorkloadEndpoint="ip--172--31--30--203-k8s-coredns--7d764666f9--xn5k5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--203-k8s-coredns--7d764666f9--xn5k5-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"1d4c1b77-472e-498a-9a86-012ddbcbd0a3", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 41, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-203", ContainerID:"", Pod:"coredns-7d764666f9-xn5k5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif52b5f00261", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:42:04.808775 containerd[1987]: 2026-03-13 00:42:04.737 [INFO][5423] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.136/32] ContainerID="bbdbaba93725efacf31696a38ca28ff8a56341d0fd090f1201dc9d932433bac6" Namespace="kube-system" Pod="coredns-7d764666f9-xn5k5" WorkloadEndpoint="ip--172--31--30--203-k8s-coredns--7d764666f9--xn5k5-eth0" Mar 13 00:42:04.808775 containerd[1987]: 2026-03-13 00:42:04.737 [INFO][5423] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif52b5f00261 ContainerID="bbdbaba93725efacf31696a38ca28ff8a56341d0fd090f1201dc9d932433bac6" Namespace="kube-system" Pod="coredns-7d764666f9-xn5k5" 
WorkloadEndpoint="ip--172--31--30--203-k8s-coredns--7d764666f9--xn5k5-eth0" Mar 13 00:42:04.808775 containerd[1987]: 2026-03-13 00:42:04.762 [INFO][5423] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bbdbaba93725efacf31696a38ca28ff8a56341d0fd090f1201dc9d932433bac6" Namespace="kube-system" Pod="coredns-7d764666f9-xn5k5" WorkloadEndpoint="ip--172--31--30--203-k8s-coredns--7d764666f9--xn5k5-eth0" Mar 13 00:42:04.808775 containerd[1987]: 2026-03-13 00:42:04.763 [INFO][5423] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bbdbaba93725efacf31696a38ca28ff8a56341d0fd090f1201dc9d932433bac6" Namespace="kube-system" Pod="coredns-7d764666f9-xn5k5" WorkloadEndpoint="ip--172--31--30--203-k8s-coredns--7d764666f9--xn5k5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--203-k8s-coredns--7d764666f9--xn5k5-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"1d4c1b77-472e-498a-9a86-012ddbcbd0a3", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 41, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-203", ContainerID:"bbdbaba93725efacf31696a38ca28ff8a56341d0fd090f1201dc9d932433bac6", Pod:"coredns-7d764666f9-xn5k5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif52b5f00261", MAC:"02:6f:87:99:54:38", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:42:04.808775 containerd[1987]: 2026-03-13 00:42:04.781 [INFO][5423] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bbdbaba93725efacf31696a38ca28ff8a56341d0fd090f1201dc9d932433bac6" Namespace="kube-system" Pod="coredns-7d764666f9-xn5k5" WorkloadEndpoint="ip--172--31--30--203-k8s-coredns--7d764666f9--xn5k5-eth0" Mar 13 00:42:04.870539 systemd-networkd[1838]: calicb9a0bdd0a5: Gained IPv6LL Mar 13 00:42:04.897093 containerd[1987]: time="2026-03-13T00:42:04.896907675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-52fsp,Uid:5bf3d678-62d5-4720-9956-5f123a7e4d24,Namespace:calico-system,Attempt:0,} returns sandbox id \"6d82c14f6d774138ad6153a293286fa1af689816986978eab98763eeec587437\"" Mar 13 00:42:04.899270 containerd[1987]: time="2026-03-13T00:42:04.897455747Z" level=info msg="Container 952c0cdbf3a2c1023968180bda79ff76468b9d486000178b07e664a7fb595b5d: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:42:04.903303 containerd[1987]: 
time="2026-03-13T00:42:04.903257834Z" level=info msg="connecting to shim bbdbaba93725efacf31696a38ca28ff8a56341d0fd090f1201dc9d932433bac6" address="unix:///run/containerd/s/10c7a5112f6ae0eb05b6a293e12011da2e00a8154acac79ef488e5a7fcf0ae7f" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:42:04.920425 containerd[1987]: time="2026-03-13T00:42:04.919321137Z" level=info msg="CreateContainer within sandbox \"f381bbacedd047f47eda62416d978e74ba49e4c98f1443f03de60d59abb59f0e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"952c0cdbf3a2c1023968180bda79ff76468b9d486000178b07e664a7fb595b5d\"" Mar 13 00:42:04.923204 containerd[1987]: time="2026-03-13T00:42:04.923161588Z" level=info msg="StartContainer for \"952c0cdbf3a2c1023968180bda79ff76468b9d486000178b07e664a7fb595b5d\"" Mar 13 00:42:04.936397 containerd[1987]: time="2026-03-13T00:42:04.935856988Z" level=info msg="connecting to shim 952c0cdbf3a2c1023968180bda79ff76468b9d486000178b07e664a7fb595b5d" address="unix:///run/containerd/s/49d22a08521d4db3156e3a66711d4f7d62c5813996603231e14f520abd013daf" protocol=ttrpc version=3 Mar 13 00:42:04.994810 systemd[1]: Started cri-containerd-952c0cdbf3a2c1023968180bda79ff76468b9d486000178b07e664a7fb595b5d.scope - libcontainer container 952c0cdbf3a2c1023968180bda79ff76468b9d486000178b07e664a7fb595b5d. Mar 13 00:42:05.007662 systemd[1]: Started cri-containerd-bbdbaba93725efacf31696a38ca28ff8a56341d0fd090f1201dc9d932433bac6.scope - libcontainer container bbdbaba93725efacf31696a38ca28ff8a56341d0fd090f1201dc9d932433bac6. 
Mar 13 00:42:05.105513 containerd[1987]: time="2026-03-13T00:42:05.105468176Z" level=info msg="StartContainer for \"952c0cdbf3a2c1023968180bda79ff76468b9d486000178b07e664a7fb595b5d\" returns successfully" Mar 13 00:42:05.145220 containerd[1987]: time="2026-03-13T00:42:05.145172265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-xn5k5,Uid:1d4c1b77-472e-498a-9a86-012ddbcbd0a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"bbdbaba93725efacf31696a38ca28ff8a56341d0fd090f1201dc9d932433bac6\"" Mar 13 00:42:05.153183 containerd[1987]: time="2026-03-13T00:42:05.153139897Z" level=info msg="CreateContainer within sandbox \"bbdbaba93725efacf31696a38ca28ff8a56341d0fd090f1201dc9d932433bac6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 13 00:42:05.168274 containerd[1987]: time="2026-03-13T00:42:05.168228079Z" level=info msg="Container 0e1c21f480cc5cd7f39b3b32fcb642226d50a4f68ac42f30740c192755cd1afe: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:42:05.185814 containerd[1987]: time="2026-03-13T00:42:05.185699673Z" level=info msg="CreateContainer within sandbox \"bbdbaba93725efacf31696a38ca28ff8a56341d0fd090f1201dc9d932433bac6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0e1c21f480cc5cd7f39b3b32fcb642226d50a4f68ac42f30740c192755cd1afe\"" Mar 13 00:42:05.187806 containerd[1987]: time="2026-03-13T00:42:05.187736501Z" level=info msg="StartContainer for \"0e1c21f480cc5cd7f39b3b32fcb642226d50a4f68ac42f30740c192755cd1afe\"" Mar 13 00:42:05.190640 containerd[1987]: time="2026-03-13T00:42:05.190601334Z" level=info msg="connecting to shim 0e1c21f480cc5cd7f39b3b32fcb642226d50a4f68ac42f30740c192755cd1afe" address="unix:///run/containerd/s/10c7a5112f6ae0eb05b6a293e12011da2e00a8154acac79ef488e5a7fcf0ae7f" protocol=ttrpc version=3 Mar 13 00:42:05.240721 systemd[1]: Started cri-containerd-0e1c21f480cc5cd7f39b3b32fcb642226d50a4f68ac42f30740c192755cd1afe.scope - libcontainer container 
0e1c21f480cc5cd7f39b3b32fcb642226d50a4f68ac42f30740c192755cd1afe. Mar 13 00:42:05.358010 containerd[1987]: time="2026-03-13T00:42:05.357965949Z" level=info msg="StartContainer for \"0e1c21f480cc5cd7f39b3b32fcb642226d50a4f68ac42f30740c192755cd1afe\" returns successfully" Mar 13 00:42:05.382598 systemd-networkd[1838]: cali952a4952cde: Gained IPv6LL Mar 13 00:42:05.510521 systemd-networkd[1838]: cali0e36613158f: Gained IPv6LL Mar 13 00:42:05.704312 systemd-networkd[1838]: cali9bfcaa0b1f0: Gained IPv6LL Mar 13 00:42:05.895218 systemd-networkd[1838]: calif52b5f00261: Gained IPv6LL Mar 13 00:42:06.371678 kubelet[3329]: I0313 00:42:06.371481 3329 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-gb929" podStartSLOduration=56.371458713 podStartE2EDuration="56.371458713s" podCreationTimestamp="2026-03-13 00:41:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:42:05.348143034 +0000 UTC m=+59.884388527" watchObservedRunningTime="2026-03-13 00:42:06.371458713 +0000 UTC m=+60.907704208" Mar 13 00:42:06.372856 kubelet[3329]: I0313 00:42:06.372781 3329 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-xn5k5" podStartSLOduration=56.372764532 podStartE2EDuration="56.372764532s" podCreationTimestamp="2026-03-13 00:41:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:42:06.370377347 +0000 UTC m=+60.906622840" watchObservedRunningTime="2026-03-13 00:42:06.372764532 +0000 UTC m=+60.909010024" Mar 13 00:42:06.540467 containerd[1987]: time="2026-03-13T00:42:06.539729663Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:42:06.541430 containerd[1987]: 
time="2026-03-13T00:42:06.541158284Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 13 00:42:06.544396 containerd[1987]: time="2026-03-13T00:42:06.544252912Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:42:06.548185 containerd[1987]: time="2026-03-13T00:42:06.548116396Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:42:06.550042 containerd[1987]: time="2026-03-13T00:42:06.549836346Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 6.256198124s" Mar 13 00:42:06.550042 containerd[1987]: time="2026-03-13T00:42:06.549887140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 13 00:42:06.554324 containerd[1987]: time="2026-03-13T00:42:06.553397992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 13 00:42:06.564923 containerd[1987]: time="2026-03-13T00:42:06.564877729Z" level=info msg="CreateContainer within sandbox \"cd765525426c118da02250e4f95dffbe273ded791a3ae95dd81864694178038b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 13 00:42:06.576364 containerd[1987]: time="2026-03-13T00:42:06.574131849Z" level=info msg="Container ea1dbb0b86de2cad502a13eadd6f2376240a10b4db62ed4cd27a450bb610dfb3: CDI devices from 
CRI Config.CDIDevices: []" Mar 13 00:42:06.590549 containerd[1987]: time="2026-03-13T00:42:06.590498331Z" level=info msg="CreateContainer within sandbox \"cd765525426c118da02250e4f95dffbe273ded791a3ae95dd81864694178038b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ea1dbb0b86de2cad502a13eadd6f2376240a10b4db62ed4cd27a450bb610dfb3\"" Mar 13 00:42:06.608517 containerd[1987]: time="2026-03-13T00:42:06.608412333Z" level=info msg="StartContainer for \"ea1dbb0b86de2cad502a13eadd6f2376240a10b4db62ed4cd27a450bb610dfb3\"" Mar 13 00:42:06.610224 containerd[1987]: time="2026-03-13T00:42:06.610180586Z" level=info msg="connecting to shim ea1dbb0b86de2cad502a13eadd6f2376240a10b4db62ed4cd27a450bb610dfb3" address="unix:///run/containerd/s/34bc2bfc0b3574e761e3d8776ba197452b3b0f21a846d9296d0d034e3a55fff5" protocol=ttrpc version=3 Mar 13 00:42:06.655599 systemd[1]: Started cri-containerd-ea1dbb0b86de2cad502a13eadd6f2376240a10b4db62ed4cd27a450bb610dfb3.scope - libcontainer container ea1dbb0b86de2cad502a13eadd6f2376240a10b4db62ed4cd27a450bb610dfb3. 
Mar 13 00:42:06.718381 containerd[1987]: time="2026-03-13T00:42:06.718309098Z" level=info msg="StartContainer for \"ea1dbb0b86de2cad502a13eadd6f2376240a10b4db62ed4cd27a450bb610dfb3\" returns successfully" Mar 13 00:42:07.930195 ntpd[2236]: Listen normally on 9 cali273fd46e37e [fe80::ecee:eeff:feee:eeee%8]:123 Mar 13 00:42:07.930254 ntpd[2236]: Listen normally on 10 cali1d1ce3421a2 [fe80::ecee:eeff:feee:eeee%9]:123 Mar 13 00:42:07.930751 ntpd[2236]: 13 Mar 00:42:07 ntpd[2236]: Listen normally on 9 cali273fd46e37e [fe80::ecee:eeff:feee:eeee%8]:123 Mar 13 00:42:07.930751 ntpd[2236]: 13 Mar 00:42:07 ntpd[2236]: Listen normally on 10 cali1d1ce3421a2 [fe80::ecee:eeff:feee:eeee%9]:123 Mar 13 00:42:07.930751 ntpd[2236]: 13 Mar 00:42:07 ntpd[2236]: Listen normally on 11 calicb9a0bdd0a5 [fe80::ecee:eeff:feee:eeee%10]:123 Mar 13 00:42:07.930751 ntpd[2236]: 13 Mar 00:42:07 ntpd[2236]: Listen normally on 12 cali9bfcaa0b1f0 [fe80::ecee:eeff:feee:eeee%11]:123 Mar 13 00:42:07.930280 ntpd[2236]: Listen normally on 11 calicb9a0bdd0a5 [fe80::ecee:eeff:feee:eeee%10]:123 Mar 13 00:42:07.930970 ntpd[2236]: 13 Mar 00:42:07 ntpd[2236]: Listen normally on 13 cali952a4952cde [fe80::ecee:eeff:feee:eeee%12]:123 Mar 13 00:42:07.930970 ntpd[2236]: 13 Mar 00:42:07 ntpd[2236]: Listen normally on 14 cali0e36613158f [fe80::ecee:eeff:feee:eeee%13]:123 Mar 13 00:42:07.930970 ntpd[2236]: 13 Mar 00:42:07 ntpd[2236]: Listen normally on 15 calif52b5f00261 [fe80::ecee:eeff:feee:eeee%14]:123 Mar 13 00:42:07.930307 ntpd[2236]: Listen normally on 12 cali9bfcaa0b1f0 [fe80::ecee:eeff:feee:eeee%11]:123 Mar 13 00:42:07.930332 ntpd[2236]: Listen normally on 13 cali952a4952cde [fe80::ecee:eeff:feee:eeee%12]:123 Mar 13 00:42:07.930812 ntpd[2236]: Listen normally on 14 cali0e36613158f [fe80::ecee:eeff:feee:eeee%13]:123 Mar 13 00:42:07.930846 ntpd[2236]: Listen normally on 15 calif52b5f00261 [fe80::ecee:eeff:feee:eeee%14]:123 Mar 13 00:42:07.944243 containerd[1987]: time="2026-03-13T00:42:07.944153713Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:42:07.945448 containerd[1987]: time="2026-03-13T00:42:07.945402616Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 13 00:42:07.946600 containerd[1987]: time="2026-03-13T00:42:07.946538316Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:42:07.950652 containerd[1987]: time="2026-03-13T00:42:07.950587671Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:42:07.951501 containerd[1987]: time="2026-03-13T00:42:07.951365639Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.397918312s" Mar 13 00:42:07.951501 containerd[1987]: time="2026-03-13T00:42:07.951404407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 13 00:42:07.953334 containerd[1987]: time="2026-03-13T00:42:07.953306734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 13 00:42:07.958958 containerd[1987]: time="2026-03-13T00:42:07.958100091Z" level=info msg="CreateContainer within sandbox \"575da387dc7d2fc9025e67e5c1402c28c966dad63d6809708be054ce7eea671f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 13 00:42:07.981712 containerd[1987]: 
time="2026-03-13T00:42:07.981667127Z" level=info msg="Container 8a9def5296c47714de3db7623cf6525831b177186504b81fd25e9b2fff99bf67: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:42:07.995639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1143926178.mount: Deactivated successfully. Mar 13 00:42:08.001371 containerd[1987]: time="2026-03-13T00:42:08.001306111Z" level=info msg="CreateContainer within sandbox \"575da387dc7d2fc9025e67e5c1402c28c966dad63d6809708be054ce7eea671f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8a9def5296c47714de3db7623cf6525831b177186504b81fd25e9b2fff99bf67\"" Mar 13 00:42:08.004009 containerd[1987]: time="2026-03-13T00:42:08.003959859Z" level=info msg="StartContainer for \"8a9def5296c47714de3db7623cf6525831b177186504b81fd25e9b2fff99bf67\"" Mar 13 00:42:08.009738 containerd[1987]: time="2026-03-13T00:42:08.009683318Z" level=info msg="connecting to shim 8a9def5296c47714de3db7623cf6525831b177186504b81fd25e9b2fff99bf67" address="unix:///run/containerd/s/a18df1a94c373076f12a67cc8ced6ed8d6f5343407357d2d443efda985016b19" protocol=ttrpc version=3 Mar 13 00:42:08.066519 systemd[1]: Started cri-containerd-8a9def5296c47714de3db7623cf6525831b177186504b81fd25e9b2fff99bf67.scope - libcontainer container 8a9def5296c47714de3db7623cf6525831b177186504b81fd25e9b2fff99bf67. 
Mar 13 00:42:08.176565 containerd[1987]: time="2026-03-13T00:42:08.176522906Z" level=info msg="StartContainer for \"8a9def5296c47714de3db7623cf6525831b177186504b81fd25e9b2fff99bf67\" returns successfully" Mar 13 00:42:08.211322 kubelet[3329]: I0313 00:42:08.210871 3329 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-7dcd7cbd55-7k8xf" podStartSLOduration=38.951637203 podStartE2EDuration="45.21085051s" podCreationTimestamp="2026-03-13 00:41:23 +0000 UTC" firstStartedPulling="2026-03-13 00:42:00.292859721 +0000 UTC m=+54.829105192" lastFinishedPulling="2026-03-13 00:42:06.552073016 +0000 UTC m=+61.088318499" observedRunningTime="2026-03-13 00:42:07.389645257 +0000 UTC m=+61.925890747" watchObservedRunningTime="2026-03-13 00:42:08.21085051 +0000 UTC m=+62.747096002" Mar 13 00:42:09.434074 systemd[1]: Started sshd@8-172.31.30.203:22-20.161.92.111:37032.service - OpenSSH per-connection server daemon (20.161.92.111:37032). Mar 13 00:42:09.955062 sshd[5874]: Accepted publickey for core from 20.161.92.111 port 37032 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs Mar 13 00:42:09.960149 sshd-session[5874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:42:09.971039 systemd-logind[1966]: New session 9 of user core. Mar 13 00:42:09.976563 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 13 00:42:10.912659 sshd[5889]: Connection closed by 20.161.92.111 port 37032 Mar 13 00:42:10.914648 sshd-session[5874]: pam_unix(sshd:session): session closed for user core Mar 13 00:42:10.919259 systemd[1]: sshd@8-172.31.30.203:22-20.161.92.111:37032.service: Deactivated successfully. Mar 13 00:42:10.923493 systemd[1]: session-9.scope: Deactivated successfully. Mar 13 00:42:10.925641 systemd-logind[1966]: Session 9 logged out. Waiting for processes to exit. Mar 13 00:42:10.928238 systemd-logind[1966]: Removed session 9. 
Mar 13 00:42:13.196891 containerd[1987]: time="2026-03-13T00:42:13.196830615Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:42:13.200033 containerd[1987]: time="2026-03-13T00:42:13.199853657Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 13 00:42:13.255360 containerd[1987]: time="2026-03-13T00:42:13.254019357Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:42:13.284852 containerd[1987]: time="2026-03-13T00:42:13.284785264Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:42:13.293527 containerd[1987]: time="2026-03-13T00:42:13.293457496Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 5.332471834s" Mar 13 00:42:13.293527 containerd[1987]: time="2026-03-13T00:42:13.293522045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 13 00:42:13.310254 containerd[1987]: time="2026-03-13T00:42:13.309868275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 13 00:42:13.388424 containerd[1987]: time="2026-03-13T00:42:13.387868710Z" level=info msg="CreateContainer within sandbox 
\"4890da6492c71792fadc5364f56d2ed37b556b38cb3fcf9361c8008e985617dd\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 13 00:42:13.404768 containerd[1987]: time="2026-03-13T00:42:13.404722511Z" level=info msg="Container 51947122a6ba8d6caddaf682b40b52e8fa4f74243bd394bfd796ff1d13e714a4: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:42:13.418541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount397162420.mount: Deactivated successfully. Mar 13 00:42:13.462039 containerd[1987]: time="2026-03-13T00:42:13.461909637Z" level=info msg="CreateContainer within sandbox \"4890da6492c71792fadc5364f56d2ed37b556b38cb3fcf9361c8008e985617dd\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"51947122a6ba8d6caddaf682b40b52e8fa4f74243bd394bfd796ff1d13e714a4\"" Mar 13 00:42:13.464807 containerd[1987]: time="2026-03-13T00:42:13.464512207Z" level=info msg="StartContainer for \"51947122a6ba8d6caddaf682b40b52e8fa4f74243bd394bfd796ff1d13e714a4\"" Mar 13 00:42:13.467263 containerd[1987]: time="2026-03-13T00:42:13.467223349Z" level=info msg="connecting to shim 51947122a6ba8d6caddaf682b40b52e8fa4f74243bd394bfd796ff1d13e714a4" address="unix:///run/containerd/s/946a3671b7496fa8f7455ff16d6703c7bda4ea8c4af1c0f7861f4996a4287c65" protocol=ttrpc version=3 Mar 13 00:42:13.550696 systemd[1]: Started cri-containerd-51947122a6ba8d6caddaf682b40b52e8fa4f74243bd394bfd796ff1d13e714a4.scope - libcontainer container 51947122a6ba8d6caddaf682b40b52e8fa4f74243bd394bfd796ff1d13e714a4. 
Mar 13 00:42:13.682588 containerd[1987]: time="2026-03-13T00:42:13.682541390Z" level=info msg="StartContainer for \"51947122a6ba8d6caddaf682b40b52e8fa4f74243bd394bfd796ff1d13e714a4\" returns successfully" Mar 13 00:42:13.687897 containerd[1987]: time="2026-03-13T00:42:13.687853583Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:42:13.690030 containerd[1987]: time="2026-03-13T00:42:13.689921637Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 13 00:42:13.694378 containerd[1987]: time="2026-03-13T00:42:13.694312294Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 384.399559ms" Mar 13 00:42:13.694378 containerd[1987]: time="2026-03-13T00:42:13.694380274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 13 00:42:13.696284 containerd[1987]: time="2026-03-13T00:42:13.696252300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 13 00:42:13.756386 containerd[1987]: time="2026-03-13T00:42:13.755849032Z" level=info msg="CreateContainer within sandbox \"d64d93b5954a13653ac65f74eeb0d147391288910bbfc7a8f99c3e74cc6d25ff\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 13 00:42:13.853865 containerd[1987]: time="2026-03-13T00:42:13.853824334Z" level=info msg="Container 08980866edb693c09bd507a7c44710d068fb29c050a41704ee9f833a73985dfd: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:42:13.885815 containerd[1987]: 
time="2026-03-13T00:42:13.885770532Z" level=info msg="CreateContainer within sandbox \"d64d93b5954a13653ac65f74eeb0d147391288910bbfc7a8f99c3e74cc6d25ff\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"08980866edb693c09bd507a7c44710d068fb29c050a41704ee9f833a73985dfd\"" Mar 13 00:42:13.887068 containerd[1987]: time="2026-03-13T00:42:13.886664065Z" level=info msg="StartContainer for \"08980866edb693c09bd507a7c44710d068fb29c050a41704ee9f833a73985dfd\"" Mar 13 00:42:13.888705 containerd[1987]: time="2026-03-13T00:42:13.888626665Z" level=info msg="connecting to shim 08980866edb693c09bd507a7c44710d068fb29c050a41704ee9f833a73985dfd" address="unix:///run/containerd/s/7fe4dda779f9cfffd9a7ef19b995fd7e033dbdd7199aded0f72efb6e00472d3c" protocol=ttrpc version=3 Mar 13 00:42:13.936571 systemd[1]: Started cri-containerd-08980866edb693c09bd507a7c44710d068fb29c050a41704ee9f833a73985dfd.scope - libcontainer container 08980866edb693c09bd507a7c44710d068fb29c050a41704ee9f833a73985dfd. 
Mar 13 00:42:14.033436 containerd[1987]: time="2026-03-13T00:42:14.033095148Z" level=info msg="StartContainer for \"08980866edb693c09bd507a7c44710d068fb29c050a41704ee9f833a73985dfd\" returns successfully" Mar 13 00:42:14.594572 kubelet[3329]: I0313 00:42:14.594303 3329 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-59466cf6fd-dg9jn" podStartSLOduration=40.632109522 podStartE2EDuration="49.542389993s" podCreationTimestamp="2026-03-13 00:41:25 +0000 UTC" firstStartedPulling="2026-03-13 00:42:04.395499792 +0000 UTC m=+58.931846444" lastFinishedPulling="2026-03-13 00:42:13.30588143 +0000 UTC m=+67.842126915" observedRunningTime="2026-03-13 00:42:14.499039798 +0000 UTC m=+69.035285298" watchObservedRunningTime="2026-03-13 00:42:14.542389993 +0000 UTC m=+69.078635487" Mar 13 00:42:14.597972 kubelet[3329]: I0313 00:42:14.594746 3329 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-7dcd7cbd55-8kvp5" podStartSLOduration=42.581896887 podStartE2EDuration="51.594731225s" podCreationTimestamp="2026-03-13 00:41:23 +0000 UTC" firstStartedPulling="2026-03-13 00:42:04.682719618 +0000 UTC m=+59.218965111" lastFinishedPulling="2026-03-13 00:42:13.695553965 +0000 UTC m=+68.231799449" observedRunningTime="2026-03-13 00:42:14.538226823 +0000 UTC m=+69.074472317" watchObservedRunningTime="2026-03-13 00:42:14.594731225 +0000 UTC m=+69.130976719" Mar 13 00:42:16.006333 systemd[1]: Started sshd@9-172.31.30.203:22-20.161.92.111:42484.service - OpenSSH per-connection server daemon (20.161.92.111:42484). Mar 13 00:42:16.592690 sshd[6026]: Accepted publickey for core from 20.161.92.111 port 42484 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs Mar 13 00:42:16.596542 sshd-session[6026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:42:16.604713 systemd-logind[1966]: New session 10 of user core. 
Mar 13 00:42:16.612606 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 13 00:42:17.587334 sshd[6033]: Connection closed by 20.161.92.111 port 42484
Mar 13 00:42:17.588556 sshd-session[6026]: pam_unix(sshd:session): session closed for user core
Mar 13 00:42:17.610605 systemd[1]: sshd@9-172.31.30.203:22-20.161.92.111:42484.service: Deactivated successfully.
Mar 13 00:42:17.617622 systemd[1]: session-10.scope: Deactivated successfully.
Mar 13 00:42:17.622591 systemd-logind[1966]: Session 10 logged out. Waiting for processes to exit.
Mar 13 00:42:17.627752 systemd-logind[1966]: Removed session 10.
Mar 13 00:42:17.687462 systemd[1]: Started sshd@10-172.31.30.203:22-20.161.92.111:42490.service - OpenSSH per-connection server daemon (20.161.92.111:42490).
Mar 13 00:42:18.159308 sshd[6048]: Accepted publickey for core from 20.161.92.111 port 42490 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs
Mar 13 00:42:18.160992 sshd-session[6048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:42:18.169872 systemd-logind[1966]: New session 11 of user core.
Mar 13 00:42:18.179639 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 13 00:42:18.881387 sshd[6068]: Connection closed by 20.161.92.111 port 42490
Mar 13 00:42:18.883887 sshd-session[6048]: pam_unix(sshd:session): session closed for user core
Mar 13 00:42:18.893142 systemd-logind[1966]: Session 11 logged out. Waiting for processes to exit.
Mar 13 00:42:18.893890 systemd[1]: sshd@10-172.31.30.203:22-20.161.92.111:42490.service: Deactivated successfully.
Mar 13 00:42:18.900059 systemd[1]: session-11.scope: Deactivated successfully.
Mar 13 00:42:18.908556 systemd-logind[1966]: Removed session 11.
Mar 13 00:42:18.972933 systemd[1]: Started sshd@11-172.31.30.203:22-20.161.92.111:42494.service - OpenSSH per-connection server daemon (20.161.92.111:42494).
Mar 13 00:42:19.441059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount305606904.mount: Deactivated successfully.
Mar 13 00:42:19.500757 sshd[6084]: Accepted publickey for core from 20.161.92.111 port 42494 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs
Mar 13 00:42:19.512524 sshd-session[6084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:42:19.522678 systemd-logind[1966]: New session 12 of user core.
Mar 13 00:42:19.531565 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 13 00:42:19.990467 sshd[6093]: Connection closed by 20.161.92.111 port 42494
Mar 13 00:42:19.988479 sshd-session[6084]: pam_unix(sshd:session): session closed for user core
Mar 13 00:42:19.997242 systemd[1]: sshd@11-172.31.30.203:22-20.161.92.111:42494.service: Deactivated successfully.
Mar 13 00:42:20.003284 systemd[1]: session-12.scope: Deactivated successfully.
Mar 13 00:42:20.011220 systemd-logind[1966]: Session 12 logged out. Waiting for processes to exit.
Mar 13 00:42:20.014091 systemd-logind[1966]: Removed session 12.
Mar 13 00:42:20.655857 containerd[1987]: time="2026-03-13T00:42:20.655793113Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:42:20.664364 containerd[1987]: time="2026-03-13T00:42:20.658241245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386"
Mar 13 00:42:20.848661 containerd[1987]: time="2026-03-13T00:42:20.848561220Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:42:20.859032 containerd[1987]: time="2026-03-13T00:42:20.858975937Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:42:20.864985 containerd[1987]: time="2026-03-13T00:42:20.864933620Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 7.165310858s"
Mar 13 00:42:20.864985 containerd[1987]: time="2026-03-13T00:42:20.864989820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\""
Mar 13 00:42:21.032755 containerd[1987]: time="2026-03-13T00:42:21.032047860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\""
Mar 13 00:42:21.319910 containerd[1987]: time="2026-03-13T00:42:21.319421530Z" level=info msg="CreateContainer within sandbox \"6d82c14f6d774138ad6153a293286fa1af689816986978eab98763eeec587437\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Mar 13 00:42:21.362141 containerd[1987]: time="2026-03-13T00:42:21.361017372Z" level=info msg="Container 29f2f7f18b8d242bb4bdbc3f609411265899b13b9b4b8d996c9977342ba59aff: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:42:21.408780 containerd[1987]: time="2026-03-13T00:42:21.408725782Z" level=info msg="CreateContainer within sandbox \"6d82c14f6d774138ad6153a293286fa1af689816986978eab98763eeec587437\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"29f2f7f18b8d242bb4bdbc3f609411265899b13b9b4b8d996c9977342ba59aff\""
Mar 13 00:42:21.418817 containerd[1987]: time="2026-03-13T00:42:21.418762499Z" level=info msg="StartContainer for \"29f2f7f18b8d242bb4bdbc3f609411265899b13b9b4b8d996c9977342ba59aff\""
Mar 13 00:42:21.432695 containerd[1987]: time="2026-03-13T00:42:21.432598790Z" level=info msg="connecting to shim 29f2f7f18b8d242bb4bdbc3f609411265899b13b9b4b8d996c9977342ba59aff" address="unix:///run/containerd/s/7b155aac3007e4a90f63a461ca0c0f3acf8fc5c1ea120d5d01dc1e085b5f3985" protocol=ttrpc version=3
Mar 13 00:42:21.591611 systemd[1]: Started cri-containerd-29f2f7f18b8d242bb4bdbc3f609411265899b13b9b4b8d996c9977342ba59aff.scope - libcontainer container 29f2f7f18b8d242bb4bdbc3f609411265899b13b9b4b8d996c9977342ba59aff.
Mar 13 00:42:21.740447 containerd[1987]: time="2026-03-13T00:42:21.740412622Z" level=info msg="StartContainer for \"29f2f7f18b8d242bb4bdbc3f609411265899b13b9b4b8d996c9977342ba59aff\" returns successfully"
Mar 13 00:42:22.669766 containerd[1987]: time="2026-03-13T00:42:22.669707834Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:42:22.674402 containerd[1987]: time="2026-03-13T00:42:22.674153785Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317"
Mar 13 00:42:22.674402 containerd[1987]: time="2026-03-13T00:42:22.674242052Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:42:22.678632 containerd[1987]: time="2026-03-13T00:42:22.678562231Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:42:22.679439 containerd[1987]: time="2026-03-13T00:42:22.679385435Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.647292273s"
Mar 13 00:42:22.679439 containerd[1987]: time="2026-03-13T00:42:22.679430305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\""
Mar 13 00:42:22.750733 containerd[1987]: time="2026-03-13T00:42:22.750679992Z" level=info msg="CreateContainer within sandbox \"575da387dc7d2fc9025e67e5c1402c28c966dad63d6809708be054ce7eea671f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Mar 13 00:42:22.805995 containerd[1987]: time="2026-03-13T00:42:22.805946480Z" level=info msg="Container 67cebb52a46805fec94e5f15c857f175f12e0459c5e2f2126dbeefd89961f750: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:42:22.835096 containerd[1987]: time="2026-03-13T00:42:22.835002302Z" level=info msg="CreateContainer within sandbox \"575da387dc7d2fc9025e67e5c1402c28c966dad63d6809708be054ce7eea671f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"67cebb52a46805fec94e5f15c857f175f12e0459c5e2f2126dbeefd89961f750\""
Mar 13 00:42:22.857782 containerd[1987]: time="2026-03-13T00:42:22.857656063Z" level=info msg="StartContainer for \"67cebb52a46805fec94e5f15c857f175f12e0459c5e2f2126dbeefd89961f750\""
Mar 13 00:42:22.861034 containerd[1987]: time="2026-03-13T00:42:22.860959176Z" level=info msg="connecting to shim 67cebb52a46805fec94e5f15c857f175f12e0459c5e2f2126dbeefd89961f750" address="unix:///run/containerd/s/a18df1a94c373076f12a67cc8ced6ed8d6f5343407357d2d443efda985016b19" protocol=ttrpc version=3
Mar 13 00:42:22.903651 systemd[1]: Started cri-containerd-67cebb52a46805fec94e5f15c857f175f12e0459c5e2f2126dbeefd89961f750.scope - libcontainer container 67cebb52a46805fec94e5f15c857f175f12e0459c5e2f2126dbeefd89961f750.
Mar 13 00:42:23.095382 containerd[1987]: time="2026-03-13T00:42:23.094683665Z" level=info msg="StartContainer for \"67cebb52a46805fec94e5f15c857f175f12e0459c5e2f2126dbeefd89961f750\" returns successfully"
Mar 13 00:42:23.252867 kubelet[3329]: I0313 00:42:23.214359 3329 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-52fsp" podStartSLOduration=44.090316593 podStartE2EDuration="1m0.188053556s" podCreationTimestamp="2026-03-13 00:41:23 +0000 UTC" firstStartedPulling="2026-03-13 00:42:04.919188 +0000 UTC m=+59.455433484" lastFinishedPulling="2026-03-13 00:42:21.016924973 +0000 UTC m=+75.553170447" observedRunningTime="2026-03-13 00:42:23.137059385 +0000 UTC m=+77.673304874" watchObservedRunningTime="2026-03-13 00:42:23.188053556 +0000 UTC m=+77.724299049"
Mar 13 00:42:23.736697 kubelet[3329]: I0313 00:42:23.736592 3329 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-s9l6b" podStartSLOduration=37.14624587 podStartE2EDuration="58.736571899s" podCreationTimestamp="2026-03-13 00:41:25 +0000 UTC" firstStartedPulling="2026-03-13 00:42:01.094275239 +0000 UTC m=+55.630520714" lastFinishedPulling="2026-03-13 00:42:22.684601259 +0000 UTC m=+77.220846743" observedRunningTime="2026-03-13 00:42:23.735365007 +0000 UTC m=+78.271610500" watchObservedRunningTime="2026-03-13 00:42:23.736571899 +0000 UTC m=+78.272817395"
Mar 13 00:42:23.982001 kubelet[3329]: I0313 00:42:23.976582 3329 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Mar 13 00:42:23.996754 kubelet[3329]: I0313 00:42:23.996630 3329 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Mar 13 00:42:25.080592 systemd[1]: Started sshd@12-172.31.30.203:22-20.161.92.111:38358.service - OpenSSH per-connection server daemon (20.161.92.111:38358).
Mar 13 00:42:25.639371 sshd[6252]: Accepted publickey for core from 20.161.92.111 port 38358 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs
Mar 13 00:42:25.643712 sshd-session[6252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:42:25.653287 systemd-logind[1966]: New session 13 of user core.
Mar 13 00:42:25.659552 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 13 00:42:26.604048 sshd[6259]: Connection closed by 20.161.92.111 port 38358
Mar 13 00:42:26.605078 sshd-session[6252]: pam_unix(sshd:session): session closed for user core
Mar 13 00:42:26.609627 systemd[1]: sshd@12-172.31.30.203:22-20.161.92.111:38358.service: Deactivated successfully.
Mar 13 00:42:26.612414 systemd[1]: session-13.scope: Deactivated successfully.
Mar 13 00:42:26.614068 systemd-logind[1966]: Session 13 logged out. Waiting for processes to exit.
Mar 13 00:42:26.617266 systemd-logind[1966]: Removed session 13.
Mar 13 00:42:26.705112 systemd[1]: Started sshd@13-172.31.30.203:22-20.161.92.111:38360.service - OpenSSH per-connection server daemon (20.161.92.111:38360).
Mar 13 00:42:27.178394 sshd[6272]: Accepted publickey for core from 20.161.92.111 port 38360 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs
Mar 13 00:42:27.179369 sshd-session[6272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:42:27.184517 systemd-logind[1966]: New session 14 of user core.
Mar 13 00:42:27.196617 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 13 00:42:28.722630 sshd[6275]: Connection closed by 20.161.92.111 port 38360
Mar 13 00:42:28.723925 sshd-session[6272]: pam_unix(sshd:session): session closed for user core
Mar 13 00:42:28.730210 systemd[1]: sshd@13-172.31.30.203:22-20.161.92.111:38360.service: Deactivated successfully.
Mar 13 00:42:28.730217 systemd-logind[1966]: Session 14 logged out. Waiting for processes to exit.
Mar 13 00:42:28.733309 systemd[1]: session-14.scope: Deactivated successfully.
Mar 13 00:42:28.736045 systemd-logind[1966]: Removed session 14.
Mar 13 00:42:28.829569 systemd[1]: Started sshd@14-172.31.30.203:22-20.161.92.111:38362.service - OpenSSH per-connection server daemon (20.161.92.111:38362).
Mar 13 00:42:29.350310 sshd[6285]: Accepted publickey for core from 20.161.92.111 port 38362 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs
Mar 13 00:42:29.352027 sshd-session[6285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:42:29.358057 systemd-logind[1966]: New session 15 of user core.
Mar 13 00:42:29.371611 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 13 00:42:30.901798 sshd[6288]: Connection closed by 20.161.92.111 port 38362
Mar 13 00:42:30.903051 sshd-session[6285]: pam_unix(sshd:session): session closed for user core
Mar 13 00:42:30.909687 systemd-logind[1966]: Session 15 logged out. Waiting for processes to exit.
Mar 13 00:42:30.911305 systemd[1]: sshd@14-172.31.30.203:22-20.161.92.111:38362.service: Deactivated successfully.
Mar 13 00:42:30.914255 systemd[1]: session-15.scope: Deactivated successfully.
Mar 13 00:42:30.917102 systemd-logind[1966]: Removed session 15.
Mar 13 00:42:30.999086 systemd[1]: Started sshd@15-172.31.30.203:22-20.161.92.111:51698.service - OpenSSH per-connection server daemon (20.161.92.111:51698).
Mar 13 00:42:31.522226 sshd[6311]: Accepted publickey for core from 20.161.92.111 port 51698 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs
Mar 13 00:42:31.523767 sshd-session[6311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:42:31.538663 systemd-logind[1966]: New session 16 of user core.
Mar 13 00:42:31.544578 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 13 00:42:32.659282 sshd[6315]: Connection closed by 20.161.92.111 port 51698
Mar 13 00:42:32.665521 sshd-session[6311]: pam_unix(sshd:session): session closed for user core
Mar 13 00:42:32.680749 systemd[1]: sshd@15-172.31.30.203:22-20.161.92.111:51698.service: Deactivated successfully.
Mar 13 00:42:32.683650 systemd[1]: session-16.scope: Deactivated successfully.
Mar 13 00:42:32.685707 systemd-logind[1966]: Session 16 logged out. Waiting for processes to exit.
Mar 13 00:42:32.687545 systemd-logind[1966]: Removed session 16.
Mar 13 00:42:32.742148 systemd[1]: Started sshd@16-172.31.30.203:22-20.161.92.111:51708.service - OpenSSH per-connection server daemon (20.161.92.111:51708).
Mar 13 00:42:33.264791 sshd[6327]: Accepted publickey for core from 20.161.92.111 port 51708 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs
Mar 13 00:42:33.266859 sshd-session[6327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:42:33.273444 systemd-logind[1966]: New session 17 of user core.
Mar 13 00:42:33.279720 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 13 00:42:33.650446 sshd[6330]: Connection closed by 20.161.92.111 port 51708
Mar 13 00:42:33.652485 sshd-session[6327]: pam_unix(sshd:session): session closed for user core
Mar 13 00:42:33.656792 systemd[1]: sshd@16-172.31.30.203:22-20.161.92.111:51708.service: Deactivated successfully.
Mar 13 00:42:33.659801 systemd[1]: session-17.scope: Deactivated successfully.
Mar 13 00:42:33.661322 systemd-logind[1966]: Session 17 logged out. Waiting for processes to exit.
Mar 13 00:42:33.663946 systemd-logind[1966]: Removed session 17.
Mar 13 00:42:38.753800 systemd[1]: Started sshd@17-172.31.30.203:22-20.161.92.111:51710.service - OpenSSH per-connection server daemon (20.161.92.111:51710).
Mar 13 00:42:39.277505 sshd[6356]: Accepted publickey for core from 20.161.92.111 port 51710 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs
Mar 13 00:42:39.278918 sshd-session[6356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:42:39.287375 systemd-logind[1966]: New session 18 of user core.
Mar 13 00:42:39.290693 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 13 00:42:39.986826 sshd[6359]: Connection closed by 20.161.92.111 port 51710
Mar 13 00:42:39.988822 sshd-session[6356]: pam_unix(sshd:session): session closed for user core
Mar 13 00:42:39.993321 systemd[1]: sshd@17-172.31.30.203:22-20.161.92.111:51710.service: Deactivated successfully.
Mar 13 00:42:39.995965 systemd[1]: session-18.scope: Deactivated successfully.
Mar 13 00:42:39.997774 systemd-logind[1966]: Session 18 logged out. Waiting for processes to exit.
Mar 13 00:42:39.999774 systemd-logind[1966]: Removed session 18.
Mar 13 00:42:45.104498 systemd[1]: Started sshd@18-172.31.30.203:22-20.161.92.111:44804.service - OpenSSH per-connection server daemon (20.161.92.111:44804).
Mar 13 00:42:45.666447 sshd[6402]: Accepted publickey for core from 20.161.92.111 port 44804 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs
Mar 13 00:42:45.668721 sshd-session[6402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:42:45.676956 systemd-logind[1966]: New session 19 of user core.
Mar 13 00:42:45.689796 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 13 00:42:46.646332 sshd[6405]: Connection closed by 20.161.92.111 port 44804
Mar 13 00:42:46.648457 sshd-session[6402]: pam_unix(sshd:session): session closed for user core
Mar 13 00:42:46.653217 systemd-logind[1966]: Session 19 logged out. Waiting for processes to exit.
Mar 13 00:42:46.654430 systemd[1]: sshd@18-172.31.30.203:22-20.161.92.111:44804.service: Deactivated successfully.
Mar 13 00:42:46.657000 systemd[1]: session-19.scope: Deactivated successfully.
Mar 13 00:42:46.659235 systemd-logind[1966]: Removed session 19.
Mar 13 00:42:51.737317 systemd[1]: Started sshd@19-172.31.30.203:22-20.161.92.111:38252.service - OpenSSH per-connection server daemon (20.161.92.111:38252).
Mar 13 00:42:52.304288 sshd[6440]: Accepted publickey for core from 20.161.92.111 port 38252 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs
Mar 13 00:42:52.308764 sshd-session[6440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:42:52.315924 systemd-logind[1966]: New session 20 of user core.
Mar 13 00:42:52.323764 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 13 00:42:53.319480 sshd[6443]: Connection closed by 20.161.92.111 port 38252
Mar 13 00:42:53.320560 sshd-session[6440]: pam_unix(sshd:session): session closed for user core
Mar 13 00:42:53.327476 systemd-logind[1966]: Session 20 logged out. Waiting for processes to exit.
Mar 13 00:42:53.328329 systemd[1]: sshd@19-172.31.30.203:22-20.161.92.111:38252.service: Deactivated successfully.
Mar 13 00:42:53.335267 systemd[1]: session-20.scope: Deactivated successfully.
Mar 13 00:42:53.342111 systemd-logind[1966]: Removed session 20.
Mar 13 00:42:58.417797 systemd[1]: Started sshd@20-172.31.30.203:22-20.161.92.111:38264.service - OpenSSH per-connection server daemon (20.161.92.111:38264).
Mar 13 00:42:58.869036 sshd[6482]: Accepted publickey for core from 20.161.92.111 port 38264 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs
Mar 13 00:42:58.870573 sshd-session[6482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:42:58.876581 systemd-logind[1966]: New session 21 of user core.
Mar 13 00:42:58.881751 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 13 00:42:59.238660 sshd[6485]: Connection closed by 20.161.92.111 port 38264
Mar 13 00:42:59.240107 sshd-session[6482]: pam_unix(sshd:session): session closed for user core
Mar 13 00:42:59.244507 systemd-logind[1966]: Session 21 logged out. Waiting for processes to exit.
Mar 13 00:42:59.245472 systemd[1]: sshd@20-172.31.30.203:22-20.161.92.111:38264.service: Deactivated successfully.
Mar 13 00:42:59.248016 systemd[1]: session-21.scope: Deactivated successfully.
Mar 13 00:42:59.250681 systemd-logind[1966]: Removed session 21.
Mar 13 00:43:04.328861 systemd[1]: Started sshd@21-172.31.30.203:22-20.161.92.111:40206.service - OpenSSH per-connection server daemon (20.161.92.111:40206).
Mar 13 00:43:04.794288 sshd[6519]: Accepted publickey for core from 20.161.92.111 port 40206 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs
Mar 13 00:43:04.796254 sshd-session[6519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:43:04.801410 systemd-logind[1966]: New session 22 of user core.
Mar 13 00:43:04.808594 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 13 00:43:05.417155 sshd[6522]: Connection closed by 20.161.92.111 port 40206
Mar 13 00:43:05.419566 sshd-session[6519]: pam_unix(sshd:session): session closed for user core
Mar 13 00:43:05.423893 systemd[1]: sshd@21-172.31.30.203:22-20.161.92.111:40206.service: Deactivated successfully.
Mar 13 00:43:05.426974 systemd[1]: session-22.scope: Deactivated successfully.
Mar 13 00:43:05.428105 systemd-logind[1966]: Session 22 logged out. Waiting for processes to exit.
Mar 13 00:43:05.430848 systemd-logind[1966]: Removed session 22.
Mar 13 00:43:19.751786 systemd[1]: cri-containerd-9ee6033a06226fd87ef0067b2040d126e9f1b3695e982e56a71acd703eee70b1.scope: Deactivated successfully.
Mar 13 00:43:19.752160 systemd[1]: cri-containerd-9ee6033a06226fd87ef0067b2040d126e9f1b3695e982e56a71acd703eee70b1.scope: Consumed 3.242s CPU time, 84.8M memory peak, 74.3M read from disk.
Mar 13 00:43:19.850009 containerd[1987]: time="2026-03-13T00:43:19.844376522Z" level=info msg="received container exit event container_id:\"9ee6033a06226fd87ef0067b2040d126e9f1b3695e982e56a71acd703eee70b1\" id:\"9ee6033a06226fd87ef0067b2040d126e9f1b3695e982e56a71acd703eee70b1\" pid:3165 exit_status:1 exited_at:{seconds:1773362599 nanos:797499073}"
Mar 13 00:43:19.963823 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ee6033a06226fd87ef0067b2040d126e9f1b3695e982e56a71acd703eee70b1-rootfs.mount: Deactivated successfully.
Mar 13 00:43:20.070516 kubelet[3329]: I0313 00:43:20.070054 3329 scope.go:122] "RemoveContainer" containerID="9ee6033a06226fd87ef0067b2040d126e9f1b3695e982e56a71acd703eee70b1"
Mar 13 00:43:20.189916 containerd[1987]: time="2026-03-13T00:43:20.189874107Z" level=info msg="CreateContainer within sandbox \"1bf2797b8ca12e566c50aa2b3d982d7cffb9f6929b86d13b3ae1ca9787d480aa\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 13 00:43:20.239606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount997138344.mount: Deactivated successfully.
Mar 13 00:43:20.252180 containerd[1987]: time="2026-03-13T00:43:20.252113912Z" level=info msg="Container 877bb26f942cb9401111bcb33c2800d85e01e6ccc0320bf3369c3411410abefa: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:43:20.271077 containerd[1987]: time="2026-03-13T00:43:20.271015689Z" level=info msg="CreateContainer within sandbox \"1bf2797b8ca12e566c50aa2b3d982d7cffb9f6929b86d13b3ae1ca9787d480aa\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"877bb26f942cb9401111bcb33c2800d85e01e6ccc0320bf3369c3411410abefa\""
Mar 13 00:43:20.273495 containerd[1987]: time="2026-03-13T00:43:20.273459895Z" level=info msg="StartContainer for \"877bb26f942cb9401111bcb33c2800d85e01e6ccc0320bf3369c3411410abefa\""
Mar 13 00:43:20.283664 containerd[1987]: time="2026-03-13T00:43:20.283591542Z" level=info msg="connecting to shim 877bb26f942cb9401111bcb33c2800d85e01e6ccc0320bf3369c3411410abefa" address="unix:///run/containerd/s/fd890ef73dc30ee0c35ecee6b34b52f8e2c8fbc00621a19b9ea2c9ed72cbe5a7" protocol=ttrpc version=3
Mar 13 00:43:20.311532 systemd[1]: Started cri-containerd-877bb26f942cb9401111bcb33c2800d85e01e6ccc0320bf3369c3411410abefa.scope - libcontainer container 877bb26f942cb9401111bcb33c2800d85e01e6ccc0320bf3369c3411410abefa.
Mar 13 00:43:20.437971 containerd[1987]: time="2026-03-13T00:43:20.437929951Z" level=info msg="StartContainer for \"877bb26f942cb9401111bcb33c2800d85e01e6ccc0320bf3369c3411410abefa\" returns successfully"
Mar 13 00:43:21.119489 systemd[1]: cri-containerd-3c0a24c994a18d82ee91f9f3566b3c739d97e93e36f2b9490d67e29bddfdf236.scope: Deactivated successfully.
Mar 13 00:43:21.119834 systemd[1]: cri-containerd-3c0a24c994a18d82ee91f9f3566b3c739d97e93e36f2b9490d67e29bddfdf236.scope: Consumed 8.511s CPU time, 128.3M memory peak, 53.1M read from disk.
Mar 13 00:43:21.124667 containerd[1987]: time="2026-03-13T00:43:21.124618077Z" level=info msg="received container exit event container_id:\"3c0a24c994a18d82ee91f9f3566b3c739d97e93e36f2b9490d67e29bddfdf236\" id:\"3c0a24c994a18d82ee91f9f3566b3c739d97e93e36f2b9490d67e29bddfdf236\" pid:3658 exit_status:1 exited_at:{seconds:1773362601 nanos:124177475}"
Mar 13 00:43:21.152147 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c0a24c994a18d82ee91f9f3566b3c739d97e93e36f2b9490d67e29bddfdf236-rootfs.mount: Deactivated successfully.
Mar 13 00:43:22.060954 kubelet[3329]: I0313 00:43:22.060911 3329 scope.go:122] "RemoveContainer" containerID="3c0a24c994a18d82ee91f9f3566b3c739d97e93e36f2b9490d67e29bddfdf236"
Mar 13 00:43:22.068284 containerd[1987]: time="2026-03-13T00:43:22.068233962Z" level=info msg="CreateContainer within sandbox \"72edf82ed767efc1861a9698a1a41d0a2d6c43b96bd3ba63528f5edfe8e792e6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Mar 13 00:43:22.093275 containerd[1987]: time="2026-03-13T00:43:22.092498297Z" level=info msg="Container fe1e21fdc306f1f5799b162f8b31b8e4d420687a81ca4a506d8ed0c887d0a457: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:43:22.093794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2499713787.mount: Deactivated successfully.
Mar 13 00:43:22.109456 containerd[1987]: time="2026-03-13T00:43:22.109415647Z" level=info msg="CreateContainer within sandbox \"72edf82ed767efc1861a9698a1a41d0a2d6c43b96bd3ba63528f5edfe8e792e6\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"fe1e21fdc306f1f5799b162f8b31b8e4d420687a81ca4a506d8ed0c887d0a457\""
Mar 13 00:43:22.110237 containerd[1987]: time="2026-03-13T00:43:22.110203934Z" level=info msg="StartContainer for \"fe1e21fdc306f1f5799b162f8b31b8e4d420687a81ca4a506d8ed0c887d0a457\""
Mar 13 00:43:22.111111 containerd[1987]: time="2026-03-13T00:43:22.111073349Z" level=info msg="connecting to shim fe1e21fdc306f1f5799b162f8b31b8e4d420687a81ca4a506d8ed0c887d0a457" address="unix:///run/containerd/s/231e04bea9251a8928fb54f2b0b96a145a62e2cccfd7c0a4357f51b9cb89944e" protocol=ttrpc version=3
Mar 13 00:43:22.135563 systemd[1]: Started cri-containerd-fe1e21fdc306f1f5799b162f8b31b8e4d420687a81ca4a506d8ed0c887d0a457.scope - libcontainer container fe1e21fdc306f1f5799b162f8b31b8e4d420687a81ca4a506d8ed0c887d0a457.
Mar 13 00:43:22.175254 containerd[1987]: time="2026-03-13T00:43:22.175211381Z" level=info msg="StartContainer for \"fe1e21fdc306f1f5799b162f8b31b8e4d420687a81ca4a506d8ed0c887d0a457\" returns successfully"
Mar 13 00:43:24.942953 systemd[1]: cri-containerd-d9c524160d6cee8e9453f4d505f11ba3f82379651fd92821528cf81f090ec74e.scope: Deactivated successfully.
Mar 13 00:43:24.943297 systemd[1]: cri-containerd-d9c524160d6cee8e9453f4d505f11ba3f82379651fd92821528cf81f090ec74e.scope: Consumed 1.546s CPU time, 35.7M memory peak, 41.5M read from disk.
Mar 13 00:43:24.950552 containerd[1987]: time="2026-03-13T00:43:24.950488251Z" level=info msg="received container exit event container_id:\"d9c524160d6cee8e9453f4d505f11ba3f82379651fd92821528cf81f090ec74e\" id:\"d9c524160d6cee8e9453f4d505f11ba3f82379651fd92821528cf81f090ec74e\" pid:3155 exit_status:1 exited_at:{seconds:1773362604 nanos:949174454}"
Mar 13 00:43:24.983232 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9c524160d6cee8e9453f4d505f11ba3f82379651fd92821528cf81f090ec74e-rootfs.mount: Deactivated successfully.
Mar 13 00:43:25.084015 kubelet[3329]: I0313 00:43:25.083982 3329 scope.go:122] "RemoveContainer" containerID="d9c524160d6cee8e9453f4d505f11ba3f82379651fd92821528cf81f090ec74e"
Mar 13 00:43:25.099356 containerd[1987]: time="2026-03-13T00:43:25.099312243Z" level=info msg="CreateContainer within sandbox \"24784a8a4b4a760113fbc935b80a97120ec0695ecaf8abc03e8c74b9922191a3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Mar 13 00:43:25.114362 containerd[1987]: time="2026-03-13T00:43:25.111710365Z" level=info msg="Container 9b56b7ae048b34d8b3fbc10f25abbf972360081ece0b476191ef57a8a48dda0d: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:43:25.123611 containerd[1987]: time="2026-03-13T00:43:25.123487982Z" level=info msg="CreateContainer within sandbox \"24784a8a4b4a760113fbc935b80a97120ec0695ecaf8abc03e8c74b9922191a3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"9b56b7ae048b34d8b3fbc10f25abbf972360081ece0b476191ef57a8a48dda0d\""
Mar 13 00:43:25.124170 containerd[1987]: time="2026-03-13T00:43:25.124140243Z" level=info msg="StartContainer for \"9b56b7ae048b34d8b3fbc10f25abbf972360081ece0b476191ef57a8a48dda0d\""
Mar 13 00:43:25.125468 containerd[1987]: time="2026-03-13T00:43:25.125434865Z" level=info msg="connecting to shim 9b56b7ae048b34d8b3fbc10f25abbf972360081ece0b476191ef57a8a48dda0d" address="unix:///run/containerd/s/2c1d23340290841893d3903d4b87dd9b13d0b8ec10c309e66300616961b76ae7" protocol=ttrpc version=3
Mar 13 00:43:25.155607 systemd[1]: Started cri-containerd-9b56b7ae048b34d8b3fbc10f25abbf972360081ece0b476191ef57a8a48dda0d.scope - libcontainer container 9b56b7ae048b34d8b3fbc10f25abbf972360081ece0b476191ef57a8a48dda0d.
Mar 13 00:43:25.218934 containerd[1987]: time="2026-03-13T00:43:25.218831832Z" level=info msg="StartContainer for \"9b56b7ae048b34d8b3fbc10f25abbf972360081ece0b476191ef57a8a48dda0d\" returns successfully"
Mar 13 00:43:28.439605 kubelet[3329]: E0313 00:43:28.434230 3329 controller.go:251] "Failed to update lease" err="Put \"https://172.31.30.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-203?timeout=10s\": context deadline exceeded"