Sep 12 17:41:50.901879 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 15:34:39 -00 2025
Sep 12 17:41:50.901914 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=271a44cc8ea1639cfb6fdf777202a5f025fda0b3ce9b293cc4e0e7047aecb858
Sep 12 17:41:50.901929 kernel: BIOS-provided physical RAM map:
Sep 12 17:41:50.901941 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 12 17:41:50.901952 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Sep 12 17:41:50.901964 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Sep 12 17:41:50.901978 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Sep 12 17:41:50.901991 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Sep 12 17:41:50.902006 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Sep 12 17:41:50.902018 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Sep 12 17:41:50.902031 kernel: NX (Execute Disable) protection: active
Sep 12 17:41:50.902043 kernel: APIC: Static calls initialized
Sep 12 17:41:50.902056 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Sep 12 17:41:50.902069 kernel: extended physical RAM map:
Sep 12 17:41:50.902088 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 12 17:41:50.902101 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable
Sep 12 17:41:50.902113 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable
Sep 12 17:41:50.902125 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable
Sep 12 17:41:50.902137 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Sep 12 17:41:50.902150 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Sep 12 17:41:50.902164 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Sep 12 17:41:50.902178 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
Sep 12 17:41:50.902192 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Sep 12 17:41:50.902206 kernel: efi: EFI v2.7 by EDK II
Sep 12 17:41:50.902223 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518
Sep 12 17:41:50.902236 kernel: secureboot: Secure boot disabled
Sep 12 17:41:50.902250 kernel: SMBIOS 2.7 present.
Sep 12 17:41:50.902261 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Sep 12 17:41:50.902273 kernel: DMI: Memory slots populated: 1/1
Sep 12 17:41:50.902286 kernel: Hypervisor detected: KVM
Sep 12 17:41:50.902300 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 17:41:50.902312 kernel: kvm-clock: using sched offset of 5152754112 cycles
Sep 12 17:41:50.902325 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 17:41:50.902340 kernel: tsc: Detected 2499.996 MHz processor
Sep 12 17:41:50.902353 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 17:41:50.902370 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 17:41:50.902383 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Sep 12 17:41:50.902396 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 12 17:41:50.902410 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 17:41:50.902424 kernel: Using GB pages for direct mapping
Sep 12 17:41:50.902443 kernel: ACPI: Early table checksum verification disabled
Sep 12 17:41:50.902460 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Sep 12 17:41:50.902473 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 12 17:41:50.902489 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 12 17:41:50.902524 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Sep 12 17:41:50.902546 kernel: ACPI: FACS 0x00000000789D0000 000040
Sep 12 17:41:50.902559 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Sep 12 17:41:50.902570 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 12 17:41:50.902582 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 12 17:41:50.902598 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Sep 12 17:41:50.902610 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Sep 12 17:41:50.902624 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep 12 17:41:50.902639 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep 12 17:41:50.902653 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Sep 12 17:41:50.902668 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Sep 12 17:41:50.902683 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Sep 12 17:41:50.902697 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Sep 12 17:41:50.902716 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Sep 12 17:41:50.902730 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Sep 12 17:41:50.902746 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Sep 12 17:41:50.902761 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Sep 12 17:41:50.902776 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Sep 12 17:41:50.902791 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Sep 12 17:41:50.902806 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Sep 12 17:41:50.902821 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Sep 12 17:41:50.902836 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Sep 12 17:41:50.902851 kernel: NUMA: Initialized distance table, cnt=1
Sep 12 17:41:50.902870 kernel: NODE_DATA(0) allocated [mem 0x7a8eddc0-0x7a8f4fff]
Sep 12 17:41:50.902886 kernel: Zone ranges:
Sep 12 17:41:50.902901 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 17:41:50.902916 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Sep 12 17:41:50.902930 kernel: Normal empty
Sep 12 17:41:50.902946 kernel: Device empty
Sep 12 17:41:50.902961 kernel: Movable zone start for each node
Sep 12 17:41:50.902975 kernel: Early memory node ranges
Sep 12 17:41:50.902990 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 12 17:41:50.903010 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Sep 12 17:41:50.903025 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Sep 12 17:41:50.903039 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Sep 12 17:41:50.903054 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 17:41:50.903069 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 12 17:41:50.903084 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Sep 12 17:41:50.903100 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Sep 12 17:41:50.903115 kernel: ACPI: PM-Timer IO Port: 0xb008
Sep 12 17:41:50.903130 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 17:41:50.903145 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Sep 12 17:41:50.903164 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 17:41:50.903179 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 17:41:50.903194 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 17:41:50.903210 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 17:41:50.903225 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 17:41:50.903240 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 12 17:41:50.903256 kernel: TSC deadline timer available
Sep 12 17:41:50.903272 kernel: CPU topo: Max. logical packages: 1
Sep 12 17:41:50.903287 kernel: CPU topo: Max. logical dies: 1
Sep 12 17:41:50.903306 kernel: CPU topo: Max. dies per package: 1
Sep 12 17:41:50.903321 kernel: CPU topo: Max. threads per core: 2
Sep 12 17:41:50.903337 kernel: CPU topo: Num. cores per package: 1
Sep 12 17:41:50.903352 kernel: CPU topo: Num. threads per package: 2
Sep 12 17:41:50.903366 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Sep 12 17:41:50.903382 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 12 17:41:50.903397 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Sep 12 17:41:50.903412 kernel: Booting paravirtualized kernel on KVM
Sep 12 17:41:50.903428 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 17:41:50.903446 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 12 17:41:50.903460 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Sep 12 17:41:50.903475 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Sep 12 17:41:50.903489 kernel: pcpu-alloc: [0] 0 1
Sep 12 17:41:50.903526 kernel: kvm-guest: PV spinlocks enabled
Sep 12 17:41:50.903538 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 12 17:41:50.903553 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=271a44cc8ea1639cfb6fdf777202a5f025fda0b3ce9b293cc4e0e7047aecb858
Sep 12 17:41:50.903566 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 17:41:50.903583 kernel: random: crng init done
Sep 12 17:41:50.903597 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 17:41:50.903610 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 12 17:41:50.903626 kernel: Fallback order for Node 0: 0
Sep 12 17:41:50.903640 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451
Sep 12 17:41:50.903653 kernel: Policy zone: DMA32
Sep 12 17:41:50.903678 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:41:50.903695 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 12 17:41:50.903709 kernel: Kernel/User page tables isolation: enabled
Sep 12 17:41:50.903724 kernel: ftrace: allocating 40125 entries in 157 pages
Sep 12 17:41:50.903738 kernel: ftrace: allocated 157 pages with 5 groups
Sep 12 17:41:50.903751 kernel: Dynamic Preempt: voluntary
Sep 12 17:41:50.903767 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:41:50.903782 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:41:50.903797 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 12 17:41:50.903811 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:41:50.903826 kernel: Rude variant of Tasks RCU enabled.
Sep 12 17:41:50.903843 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:41:50.903856 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:41:50.903870 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 12 17:41:50.903882 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:41:50.903896 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:41:50.903908 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:41:50.903923 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 12 17:41:50.903935 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 17:41:50.903948 kernel: Console: colour dummy device 80x25
Sep 12 17:41:50.903966 kernel: printk: legacy console [tty0] enabled
Sep 12 17:41:50.903982 kernel: printk: legacy console [ttyS0] enabled
Sep 12 17:41:50.903997 kernel: ACPI: Core revision 20240827
Sep 12 17:41:50.904010 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Sep 12 17:41:50.904023 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 17:41:50.904036 kernel: x2apic enabled
Sep 12 17:41:50.904050 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 17:41:50.904064 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Sep 12 17:41:50.904078 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Sep 12 17:41:50.904095 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 12 17:41:50.904111 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Sep 12 17:41:50.904127 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 17:41:50.904143 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 17:41:50.904155 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 17:41:50.904168 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Sep 12 17:41:50.904182 kernel: RETBleed: Vulnerable
Sep 12 17:41:50.904195 kernel: Speculative Store Bypass: Vulnerable
Sep 12 17:41:50.904207 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 12 17:41:50.904223 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 12 17:41:50.904241 kernel: GDS: Unknown: Dependent on hypervisor status
Sep 12 17:41:50.904255 kernel: active return thunk: its_return_thunk
Sep 12 17:41:50.904269 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 12 17:41:50.904282 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 17:41:50.904295 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 17:41:50.904310 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 17:41:50.904325 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Sep 12 17:41:50.904340 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Sep 12 17:41:50.904355 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Sep 12 17:41:50.904368 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Sep 12 17:41:50.904382 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Sep 12 17:41:50.904401 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Sep 12 17:41:50.904415 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 17:41:50.904429 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Sep 12 17:41:50.904442 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Sep 12 17:41:50.904457 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Sep 12 17:41:50.904470 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Sep 12 17:41:50.904485 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Sep 12 17:41:50.904499 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Sep 12 17:41:50.905276 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Sep 12 17:41:50.905291 kernel: Freeing SMP alternatives memory: 32K
Sep 12 17:41:50.905306 kernel: pid_max: default: 32768 minimum: 301
Sep 12 17:41:50.905325 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 12 17:41:50.905339 kernel: landlock: Up and running.
Sep 12 17:41:50.905354 kernel: SELinux: Initializing.
Sep 12 17:41:50.905368 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 12 17:41:50.905383 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 12 17:41:50.905398 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Sep 12 17:41:50.905413 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Sep 12 17:41:50.905427 kernel: signal: max sigframe size: 3632
Sep 12 17:41:50.905442 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:41:50.905458 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 17:41:50.905472 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 12 17:41:50.905490 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 12 17:41:50.905518 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:41:50.905542 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 17:41:50.905557 kernel: .... node #0, CPUs: #1
Sep 12 17:41:50.905572 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Sep 12 17:41:50.905590 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 12 17:41:50.905606 kernel: smp: Brought up 1 node, 2 CPUs
Sep 12 17:41:50.905622 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Sep 12 17:41:50.905643 kernel: Memory: 1908060K/2037804K available (14336K kernel code, 2432K rwdata, 9960K rodata, 54040K init, 2924K bss, 125188K reserved, 0K cma-reserved)
Sep 12 17:41:50.905659 kernel: devtmpfs: initialized
Sep 12 17:41:50.905676 kernel: x86/mm: Memory block size: 128MB
Sep 12 17:41:50.905694 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Sep 12 17:41:50.905710 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:41:50.905727 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 12 17:41:50.905744 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:41:50.905760 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:41:50.905777 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:41:50.905797 kernel: audit: type=2000 audit(1757698908.146:1): state=initialized audit_enabled=0 res=1
Sep 12 17:41:50.905814 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:41:50.905831 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 17:41:50.905847 kernel: cpuidle: using governor menu
Sep 12 17:41:50.905863 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:41:50.905879 kernel: dca service started, version 1.12.1
Sep 12 17:41:50.905896 kernel: PCI: Using configuration type 1 for base access
Sep 12 17:41:50.905914 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 17:41:50.905930 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 17:41:50.905950 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 17:41:50.905966 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:41:50.905980 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:41:50.905996 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:41:50.906012 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:41:50.906026 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:41:50.906042 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Sep 12 17:41:50.906055 kernel: ACPI: Interpreter enabled
Sep 12 17:41:50.906069 kernel: ACPI: PM: (supports S0 S5)
Sep 12 17:41:50.906088 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 17:41:50.906102 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 17:41:50.906117 kernel: PCI: Using E820 reservations for host bridge windows
Sep 12 17:41:50.906132 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 12 17:41:50.906149 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 17:41:50.906377 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 17:41:50.906533 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 12 17:41:50.906680 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 12 17:41:50.906698 kernel: acpiphp: Slot [3] registered
Sep 12 17:41:50.906713 kernel: acpiphp: Slot [4] registered
Sep 12 17:41:50.906728 kernel: acpiphp: Slot [5] registered
Sep 12 17:41:50.906743 kernel: acpiphp: Slot [6] registered
Sep 12 17:41:50.906757 kernel: acpiphp: Slot [7] registered
Sep 12 17:41:50.906772 kernel: acpiphp: Slot [8] registered
Sep 12 17:41:50.906787 kernel: acpiphp: Slot [9] registered
Sep 12 17:41:50.906801 kernel: acpiphp: Slot [10] registered
Sep 12 17:41:50.906816 kernel: acpiphp: Slot [11] registered
Sep 12 17:41:50.906834 kernel: acpiphp: Slot [12] registered
Sep 12 17:41:50.906848 kernel: acpiphp: Slot [13] registered
Sep 12 17:41:50.906862 kernel: acpiphp: Slot [14] registered
Sep 12 17:41:50.906877 kernel: acpiphp: Slot [15] registered
Sep 12 17:41:50.906892 kernel: acpiphp: Slot [16] registered
Sep 12 17:41:50.906906 kernel: acpiphp: Slot [17] registered
Sep 12 17:41:50.906921 kernel: acpiphp: Slot [18] registered
Sep 12 17:41:50.906935 kernel: acpiphp: Slot [19] registered
Sep 12 17:41:50.906950 kernel: acpiphp: Slot [20] registered
Sep 12 17:41:50.906967 kernel: acpiphp: Slot [21] registered
Sep 12 17:41:50.906982 kernel: acpiphp: Slot [22] registered
Sep 12 17:41:50.906997 kernel: acpiphp: Slot [23] registered
Sep 12 17:41:50.907012 kernel: acpiphp: Slot [24] registered
Sep 12 17:41:50.907027 kernel: acpiphp: Slot [25] registered
Sep 12 17:41:50.907042 kernel: acpiphp: Slot [26] registered
Sep 12 17:41:50.907056 kernel: acpiphp: Slot [27] registered
Sep 12 17:41:50.907071 kernel: acpiphp: Slot [28] registered
Sep 12 17:41:50.907086 kernel: acpiphp: Slot [29] registered
Sep 12 17:41:50.907104 kernel: acpiphp: Slot [30] registered
Sep 12 17:41:50.907119 kernel: acpiphp: Slot [31] registered
Sep 12 17:41:50.907134 kernel: PCI host bridge to bus 0000:00
Sep 12 17:41:50.907272 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 17:41:50.907397 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 17:41:50.907548 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 17:41:50.907666 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 12 17:41:50.907795 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Sep 12 17:41:50.907925 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 17:41:50.908096 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Sep 12 17:41:50.908249 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Sep 12 17:41:50.908397 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint
Sep 12 17:41:50.908607 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Sep 12 17:41:50.908747 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Sep 12 17:41:50.908885 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Sep 12 17:41:50.909018 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Sep 12 17:41:50.909145 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Sep 12 17:41:50.909270 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Sep 12 17:41:50.909398 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Sep 12 17:41:50.909554 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 12 17:41:50.909683 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref]
Sep 12 17:41:50.909817 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref]
Sep 12 17:41:50.909946 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 17:41:50.910086 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint
Sep 12 17:41:50.910214 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff]
Sep 12 17:41:50.910347 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint
Sep 12 17:41:50.910482 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff]
Sep 12 17:41:50.910524 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 17:41:50.910539 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 17:41:50.910560 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 17:41:50.910574 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 17:41:50.910591 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 12 17:41:50.910606 kernel: iommu: Default domain type: Translated
Sep 12 17:41:50.910622 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 17:41:50.910637 kernel: efivars: Registered efivars operations
Sep 12 17:41:50.910656 kernel: PCI: Using ACPI for IRQ routing
Sep 12 17:41:50.910674 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 17:41:50.910688 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff]
Sep 12 17:41:50.910702 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Sep 12 17:41:50.910716 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Sep 12 17:41:50.912570 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Sep 12 17:41:50.912739 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Sep 12 17:41:50.912884 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 17:41:50.912906 kernel: vgaarb: loaded
Sep 12 17:41:50.912922 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Sep 12 17:41:50.912942 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Sep 12 17:41:50.912958 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 17:41:50.912972 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 17:41:50.912988 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 17:41:50.913004 kernel: pnp: PnP ACPI init
Sep 12 17:41:50.913020 kernel: pnp: PnP ACPI: found 5 devices
Sep 12 17:41:50.913035 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 17:41:50.913050 kernel: NET: Registered PF_INET protocol family
Sep 12 17:41:50.913066 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 17:41:50.913085 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 12 17:41:50.913100 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 17:41:50.913116 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 12 17:41:50.913131 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 12 17:41:50.913146 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 12 17:41:50.913161 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 12 17:41:50.913176 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 12 17:41:50.913191 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 17:41:50.913208 kernel: NET: Registered PF_XDP protocol family
Sep 12 17:41:50.913347 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 17:41:50.913472 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 17:41:50.913617 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 17:41:50.913738 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 12 17:41:50.913859 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Sep 12 17:41:50.913999 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 12 17:41:50.914021 kernel: PCI: CLS 0 bytes, default 64
Sep 12 17:41:50.914042 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 12 17:41:50.914059 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Sep 12 17:41:50.914075 kernel: clocksource: Switched to clocksource tsc
Sep 12 17:41:50.914090 kernel: Initialise system trusted keyrings
Sep 12 17:41:50.914104 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 12 17:41:50.914119 kernel: Key type asymmetric registered
Sep 12 17:41:50.914133 kernel: Asymmetric key parser 'x509' registered
Sep 12 17:41:50.914149 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 12 17:41:50.914165 kernel: io scheduler mq-deadline registered
Sep 12 17:41:50.914184 kernel: io scheduler kyber registered
Sep 12 17:41:50.914200 kernel: io scheduler bfq registered
Sep 12 17:41:50.914216 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 17:41:50.914232 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 17:41:50.914248 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 17:41:50.914264 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 12 17:41:50.914280 kernel: i8042: Warning: Keylock active
Sep 12 17:41:50.914296 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 12 17:41:50.914311 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 12 17:41:50.914455 kernel: rtc_cmos 00:00: RTC can wake from S4
Sep 12 17:41:50.915698 kernel: rtc_cmos 00:00: registered as rtc0
Sep 12 17:41:50.915833 kernel: rtc_cmos 00:00: setting system clock to 2025-09-12T17:41:50 UTC (1757698910)
Sep 12 17:41:50.915957 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Sep 12 17:41:50.915998 kernel: intel_pstate: CPU model not supported
Sep 12 17:41:50.916016 kernel: efifb: probing for efifb
Sep 12 17:41:50.916031 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Sep 12 17:41:50.916046 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Sep 12 17:41:50.916066 kernel: efifb: scrolling: redraw
Sep 12 17:41:50.916081 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 12 17:41:50.916096 kernel: Console: switching to colour frame buffer device 100x37
Sep 12 17:41:50.916111 kernel: fb0: EFI VGA frame buffer device
Sep 12 17:41:50.916126 kernel: pstore: Using crash dump compression: deflate
Sep 12 17:41:50.916141 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 12 17:41:50.916156 kernel: NET: Registered PF_INET6 protocol family
Sep 12 17:41:50.916170 kernel: Segment Routing with IPv6
Sep 12 17:41:50.916185 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 17:41:50.916203 kernel: NET: Registered PF_PACKET protocol family
Sep 12 17:41:50.916217 kernel: Key type dns_resolver registered
Sep 12 17:41:50.916231 kernel: IPI shorthand broadcast: enabled
Sep 12 17:41:50.916246 kernel: sched_clock: Marking stable (2745002224, 142762997)->(2980624968, -92859747)
Sep 12 17:41:50.916260 kernel: registered taskstats version 1
Sep 12 17:41:50.916275 kernel: Loading compiled-in X.509 certificates
Sep 12 17:41:50.916289 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: f1ae8d6e9bfae84d90f4136cf098b0465b2a5bd7'
Sep 12 17:41:50.916304 kernel: Demotion targets for Node 0: null
Sep 12 17:41:50.916318 kernel: Key type .fscrypt registered
Sep 12 17:41:50.916335 kernel: Key type fscrypt-provisioning registered
Sep 12 17:41:50.916349 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 17:41:50.916364 kernel: ima: Allocated hash algorithm: sha1
Sep 12 17:41:50.916378 kernel: ima: No architecture policies found
Sep 12 17:41:50.916392 kernel: clk: Disabling unused clocks
Sep 12 17:41:50.916407 kernel: Warning: unable to open an initial console.
Sep 12 17:41:50.916421 kernel: Freeing unused kernel image (initmem) memory: 54040K
Sep 12 17:41:50.916436 kernel: Write protecting the kernel read-only data: 24576k
Sep 12 17:41:50.916451 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K
Sep 12 17:41:50.916469 kernel: Run /init as init process
Sep 12 17:41:50.916483 kernel: with arguments:
Sep 12 17:41:50.916497 kernel: /init
Sep 12 17:41:50.917547 kernel: with environment:
Sep 12 17:41:50.917564 kernel: HOME=/
Sep 12 17:41:50.917581 kernel: TERM=linux
Sep 12 17:41:50.917603 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 17:41:50.917622 systemd[1]: Successfully made /usr/ read-only.
Sep 12 17:41:50.917645 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 17:41:50.917664 systemd[1]: Detected virtualization amazon.
Sep 12 17:41:50.917681 systemd[1]: Detected architecture x86-64.
Sep 12 17:41:50.917699 systemd[1]: Running in initrd.
Sep 12 17:41:50.917716 systemd[1]: No hostname configured, using default hostname.
Sep 12 17:41:50.917737 systemd[1]: Hostname set to .
Sep 12 17:41:50.917754 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:41:50.917771 systemd[1]: Queued start job for default target initrd.target.
Sep 12 17:41:50.917789 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:41:50.917806 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:41:50.917825 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 17:41:50.917843 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:41:50.917861 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 17:41:50.917883 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 17:41:50.917905 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 17:41:50.917924 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 17:41:50.917941 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:41:50.917959 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:41:50.917977 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:41:50.917997 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:41:50.918015 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:41:50.918031 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:41:50.918047 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:41:50.918065 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:41:50.918086 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 17:41:50.918104 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 12 17:41:50.918122 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:41:50.918139 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:41:50.918160 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:41:50.918178 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:41:50.918196 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 17:41:50.918214 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:41:50.918231 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 17:41:50.918249 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 12 17:41:50.918267 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 17:41:50.918284 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:41:50.918304 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:41:50.918322 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:41:50.918371 systemd-journald[207]: Collecting audit messages is disabled.
Sep 12 17:41:50.918410 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 17:41:50.918433 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:41:50.918451 systemd-journald[207]: Journal started
Sep 12 17:41:50.918492 systemd-journald[207]: Runtime Journal (/run/log/journal/ec217ad10dbb4906a5a675ac00e6fd0f) is 4.8M, max 38.4M, 33.6M free.
Sep 12 17:41:50.923540 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:41:50.927582 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 17:41:50.935598 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:41:50.939721 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:41:50.942720 systemd-modules-load[208]: Inserted module 'overlay'
Sep 12 17:41:50.943844 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:41:50.951142 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:41:50.974581 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:41:50.976440 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:41:50.983733 systemd-tmpfiles[220]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 12 17:41:50.992287 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:41:51.001233 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 17:41:51.001305 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 12 17:41:51.004258 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:41:51.008577 kernel: Bridge firewalling registered
Sep 12 17:41:51.005552 systemd-modules-load[208]: Inserted module 'br_netfilter'
Sep 12 17:41:51.007836 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:41:51.010669 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 17:41:51.014760 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:41:51.019948 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:41:51.032334 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:41:51.037140 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:41:51.042372 dracut-cmdline[242]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=271a44cc8ea1639cfb6fdf777202a5f025fda0b3ce9b293cc4e0e7047aecb858
Sep 12 17:41:51.096118 systemd-resolved[255]: Positive Trust Anchors:
Sep 12 17:41:51.097068 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:41:51.097132 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:41:51.104332 systemd-resolved[255]: Defaulting to hostname 'linux'.
Sep 12 17:41:51.108296 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:41:51.109021 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:41:51.144540 kernel: SCSI subsystem initialized
Sep 12 17:41:51.155546 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 17:41:51.166533 kernel: iscsi: registered transport (tcp)
Sep 12 17:41:51.187711 kernel: iscsi: registered transport (qla4xxx)
Sep 12 17:41:51.187788 kernel: QLogic iSCSI HBA Driver
Sep 12 17:41:51.207105 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 17:41:51.230237 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:41:51.233340 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 17:41:51.279546 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:41:51.281618 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 17:41:51.336565 kernel: raid6: avx512x4 gen() 17541 MB/s
Sep 12 17:41:51.354537 kernel: raid6: avx512x2 gen() 17972 MB/s
Sep 12 17:41:51.372533 kernel: raid6: avx512x1 gen() 18002 MB/s
Sep 12 17:41:51.390588 kernel: raid6: avx2x4 gen() 17853 MB/s
Sep 12 17:41:51.408533 kernel: raid6: avx2x2 gen() 17824 MB/s
Sep 12 17:41:51.426807 kernel: raid6: avx2x1 gen() 13737 MB/s
Sep 12 17:41:51.426873 kernel: raid6: using algorithm avx512x1 gen() 18002 MB/s
Sep 12 17:41:51.445788 kernel: raid6: .... xor() 19688 MB/s, rmw enabled
Sep 12 17:41:51.445882 kernel: raid6: using avx512x2 recovery algorithm
Sep 12 17:41:51.468547 kernel: xor: automatically using best checksumming function avx
Sep 12 17:41:51.645566 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 17:41:51.652763 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:41:51.655252 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:41:51.690524 systemd-udevd[457]: Using default interface naming scheme 'v255'.
Sep 12 17:41:51.697941 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:41:51.702172 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 17:41:51.736124 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation
Sep 12 17:41:51.769277 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:41:51.771430 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:41:51.841039 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:41:51.843775 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 17:41:51.925804 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 12 17:41:51.926086 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 12 17:41:51.946587 kernel: cryptd: max_cpu_qlen set to 1000
Sep 12 17:41:51.950592 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Sep 12 17:41:51.955532 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2
Sep 12 17:41:51.967642 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:fe:36:95:46:57
Sep 12 17:41:51.967948 kernel: AES CTR mode by8 optimization enabled
Sep 12 17:41:51.975076 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:41:51.981037 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:41:51.983047 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:41:51.987840 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:41:51.992364 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 12 17:41:52.013234 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 12 17:41:52.011721 (udev-worker)[525]: Network interface NamePolicy= disabled on kernel command line.
Sep 12 17:41:52.018995 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 12 17:41:52.034532 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 12 17:41:52.036658 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:41:52.037378 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:41:52.039523 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:41:52.045365 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 17:41:52.045435 kernel: GPT:9289727 != 16777215
Sep 12 17:41:52.045462 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 17:41:52.046967 kernel: GPT:9289727 != 16777215
Sep 12 17:41:52.047018 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 17:41:52.047038 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:41:52.075931 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:41:52.087549 kernel: nvme nvme0: using unchecked data buffer
Sep 12 17:41:52.235350 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep 12 17:41:52.249388 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 12 17:41:52.250445 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:41:52.270885 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep 12 17:41:52.280370 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep 12 17:41:52.281098 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Sep 12 17:41:52.283519 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:41:52.284778 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:41:52.285968 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:41:52.287846 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 17:41:52.290714 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 17:41:52.313818 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:41:52.318268 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:41:52.318303 disk-uuid[692]: Primary Header is updated.
Sep 12 17:41:52.318303 disk-uuid[692]: Secondary Entries is updated.
Sep 12 17:41:52.318303 disk-uuid[692]: Secondary Header is updated.
Sep 12 17:41:53.336770 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:41:53.336845 disk-uuid[698]: The operation has completed successfully.
Sep 12 17:41:53.454656 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 17:41:53.454927 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 17:41:53.486852 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 17:41:53.507406 sh[960]: Success
Sep 12 17:41:53.527724 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 17:41:53.527802 kernel: device-mapper: uevent: version 1.0.3
Sep 12 17:41:53.530525 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 12 17:41:53.541561 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Sep 12 17:41:53.642175 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 17:41:53.645414 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 17:41:53.655241 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 17:41:53.685542 kernel: BTRFS: device fsid 74707491-1b86-4926-8bdb-c533ce2a0c32 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (983)
Sep 12 17:41:53.685604 kernel: BTRFS info (device dm-0): first mount of filesystem 74707491-1b86-4926-8bdb-c533ce2a0c32
Sep 12 17:41:53.689211 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:41:53.807264 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 12 17:41:53.807348 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 17:41:53.807362 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 12 17:41:53.830255 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 17:41:53.831328 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 12 17:41:53.831861 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 17:41:53.832914 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 17:41:53.834275 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 17:41:53.867540 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1015)
Sep 12 17:41:53.872556 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761
Sep 12 17:41:53.872622 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:41:53.891295 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 12 17:41:53.891382 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 12 17:41:53.898559 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761
Sep 12 17:41:53.900190 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 17:41:53.903781 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 17:41:53.936597 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:41:53.939747 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:41:53.997726 systemd-networkd[1152]: lo: Link UP
Sep 12 17:41:53.997737 systemd-networkd[1152]: lo: Gained carrier
Sep 12 17:41:54.001715 systemd-networkd[1152]: Enumeration completed
Sep 12 17:41:54.003075 systemd-networkd[1152]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:41:54.003177 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:41:54.004562 systemd-networkd[1152]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:41:54.006865 systemd[1]: Reached target network.target - Network.
Sep 12 17:41:54.008633 systemd-networkd[1152]: eth0: Link UP
Sep 12 17:41:54.008640 systemd-networkd[1152]: eth0: Gained carrier
Sep 12 17:41:54.008661 systemd-networkd[1152]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:41:54.027990 systemd-networkd[1152]: eth0: DHCPv4 address 172.31.19.53/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 12 17:41:54.368382 ignition[1111]: Ignition 2.21.0
Sep 12 17:41:54.368406 ignition[1111]: Stage: fetch-offline
Sep 12 17:41:54.368664 ignition[1111]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:41:54.368677 ignition[1111]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:41:54.371014 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:41:54.369069 ignition[1111]: Ignition finished successfully
Sep 12 17:41:54.373329 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 12 17:41:54.403395 ignition[1163]: Ignition 2.21.0
Sep 12 17:41:54.403408 ignition[1163]: Stage: fetch
Sep 12 17:41:54.403827 ignition[1163]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:41:54.403841 ignition[1163]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:41:54.403963 ignition[1163]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:41:54.433977 ignition[1163]: PUT result: OK
Sep 12 17:41:54.437003 ignition[1163]: parsed url from cmdline: ""
Sep 12 17:41:54.437015 ignition[1163]: no config URL provided
Sep 12 17:41:54.437078 ignition[1163]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:41:54.437092 ignition[1163]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:41:54.437121 ignition[1163]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:41:54.438249 ignition[1163]: PUT result: OK
Sep 12 17:41:54.438320 ignition[1163]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 12 17:41:54.439374 ignition[1163]: GET result: OK
Sep 12 17:41:54.439531 ignition[1163]: parsing config with SHA512: 0f88df082a46753dafe6f32c7aee1738e9331e9b26f588cb2838c23c193457433259b1fee183fb8398dbf202b8ee42e57b8ea7d2634d8d01fae8bdfa6c84ec95
Sep 12 17:41:54.444789 unknown[1163]: fetched base config from "system"
Sep 12 17:41:54.444802 unknown[1163]: fetched base config from "system"
Sep 12 17:41:54.445300 ignition[1163]: fetch: fetch complete
Sep 12 17:41:54.444809 unknown[1163]: fetched user config from "aws"
Sep 12 17:41:54.445308 ignition[1163]: fetch: fetch passed
Sep 12 17:41:54.445374 ignition[1163]: Ignition finished successfully
Sep 12 17:41:54.448695 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 12 17:41:54.450202 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 17:41:54.491762 ignition[1170]: Ignition 2.21.0
Sep 12 17:41:54.491777 ignition[1170]: Stage: kargs
Sep 12 17:41:54.492218 ignition[1170]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:41:54.492230 ignition[1170]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:41:54.492363 ignition[1170]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:41:54.493367 ignition[1170]: PUT result: OK
Sep 12 17:41:54.496883 ignition[1170]: kargs: kargs passed
Sep 12 17:41:54.496970 ignition[1170]: Ignition finished successfully
Sep 12 17:41:54.499346 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 17:41:54.500893 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 17:41:54.530705 ignition[1177]: Ignition 2.21.0
Sep 12 17:41:54.530720 ignition[1177]: Stage: disks
Sep 12 17:41:54.531113 ignition[1177]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:41:54.531125 ignition[1177]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:41:54.531242 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:41:54.532252 ignition[1177]: PUT result: OK
Sep 12 17:41:54.535319 ignition[1177]: disks: disks passed
Sep 12 17:41:54.535398 ignition[1177]: Ignition finished successfully
Sep 12 17:41:54.537683 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 17:41:54.538342 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 17:41:54.538898 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 17:41:54.539453 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:41:54.540037 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:41:54.540631 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:41:54.542294 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 17:41:54.595238 systemd-fsck[1185]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 12 17:41:54.597998 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 17:41:54.599824 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 17:41:54.743563 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 26739aba-b0be-4ce3-bfbd-ca4dbcbe2426 r/w with ordered data mode. Quota mode: none. Sep 12 17:41:54.744286 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 17:41:54.745426 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 17:41:54.747450 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:41:54.749993 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 17:41:54.753581 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 12 17:41:54.754810 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 17:41:54.754847 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:41:54.763076 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 17:41:54.765247 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 17:41:54.783552 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1204) Sep 12 17:41:54.789347 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761 Sep 12 17:41:54.789421 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:41:54.798957 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 12 17:41:54.799042 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 12 17:41:54.802155 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 17:41:55.100371 initrd-setup-root[1228]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 17:41:55.115228 initrd-setup-root[1235]: cut: /sysroot/etc/group: No such file or directory Sep 12 17:41:55.120620 initrd-setup-root[1242]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 17:41:55.136571 initrd-setup-root[1249]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 17:41:55.433010 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 17:41:55.435315 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 17:41:55.438770 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 17:41:55.453056 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
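The fsck summary `clean, 15/553520 files, 52789/553472 blocks` reports used/total inodes and used/total blocks. A quick, illustrative-only conversion of that line into usage percentages:

```python
def fsck_usage(summary: str) -> dict:
    """Turn 'clean, 15/553520 files, 52789/553472 blocks' into usage ratios."""
    parts = summary.split(", ")
    files, blocks = parts[1].split()[0], parts[2].split()[0]
    used_f, total_f = map(int, files.split("/"))
    used_b, total_b = map(int, blocks.split("/"))
    return {
        "inode_use_pct": round(100 * used_f / total_f, 3),
        "block_use_pct": round(100 * used_b / total_b, 2),
    }

print(fsck_usage("clean, 15/553520 files, 52789/553472 blocks"))
# -> roughly 0.003% of inodes and 9.54% of blocks in use before first boot
```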
Sep 12 17:41:55.455526 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761 Sep 12 17:41:55.485301 ignition[1317]: INFO : Ignition 2.21.0 Sep 12 17:41:55.486268 ignition[1317]: INFO : Stage: mount Sep 12 17:41:55.487409 ignition[1317]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:41:55.487409 ignition[1317]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 17:41:55.487409 ignition[1317]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 17:41:55.489079 ignition[1317]: INFO : PUT result: OK Sep 12 17:41:55.489801 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 17:41:55.492410 ignition[1317]: INFO : mount: mount passed Sep 12 17:41:55.492410 ignition[1317]: INFO : Ignition finished successfully Sep 12 17:41:55.494713 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 17:41:55.496208 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 17:41:55.745745 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:41:55.775535 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1328) Sep 12 17:41:55.780203 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761 Sep 12 17:41:55.780283 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:41:55.789025 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 12 17:41:55.789099 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 12 17:41:55.791330 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 17:41:55.822761 ignition[1344]: INFO : Ignition 2.21.0 Sep 12 17:41:55.822761 ignition[1344]: INFO : Stage: files Sep 12 17:41:55.824269 ignition[1344]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:41:55.824269 ignition[1344]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 17:41:55.824269 ignition[1344]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 17:41:55.827844 ignition[1344]: INFO : PUT result: OK Sep 12 17:41:55.828530 ignition[1344]: DEBUG : files: compiled without relabeling support, skipping Sep 12 17:41:55.829771 ignition[1344]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 17:41:55.829771 ignition[1344]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 17:41:55.834226 ignition[1344]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 17:41:55.835372 ignition[1344]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 17:41:55.836186 ignition[1344]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 17:41:55.835780 unknown[1344]: wrote ssh authorized keys file for user: core Sep 12 17:41:55.853525 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 12 17:41:55.853525 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 12 17:41:55.896639 systemd-networkd[1152]: eth0: Gained IPv6LL Sep 12 17:41:55.934051 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 17:41:56.663960 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] 
writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 12 17:41:56.665226 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 12 17:41:56.665226 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 17:41:56.665226 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:41:56.665226 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:41:56.665226 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:41:56.665226 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:41:56.665226 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:41:56.665226 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:41:56.672473 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:41:56.672473 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:41:56.672473 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 17:41:56.672473 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 17:41:56.672473 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 17:41:56.672473 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 12 17:41:57.165640 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 12 17:41:57.720399 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 17:41:57.720399 ignition[1344]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 12 17:41:57.723003 ignition[1344]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:41:57.727516 ignition[1344]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:41:57.727516 ignition[1344]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 12 17:41:57.727516 ignition[1344]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 12 17:41:57.731694 ignition[1344]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 17:41:57.731694 
ignition[1344]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:41:57.731694 ignition[1344]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:41:57.731694 ignition[1344]: INFO : files: files passed Sep 12 17:41:57.731694 ignition[1344]: INFO : Ignition finished successfully Sep 12 17:41:57.729286 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 17:41:57.732243 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 17:41:57.738871 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 17:41:57.747974 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 17:41:57.749701 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 12 17:41:57.769219 initrd-setup-root-after-ignition[1375]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:41:57.769219 initrd-setup-root-after-ignition[1375]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:41:57.772802 initrd-setup-root-after-ignition[1379]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:41:57.774123 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:41:57.775080 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 17:41:57.776721 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 17:41:57.826366 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 17:41:57.826731 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 17:41:57.827968 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 17:41:57.829127 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 17:41:57.829983 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 17:41:57.831382 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 17:41:57.857309 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:41:57.860016 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 17:41:57.883203 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:41:57.883996 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:41:57.885060 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 17:41:57.885895 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 17:41:57.886132 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:41:57.887396 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 17:41:57.888273 systemd[1]: Stopped target basic.target - Basic System. Sep 12 17:41:57.889034 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 17:41:57.889817 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:41:57.890674 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 17:41:57.891454 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
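Earlier in the fetch stage, Ignition logged the fingerprint of the rendered config ("parsing config with SHA512: 0f88df08..."). Reproducing that kind of fingerprint is a one-liner with `hashlib`; a sketch, assuming you have the exact config bytes the instance received:

```python
import hashlib

def config_fingerprint(config_bytes: bytes) -> str:
    # Same digest family Ignition prints in
    # "parsing config with SHA512: 0f88df08...".
    return hashlib.sha512(config_bytes).hexdigest()

# Placeholder bytes for illustration; the digest only matches the log's
# value when fed the exact user-data document this instance fetched.
print(config_fingerprint(b'{"ignition": {"version": "3.4.0"}}'))
```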
Sep 12 17:41:57.892266 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 17:41:57.893037 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:41:57.893850 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 17:41:57.895113 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 17:41:57.895915 systemd[1]: Stopped target swap.target - Swaps. Sep 12 17:41:57.896648 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 17:41:57.896878 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:41:57.897905 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:41:57.898859 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:41:57.899532 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 17:41:57.900242 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:41:57.900716 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 17:41:57.900892 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 17:41:57.902368 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 17:41:57.902720 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:41:57.903466 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 17:41:57.903690 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 17:41:57.905395 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 17:41:57.909680 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 17:41:57.912161 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 17:41:57.912960 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:41:57.913726 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 17:41:57.913896 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:41:57.924059 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 17:41:57.924191 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 17:41:57.941526 ignition[1399]: INFO : Ignition 2.21.0 Sep 12 17:41:57.943190 ignition[1399]: INFO : Stage: umount Sep 12 17:41:57.943190 ignition[1399]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:41:57.943190 ignition[1399]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 17:41:57.943190 ignition[1399]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 17:41:57.945874 ignition[1399]: INFO : PUT result: OK Sep 12 17:41:57.949526 ignition[1399]: INFO : umount: umount passed Sep 12 17:41:57.949526 ignition[1399]: INFO : Ignition finished successfully Sep 12 17:41:57.953942 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 17:41:57.954904 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 17:41:57.955034 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 17:41:57.955933 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 17:41:57.955991 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 17:41:57.956557 systemd[1]: ignition-kargs.service: Deactivated successfully. 
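The teardown above stops targets and services in roughly the reverse of the order they came up: units are started after their dependencies and stopped before them. An illustrative ordering sketch with toy unit names (this is not systemd's real job engine, just the reverse-topological idea):

```python
from graphlib import TopologicalSorter

# Toy dependency graph: each unit maps to the units it requires.
requires = {
    "ignition-complete.target": ["ignition-files.service"],
    "ignition-files.service": ["ignition-mount.service"],
    "ignition-mount.service": ["sysroot.mount"],
    "sysroot.mount": [],
}

start_order = list(TopologicalSorter(requires).static_order())
stop_order = list(reversed(start_order))
print("start:", start_order)  # dependencies first
print("stop: ", stop_order)   # dependents first, as in the log above
```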
Sep 12 17:41:57.956628 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 17:41:57.957241 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 12 17:41:57.957302 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 12 17:41:57.957933 systemd[1]: Stopped target network.target - Network. Sep 12 17:41:57.958635 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 17:41:57.958696 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:41:57.959354 systemd[1]: Stopped target paths.target - Path Units. Sep 12 17:41:57.959986 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 17:41:57.960051 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:41:57.960713 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 17:41:57.961321 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 17:41:57.962005 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 17:41:57.962055 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:41:57.962816 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 17:41:57.962866 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:41:57.963467 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 17:41:57.963558 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 17:41:57.964151 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 17:41:57.964207 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 17:41:57.964950 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 17:41:57.965909 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 17:41:57.972795 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 17:41:57.972964 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 17:41:57.976484 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 12 17:41:57.976795 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 17:41:57.976942 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 17:41:57.979597 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 12 17:41:57.980181 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 12 17:41:57.980837 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 17:41:57.980889 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:41:57.983133 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 17:41:57.984562 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 17:41:57.984631 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:41:57.987363 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:41:57.987442 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:41:57.989980 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 17:41:57.990030 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 17:41:57.990711 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Sep 12 17:41:57.990781 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:41:57.991427 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:41:57.995304 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 12 17:41:57.995402 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 12 17:41:58.004866 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 17:41:58.005071 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:41:58.007585 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 17:41:58.007660 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 17:41:58.009393 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 17:41:58.009444 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:41:58.011571 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 17:41:58.011642 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:41:58.012849 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 17:41:58.012926 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 17:41:58.014005 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:41:58.014081 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:41:58.017280 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 17:41:58.017903 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 12 17:41:58.017980 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:41:58.021761 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 17:41:58.021838 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:41:58.022860 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 12 17:41:58.022927 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:41:58.023462 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 17:41:58.023541 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:41:58.025639 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:41:58.025704 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:41:58.030779 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 12 17:41:58.030875 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Sep 12 17:41:58.030928 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 12 17:41:58.030988 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 17:41:58.033661 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 17:41:58.033820 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 17:41:58.037808 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
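Names like `run-credentials-systemd\x2dresolved.service.mount` use systemd's unit-name escaping: `/` in the mount path becomes `-`, and a literal `-` inside a path component becomes `\x2d` (what `systemd-escape --path` produces). A small unescaping sketch:

```python
import re

def unescape_unit(name: str) -> str:
    """Reverse systemd's path escaping: '\\x2d' -> '-', then '-' -> '/'.

    Mount unit names encode the mount path, so
    run-credentials-systemd\\x2dresolved.service.mount refers to
    /run/credentials/systemd-resolved.service.
    """
    stem = name.rsplit(".", 1)[0]        # drop the ".mount" suffix
    parts = stem.split("-")              # '-' separates path components
    parts = [re.sub(r"\\x([0-9a-f]{2})",
                    lambda m: chr(int(m.group(1), 16)), p) for p in parts]
    return "/" + "/".join(parts)

print(unescape_unit(r"run-credentials-systemd\x2dresolved.service.mount"))
# -> /run/credentials/systemd-resolved.service
```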
Sep 12 17:41:58.037912 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 17:41:58.104552 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 17:41:58.104679 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 17:41:58.106113 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 17:41:58.106842 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 17:41:58.106922 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 17:41:58.108548 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 17:41:58.134486 systemd[1]: Switching root. Sep 12 17:41:58.179331 systemd-journald[207]: Journal stopped Sep 12 17:42:00.319823 systemd-journald[207]: Received SIGTERM from PID 1 (systemd). Sep 12 17:42:00.319934 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 17:42:00.319963 kernel: SELinux: policy capability open_perms=1 Sep 12 17:42:00.319981 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 17:42:00.320004 kernel: SELinux: policy capability always_check_network=0 Sep 12 17:42:00.320023 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 17:42:00.320047 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 17:42:00.320070 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 17:42:00.320088 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 17:42:00.320106 kernel: SELinux: policy capability userspace_initial_context=0 Sep 12 17:42:00.320125 kernel: audit: type=1403 audit(1757698918.646:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 17:42:00.320144 systemd[1]: Successfully loaded SELinux policy in 97.939ms. Sep 12 17:42:00.320176 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.271ms. Sep 12 17:42:00.320202 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 17:42:00.320223 systemd[1]: Detected virtualization amazon. Sep 12 17:42:00.320242 systemd[1]: Detected architecture x86-64. Sep 12 17:42:00.320262 systemd[1]: Detected first boot. Sep 12 17:42:00.320282 systemd[1]: Initializing machine ID from VM UUID. Sep 12 17:42:00.320302 zram_generator::config[1443]: No configuration found. Sep 12 17:42:00.320323 kernel: Guest personality initialized and is inactive Sep 12 17:42:00.320342 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 12 17:42:00.320364 kernel: Initialized host personality Sep 12 17:42:00.320384 kernel: NET: Registered PF_VSOCK protocol family Sep 12 17:42:00.320402 systemd[1]: Populated /etc with preset unit settings. Sep 12 17:42:00.320424 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 12 17:42:00.320445 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 17:42:00.320466 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 17:42:00.320487 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 12 17:42:00.322562 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 17:42:00.322619 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
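The long `+PAM +AUDIT +SELINUX -APPARMOR ...` string is systemd's compile-time feature list: `+` marks features this build enables, `-` those compiled out. Splitting it makes the build profile easy to scan:

```python
# Feature string exactly as printed by systemd 256.8 above.
features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT "
            "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
            "+IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK "
            "+PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ "
            "+ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT "
            "+LIBARCHIVE")

enabled = sorted(f[1:] for f in features.split() if f.startswith("+"))
disabled = sorted(f[1:] for f in features.split() if f.startswith("-"))
print(f"enabled ({len(enabled)}):", ", ".join(enabled))
print(f"disabled ({len(disabled)}):", ", ".join(disabled))
```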
Sep 12 17:42:00.322648 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 17:42:00.322668 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 17:42:00.322687 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 17:42:00.322705 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 17:42:00.322723 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 17:42:00.322741 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 17:42:00.322760 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:42:00.322785 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:42:00.322804 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 17:42:00.322826 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 17:42:00.322847 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 17:42:00.322867 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:42:00.322886 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 12 17:42:00.322905 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:42:00.322924 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:42:00.322943 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 17:42:00.322964 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 17:42:00.322983 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 17:42:00.323001 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 17:42:00.323021 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:42:00.323039 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:42:00.323057 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:42:00.323074 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:42:00.323092 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 17:42:00.323111 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 17:42:00.323133 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 12 17:42:00.323153 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:42:00.323173 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:42:00.323193 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:42:00.323213 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 17:42:00.323233 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 17:42:00.323253 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 17:42:00.323273 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 17:42:00.323293 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 12 17:42:00.323316 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 17:42:00.323336 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 17:42:00.323358 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 17:42:00.323380 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 17:42:00.323400 systemd[1]: Reached target machines.target - Containers. Sep 12 17:42:00.323421 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 17:42:00.323441 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:42:00.323462 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:42:00.323482 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 17:42:00.333560 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:42:00.333607 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:42:00.333627 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:42:00.333646 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 17:42:00.333665 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:42:00.333686 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 17:42:00.333705 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 17:42:00.333724 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 17:42:00.333751 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 17:42:00.333770 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 17:42:00.333791 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:42:00.333810 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:42:00.333829 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:42:00.333851 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 17:42:00.333873 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 17:42:00.333890 kernel: loop: module loaded Sep 12 17:42:00.333910 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 12 17:42:00.333929 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:42:00.333949 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 17:42:00.333970 systemd[1]: Stopped verity-setup.service. Sep 12 17:42:00.333990 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:42:00.334014 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 17:42:00.334035 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
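Each `modprobe@.service` instance above loads one kernel module (configfs, dm_mod, drm, efi_pstore, fuse, loop). On a live Linux system you can confirm which of them ended up as loadable modules by reading `/proc/modules`; a sketch (modules built into the kernel will not appear there, which this accounts for):

```python
def loaded_modules() -> set[str]:
    """Return the names of currently loaded kernel modules (Linux only)."""
    with open("/proc/modules") as f:
        return {line.split()[0] for line in f}

wanted = {"dm_mod", "drm", "efi_pstore", "fuse", "loop", "configfs"}
present = wanted & loaded_modules()
print("loaded as modules:", sorted(present))
print("absent (or built into the kernel):", sorted(wanted - present))
```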
Sep 12 17:42:00.334056 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 17:42:00.334076 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 17:42:00.334095 kernel: fuse: init (API version 7.41) Sep 12 17:42:00.334115 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 17:42:00.334139 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 17:42:00.334158 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:42:00.334178 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 17:42:00.334199 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 17:42:00.334220 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:42:00.334241 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:42:00.334261 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:42:00.334280 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:42:00.334301 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:42:00.334325 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:42:00.334345 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 17:42:00.334366 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 17:42:00.334387 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 17:42:00.334407 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 17:42:00.334427 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:42:00.334448 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:42:00.334469 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:42:00.334489 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 17:42:00.334551 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 12 17:42:00.334573 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 17:42:00.334593 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 17:42:00.334612 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 17:42:00.334635 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:42:00.334653 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 12 17:42:00.334671 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 17:42:00.334735 systemd-journald[1523]: Collecting audit messages is disabled. Sep 12 17:42:00.334773 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:42:00.334793 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 17:42:00.334812 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:42:00.334833 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Sep 12 17:42:00.335249 systemd-journald[1523]: Journal started Sep 12 17:42:00.335301 systemd-journald[1523]: Runtime Journal (/run/log/journal/ec217ad10dbb4906a5a675ac00e6fd0f) is 4.8M, max 38.4M, 33.6M free. Sep 12 17:41:59.892048 systemd[1]: Queued start job for default target multi-user.target. Sep 12 17:41:59.901390 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Sep 12 17:41:59.901863 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 17:42:00.350727 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:42:00.398610 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 17:42:00.398706 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:42:00.389369 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:42:00.420880 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 17:42:00.423240 kernel: ACPI: bus type drm_connector registered Sep 12 17:42:00.427812 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 17:42:00.430366 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:42:00.430761 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:42:00.432004 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 17:42:00.434322 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 17:42:00.439490 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 12 17:42:00.451294 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 17:42:00.452778 systemd-tmpfiles[1547]: ACLs are not supported, ignoring. Sep 12 17:42:00.452801 systemd-tmpfiles[1547]: ACLs are not supported, ignoring. Sep 12 17:42:00.456837 systemd-journald[1523]: Time spent on flushing to /var/log/journal/ec217ad10dbb4906a5a675ac00e6fd0f is 116.253ms for 1027 entries. Sep 12 17:42:00.456837 systemd-journald[1523]: System Journal (/var/log/journal/ec217ad10dbb4906a5a675ac00e6fd0f) is 8M, max 195.6M, 187.6M free. Sep 12 17:42:00.591991 systemd-journald[1523]: Received client request to flush runtime journal. Sep 12 17:42:00.592068 kernel: loop0: detected capacity change from 0 to 224512 Sep 12 17:42:00.592106 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 17:42:00.474560 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:42:00.482440 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:42:00.489946 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 17:42:00.545620 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:42:00.593857 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 17:42:00.596816 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 12 17:42:00.601837 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 17:42:00.608689 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:42:00.621527 kernel: loop1: detected capacity change from 0 to 128016 Sep 12 17:42:00.639395 systemd-tmpfiles[1597]: ACLs are not supported, ignoring. 
Sep 12 17:42:00.640170 systemd-tmpfiles[1597]: ACLs are not supported, ignoring. Sep 12 17:42:00.653752 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:42:00.778241 kernel: loop2: detected capacity change from 0 to 72360 Sep 12 17:42:00.918648 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 17:42:01.040333 kernel: loop3: detected capacity change from 0 to 111000 Sep 12 17:42:01.412241 kernel: loop4: detected capacity change from 0 to 224512 Sep 12 17:42:01.612938 kernel: loop5: detected capacity change from 0 to 128016 Sep 12 17:42:01.754313 kernel: loop6: detected capacity change from 0 to 72360 Sep 12 17:42:01.901312 kernel: loop7: detected capacity change from 0 to 111000 Sep 12 17:42:02.123530 (sd-merge)[1603]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Sep 12 17:42:02.130188 (sd-merge)[1603]: Merged extensions into '/usr'. Sep 12 17:42:02.139010 systemd[1]: Reload requested from client PID 1557 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 17:42:02.139028 systemd[1]: Reloading... Sep 12 17:42:02.377715 zram_generator::config[1632]: No configuration found. Sep 12 17:42:02.973757 systemd[1]: Reloading finished in 833 ms. Sep 12 17:42:03.033788 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 17:42:03.073744 systemd[1]: Starting ensure-sysext.service... Sep 12 17:42:03.080818 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:42:03.147431 systemd[1]: Reload requested from client PID 1680 ('systemctl') (unit ensure-sysext.service)... Sep 12 17:42:03.148498 systemd[1]: Reloading... Sep 12 17:42:03.177538 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 12 17:42:03.177610 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 12 17:42:03.178449 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 17:42:03.178902 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 17:42:03.187972 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 17:42:03.188941 systemd-tmpfiles[1681]: ACLs are not supported, ignoring. Sep 12 17:42:03.189045 systemd-tmpfiles[1681]: ACLs are not supported, ignoring. Sep 12 17:42:03.199297 systemd-tmpfiles[1681]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:42:03.202830 systemd-tmpfiles[1681]: Skipping /boot Sep 12 17:42:03.224678 systemd-tmpfiles[1681]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:42:03.224696 systemd-tmpfiles[1681]: Skipping /boot Sep 12 17:42:03.387567 ldconfig[1548]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 17:42:03.396525 zram_generator::config[1712]: No configuration found. Sep 12 17:42:04.104475 systemd[1]: Reloading finished in 955 ms. Sep 12 17:42:04.139420 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 17:42:04.144789 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 17:42:04.214965 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
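The `loop0`..`loop7` lines come from systemd-sysext attaching the extension images; assuming the kernel's "detected capacity change" figure is in the usual 512-byte sectors, the four distinct sizes convert as below, and the repeated sizes on loop4-7 are consistent with the same images being attached a second time around the reload:

```python
SECTOR = 512  # assumed unit of the kernel's "detected capacity change" value

capacities = {  # taken from the loop0..loop3 lines above
    "loop0": 224512, "loop1": 128016, "loop2": 72360, "loop3": 111000,
}

for dev, sectors in capacities.items():
    size = sectors * SECTOR
    print(f"{dev}: {sectors} sectors = {size / 2**20:.1f} MiB")
# loop4..loop7 report the same four sizes again.
```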
Sep 12 17:42:04.255405 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 17:42:04.261840 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 17:42:04.266753 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 17:42:04.273818 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:42:04.279950 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:42:04.297111 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 17:42:04.318952 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:42:04.320221 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:42:04.336707 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:42:04.354870 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:42:04.363393 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:42:04.368939 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:42:04.369286 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:42:04.369447 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:42:04.408472 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:42:04.417034 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:42:04.417312 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:42:04.417458 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:42:04.423768 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:42:04.464311 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:42:04.468011 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:42:04.489639 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:42:04.494486 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:42:04.495595 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
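Many of the "was skipped because of an unmet condition check" lines are declarative unit conditions (`ConditionVirtualization=xen`, `ConditionPathExists=...`, with `!` negating). An illustrative predicate table in Python — not systemd's implementation, just the evaluation idea:

```python
import os

def virtualization() -> str:
    # Stand-in for systemd-detect-virt; this transcript runs on a KVM guest.
    return "kvm"

CONDITIONS = {
    # condition name -> predicate over its argument string
    "ConditionVirtualization": lambda arg: virtualization() == arg,
    "ConditionPathExists": lambda arg: (not os.path.exists(arg[1:])
                                        if arg.startswith("!")
                                        else os.path.exists(arg)),
}

def should_skip(condition: str, arg: str) -> bool:
    """A unit is skipped when its condition evaluates false."""
    return not CONDITIONS[condition](arg)

# proc-xen.mount carries ConditionVirtualization=xen, so on this KVM guest:
print(should_skip("ConditionVirtualization", "xen"))  # True -> "was skipped"
```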
Sep 12 17:42:04.495854 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 17:42:04.501044 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:42:04.520163 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:42:04.526694 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:42:04.540086 systemd[1]: Finished ensure-sysext.service. Sep 12 17:42:04.587241 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:42:04.603348 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 17:42:04.611957 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 17:42:04.622910 systemd-udevd[1768]: Using default interface naming scheme 'v255'. Sep 12 17:42:04.638702 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:42:04.638998 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:42:04.650379 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:42:04.651064 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:42:04.653405 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:42:04.665901 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:42:04.666256 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:42:04.699139 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 17:42:04.702083 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 17:42:04.727862 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 17:42:04.732352 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 17:42:04.734517 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 17:42:04.737108 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 17:42:04.741058 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:42:04.752978 augenrules[1810]: No rules Sep 12 17:42:04.752804 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:42:04.758769 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:42:04.759104 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 17:42:04.874084 systemd-resolved[1767]: Positive Trust Anchors: Sep 12 17:42:04.874549 systemd-resolved[1767]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:42:04.874622 systemd-resolved[1767]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:42:04.887365 systemd-resolved[1767]: Defaulting to hostname 'linux'. Sep 12 17:42:04.892444 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:42:04.893714 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:42:04.895897 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:42:04.898790 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 17:42:04.900649 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 17:42:04.901276 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 12 17:42:04.902049 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 17:42:04.903723 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 17:42:04.904933 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 17:42:04.906867 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 17:42:04.906914 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:42:04.907584 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:42:04.910312 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 17:42:04.916944 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 17:42:04.925613 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 12 17:42:04.927812 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 12 17:42:04.929669 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 12 17:42:04.942654 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 17:42:04.943796 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 12 17:42:04.947291 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 17:42:04.954182 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:42:04.955387 systemd-networkd[1813]: lo: Link UP Sep 12 17:42:04.956622 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:42:04.956951 systemd-networkd[1813]: lo: Gained carrier Sep 12 17:42:04.957284 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:42:04.957322 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
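The positive trust anchor `. IN DS 20326 8 2 e06d44b8...` is the DNSSEC root-zone key systemd-resolved validates against: key tag 20326 (the 2017 root KSK), algorithm 8 (RSA/SHA-256), digest type 2 (a SHA-256 digest of the DNSKEY). A tiny field parse:

```python
record = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

owner, _cls, _type, key_tag, algorithm, digest_type, digest = record.split()
print(f"owner={owner!r} key_tag={key_tag} "
      f"algorithm={algorithm} (8 = RSA/SHA-256) "
      f"digest_type={digest_type} (2 = SHA-256), "
      f"digest is {len(digest) * 4} bits")
```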
Sep 12 17:42:04.958691 systemd-networkd[1813]: Enumeration completed Sep 12 17:42:04.960787 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 12 17:42:04.965784 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 17:42:04.970861 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 17:42:04.980794 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 17:42:04.989813 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 17:42:04.991600 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 17:42:04.996810 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 12 17:42:05.006814 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 17:42:05.013823 systemd[1]: Started ntpd.service - Network Time Service. Sep 12 17:42:05.020517 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 17:42:05.024696 extend-filesystems[1848]: Found /dev/nvme0n1p6 Sep 12 17:42:05.030612 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 12 17:42:05.044270 jq[1847]: false Sep 12 17:42:05.045349 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 17:42:05.050319 extend-filesystems[1848]: Found /dev/nvme0n1p9 Sep 12 17:42:05.051219 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 17:42:05.099070 extend-filesystems[1848]: Checking size of /dev/nvme0n1p9 Sep 12 17:42:05.091968 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 17:42:05.096223 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 17:42:05.097370 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 17:42:05.100786 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 17:42:05.114768 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 17:42:05.117852 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:42:05.120178 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 17:42:05.122218 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 17:42:05.122496 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 17:42:05.129286 oslogin_cache_refresh[1849]: Refreshing passwd entry cache Sep 12 17:42:05.134757 google_oslogin_nss_cache[1849]: oslogin_cache_refresh[1849]: Refreshing passwd entry cache Sep 12 17:42:05.149423 systemd[1]: Reached target network.target - Network. Sep 12 17:42:05.150036 oslogin_cache_refresh[1849]: Failure getting users, quitting Sep 12 17:42:05.152654 google_oslogin_nss_cache[1849]: oslogin_cache_refresh[1849]: Failure getting users, quitting Sep 12 17:42:05.152654 google_oslogin_nss_cache[1849]: oslogin_cache_refresh[1849]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Sep 12 17:42:05.152654 google_oslogin_nss_cache[1849]: oslogin_cache_refresh[1849]: Refreshing group entry cache Sep 12 17:42:05.150061 oslogin_cache_refresh[1849]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 12 17:42:05.150118 oslogin_cache_refresh[1849]: Refreshing group entry cache Sep 12 17:42:05.155974 google_oslogin_nss_cache[1849]: oslogin_cache_refresh[1849]: Failure getting groups, quitting Sep 12 17:42:05.155974 google_oslogin_nss_cache[1849]: oslogin_cache_refresh[1849]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 12 17:42:05.155183 oslogin_cache_refresh[1849]: Failure getting groups, quitting Sep 12 17:42:05.155201 oslogin_cache_refresh[1849]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 12 17:42:05.163890 jq[1869]: true Sep 12 17:42:05.159251 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 17:42:05.164909 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 12 17:42:05.174665 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 17:42:05.178218 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 12 17:42:05.178524 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 12 17:42:05.179588 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 17:42:05.179838 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 17:42:05.184527 extend-filesystems[1848]: Resized partition /dev/nvme0n1p9 Sep 12 17:42:05.197561 extend-filesystems[1889]: resize2fs 1.47.2 (1-Jan-2025) Sep 12 17:42:05.207373 update_engine[1867]: I20250912 17:42:05.207281 1867 main.cc:92] Flatcar Update Engine starting Sep 12 17:42:05.222549 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 12 17:42:05.238166 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 17:42:05.244726 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 17:42:05.268163 jq[1885]: true Sep 12 17:42:05.288005 tar[1871]: linux-amd64/LICENSE Sep 12 17:42:05.288005 tar[1871]: linux-amd64/helm Sep 12 17:42:05.320535 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 12 17:42:05.328727 extend-filesystems[1889]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 12 17:42:05.328727 extend-filesystems[1889]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 17:42:05.328727 extend-filesystems[1889]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 12 17:42:05.348295 extend-filesystems[1848]: Resized filesystem in /dev/nvme0n1p9 Sep 12 17:42:05.338332 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 17:42:05.343082 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 17:42:05.345104 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 12 17:42:05.349688 dbus-daemon[1845]: [system] SELinux support is enabled Sep 12 17:42:05.349898 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 17:42:05.355786 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
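The resize2fs messages above grow the root filesystem online from 553472 to 1489915 blocks at the ext4 block size of 4 KiB reported by the kernel; worked out, that is roughly 2.1 GiB before and 5.7 GiB after:

```python
# Exact block counts from the resize2fs/EXT4-fs lines above; 4 KiB blocks.
BLOCK = 4096
old_blocks, new_blocks = 553_472, 1_489_915

def gib(blocks: int) -> float:
    return blocks * BLOCK / 2**30

print(f"before: {gib(old_blocks):.2f} GiB")  # ~2.11 GiB
print(f"after:  {gib(new_blocks):.2f} GiB")  # ~5.68 GiB
```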
Sep 12 17:42:05.355830 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 17:42:05.356888 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 17:42:05.356924 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 17:42:05.358696 (ntainerd)[1909]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 17:42:05.364574 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 12 17:42:05.380928 (udev-worker)[1836]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:42:05.397289 systemd[1]: Started update-engine.service - Update Engine. Sep 12 17:42:05.397811 update_engine[1867]: I20250912 17:42:05.397746 1867 update_check_scheduler.cc:74] Next update check in 9m28s Sep 12 17:42:05.406428 systemd-networkd[1813]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:42:05.406440 systemd-networkd[1813]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:42:05.411902 systemd-networkd[1813]: eth0: Link UP Sep 12 17:42:05.412209 systemd-networkd[1813]: eth0: Gained carrier Sep 12 17:42:05.412248 systemd-networkd[1813]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:42:05.415468 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 17:42:05.435538 systemd-networkd[1813]: eth0: DHCPv4 address 172.31.19.53/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 12 17:42:05.436385 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 12 17:42:05.439876 dbus-daemon[1845]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1813 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 12 17:42:05.449165 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 12 17:42:05.532486 bash[1937]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:42:05.532575 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 17:42:05.543802 systemd[1]: Starting sshkeys.service... 
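The DHCPv4 lease logged above (172.31.19.53/20, gateway 172.31.16.1) can be sanity-checked with the standard library; the /20 prefix puts both the address and the gateway inside 172.31.16.0/20:

```python
import ipaddress

# Values taken from the systemd-networkd DHCPv4 line above.
iface = ipaddress.ip_interface("172.31.19.53/20")
net = iface.network
print(net)                                          # 172.31.16.0/20
print(net.num_addresses)                            # 4096
print(ipaddress.ip_address("172.31.16.1") in net)   # True: gateway is on-link
```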
Sep 12 17:42:05.555710 coreos-metadata[1844]: Sep 12 17:42:05.550 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 12 17:42:05.559532 coreos-metadata[1844]: Sep 12 17:42:05.558 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 12 17:42:05.571246 coreos-metadata[1844]: Sep 12 17:42:05.570 INFO Fetch successful Sep 12 17:42:05.571246 coreos-metadata[1844]: Sep 12 17:42:05.570 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 12 17:42:05.572959 coreos-metadata[1844]: Sep 12 17:42:05.572 INFO Fetch successful Sep 12 17:42:05.573110 coreos-metadata[1844]: Sep 12 17:42:05.573 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 12 17:42:05.573647 coreos-metadata[1844]: Sep 12 17:42:05.573 INFO Fetch successful Sep 12 17:42:05.573700 coreos-metadata[1844]: Sep 12 17:42:05.573 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 12 17:42:05.576355 coreos-metadata[1844]: Sep 12 17:42:05.576 INFO Fetch successful Sep 12 17:42:05.576477 coreos-metadata[1844]: Sep 12 17:42:05.576 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 12 17:42:05.577968 coreos-metadata[1844]: Sep 12 17:42:05.577 INFO Fetch failed with 404: resource not found Sep 12 17:42:05.579292 coreos-metadata[1844]: Sep 12 17:42:05.579 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 12 17:42:05.588818 coreos-metadata[1844]: Sep 12 17:42:05.588 INFO Fetch successful Sep 12 17:42:05.589034 coreos-metadata[1844]: Sep 12 17:42:05.589 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 12 17:42:05.590290 coreos-metadata[1844]: Sep 12 17:42:05.590 INFO Fetch successful Sep 12 17:42:05.590290 coreos-metadata[1844]: Sep 12 17:42:05.590 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 12 17:42:05.591812 coreos-metadata[1844]: Sep 12 17:42:05.591 INFO Fetch successful Sep 12 17:42:05.591933 coreos-metadata[1844]: Sep 12 17:42:05.591 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 12 17:42:05.594706 coreos-metadata[1844]: Sep 12 17:42:05.594 INFO Fetch successful Sep 12 17:42:05.594706 coreos-metadata[1844]: Sep 12 17:42:05.594 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 12 17:42:05.597228 coreos-metadata[1844]: Sep 12 17:42:05.597 INFO Fetch successful Sep 12 17:42:05.646074 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 12 17:42:05.651940 ntpd[1851]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 14:59:08 UTC 2025 (1): Starting Sep 12 17:42:05.656529 ntpd[1851]: 12 Sep 17:42:05 ntpd[1851]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 14:59:08 UTC 2025 (1): Starting Sep 12 17:42:05.656529 ntpd[1851]: 12 Sep 17:42:05 ntpd[1851]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 12 17:42:05.656529 ntpd[1851]: 12 Sep 17:42:05 ntpd[1851]: ---------------------------------------------------- Sep 12 17:42:05.656529 ntpd[1851]: 12 Sep 17:42:05 ntpd[1851]: ntp-4 is maintained by Network Time Foundation, Sep 12 17:42:05.656529 ntpd[1851]: 12 Sep 17:42:05 ntpd[1851]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 12 17:42:05.656529 ntpd[1851]: 12 Sep 17:42:05 ntpd[1851]: corporation. 
Support and training for ntp-4 are Sep 12 17:42:05.656529 ntpd[1851]: 12 Sep 17:42:05 ntpd[1851]: available at https://www.nwtime.org/support Sep 12 17:42:05.656529 ntpd[1851]: 12 Sep 17:42:05 ntpd[1851]: ---------------------------------------------------- Sep 12 17:42:05.651972 ntpd[1851]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 12 17:42:05.651984 ntpd[1851]: ---------------------------------------------------- Sep 12 17:42:05.651993 ntpd[1851]: ntp-4 is maintained by Network Time Foundation, Sep 12 17:42:05.652002 ntpd[1851]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 12 17:42:05.652011 ntpd[1851]: corporation. Support and training for ntp-4 are Sep 12 17:42:05.652020 ntpd[1851]: available at https://www.nwtime.org/support Sep 12 17:42:05.652030 ntpd[1851]: ---------------------------------------------------- Sep 12 17:42:05.658293 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 12 17:42:05.686227 ntpd[1851]: proto: precision = 0.096 usec (-23) Sep 12 17:42:05.695684 ntpd[1851]: 12 Sep 17:42:05 ntpd[1851]: proto: precision = 0.096 usec (-23) Sep 12 17:42:05.695684 ntpd[1851]: 12 Sep 17:42:05 ntpd[1851]: basedate set to 2025-08-31 Sep 12 17:42:05.695684 ntpd[1851]: 12 Sep 17:42:05 ntpd[1851]: gps base set to 2025-08-31 (week 2382) Sep 12 17:42:05.695684 ntpd[1851]: 12 Sep 17:42:05 ntpd[1851]: Listen and drop on 0 v6wildcard [::]:123 Sep 12 17:42:05.695684 ntpd[1851]: 12 Sep 17:42:05 ntpd[1851]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 12 17:42:05.695684 ntpd[1851]: 12 Sep 17:42:05 ntpd[1851]: Listen normally on 2 lo 127.0.0.1:123 Sep 12 17:42:05.695684 ntpd[1851]: 12 Sep 17:42:05 ntpd[1851]: Listen normally on 3 eth0 172.31.19.53:123 Sep 12 17:42:05.695684 ntpd[1851]: 12 Sep 17:42:05 ntpd[1851]: Listen normally on 4 lo [::1]:123 Sep 12 17:42:05.695684 ntpd[1851]: 12 Sep 17:42:05 ntpd[1851]: bind(21) AF_INET6 fe80::4fe:36ff:fe95:4657%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 17:42:05.695684 ntpd[1851]: 12 Sep 17:42:05 ntpd[1851]: unable to create socket on eth0 (5) for fe80::4fe:36ff:fe95:4657%2#123 Sep 12 17:42:05.695684 ntpd[1851]: 12 Sep 17:42:05 ntpd[1851]: failed to init interface for address fe80::4fe:36ff:fe95:4657%2 Sep 12 17:42:05.695684 ntpd[1851]: 12 Sep 17:42:05 ntpd[1851]: Listening on routing socket on fd #21 for interface updates Sep 12 17:42:05.687772 ntpd[1851]: basedate set to 2025-08-31 Sep 12 17:42:05.687795 ntpd[1851]: gps base set to 2025-08-31 (week 2382) Sep 12 17:42:05.691228 ntpd[1851]: Listen and drop on 0 v6wildcard [::]:123 Sep 12 17:42:05.691283 ntpd[1851]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 12 17:42:05.691468 ntpd[1851]: Listen normally on 2 lo 127.0.0.1:123 Sep 12 17:42:05.694591 ntpd[1851]: Listen normally on 3 eth0 172.31.19.53:123 Sep 12 17:42:05.694671 ntpd[1851]: Listen normally on 4 lo [::1]:123 Sep 12 17:42:05.694730 ntpd[1851]: bind(21) AF_INET6 fe80::4fe:36ff:fe95:4657%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 17:42:05.694751 ntpd[1851]: unable to create socket on eth0 (5) for fe80::4fe:36ff:fe95:4657%2#123 Sep 12 17:42:05.694765 ntpd[1851]: failed to init interface for address fe80::4fe:36ff:fe95:4657%2 Sep 12 17:42:05.694801 ntpd[1851]: Listening on routing socket on fd #21 for interface updates Sep 12 17:42:05.726922 ntpd[1851]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:42:05.730702 ntpd[1851]: 12 Sep 17:42:05 ntpd[1851]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 
17:42:05.730702 ntpd[1851]: 12 Sep 17:42:05 ntpd[1851]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:42:05.726965 ntpd[1851]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:42:05.779540 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 12 17:42:05.779616 kernel: ACPI: button: Power Button [PWRF] Sep 12 17:42:05.783534 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Sep 12 17:42:05.784860 kernel: ACPI: button: Sleep Button [SLPF] Sep 12 17:42:05.810530 kernel: mousedev: PS/2 mouse device common for all mice Sep 12 17:42:05.815284 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 12 17:42:05.817422 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 17:42:05.852572 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Sep 12 17:42:05.938735 locksmithd[1927]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 17:42:05.976738 systemd-logind[1861]: New seat seat0. Sep 12 17:42:05.980639 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 17:42:06.005296 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 17:42:06.036805 coreos-metadata[1954]: Sep 12 17:42:06.036 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 12 17:42:06.039909 coreos-metadata[1954]: Sep 12 17:42:06.039 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 12 17:42:06.040930 coreos-metadata[1954]: Sep 12 17:42:06.040 INFO Fetch successful Sep 12 17:42:06.040930 coreos-metadata[1954]: Sep 12 17:42:06.040 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 12 17:42:06.048263 coreos-metadata[1954]: Sep 12 17:42:06.046 INFO Fetch successful Sep 12 17:42:06.050702 unknown[1954]: wrote ssh authorized keys file for user: core Sep 12 17:42:06.115811 sshd_keygen[1894]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 17:42:06.131777 update-ssh-keys[2028]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:42:06.135369 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 12 17:42:06.141017 systemd[1]: Finished sshkeys.service. Sep 12 17:42:06.180988 containerd[1909]: time="2025-09-12T17:42:06Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 12 17:42:06.182182 containerd[1909]: time="2025-09-12T17:42:06.182135851Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 12 17:42:06.191740 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 12 17:42:06.196080 dbus-daemon[1845]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 12 17:42:06.199912 dbus-daemon[1845]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1936 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 12 17:42:06.208915 systemd[1]: Starting polkit.service - Authorization Manager... 
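The metadata fetches above follow the IMDSv2 pattern: a PUT to mint a session token, then GETs that present the token in a header; the lone 404 on the ipv6 path just means the instance has no IPv6 address assigned. A minimal sketch of the same exchange using only the standard library (the endpoint and header names are the documented IMDSv2 ones):

```python
import urllib.request

BASE = "http://169.254.169.254"

def imds_token(ttl: int = 21600) -> str:
    """PUT /latest/api/token to obtain an IMDSv2 session token."""
    req = urllib.request.Request(
        f"{BASE}/latest/api/token", method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)})
    with urllib.request.urlopen(req, timeout=2) as r:
        return r.read().decode()

def imds_get(path: str, token: str) -> str:
    """GET a metadata path, presenting the session token."""
    req = urllib.request.Request(
        f"{BASE}/{path}", headers={"X-aws-ec2-metadata-token": token})
    with urllib.request.urlopen(req, timeout=2) as r:
        return r.read().decode()

# e.g. imds_get("2021-01-03/meta-data/instance-id", imds_token())
```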
Sep 12 17:42:06.234753 containerd[1909]: time="2025-09-12T17:42:06.234700004Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.939µs" Sep 12 17:42:06.235791 containerd[1909]: time="2025-09-12T17:42:06.235750220Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 12 17:42:06.235912 containerd[1909]: time="2025-09-12T17:42:06.235895475Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 12 17:42:06.236173 containerd[1909]: time="2025-09-12T17:42:06.236149675Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 12 17:42:06.236259 containerd[1909]: time="2025-09-12T17:42:06.236243965Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 12 17:42:06.236343 containerd[1909]: time="2025-09-12T17:42:06.236329136Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 17:42:06.236499 containerd[1909]: time="2025-09-12T17:42:06.236477823Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 17:42:06.236607 containerd[1909]: time="2025-09-12T17:42:06.236588614Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 17:42:06.237707 containerd[1909]: time="2025-09-12T17:42:06.237673161Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 17:42:06.237859 containerd[1909]: time="2025-09-12T17:42:06.237839018Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 17:42:06.237946 containerd[1909]: time="2025-09-12T17:42:06.237929879Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 17:42:06.238036 containerd[1909]: time="2025-09-12T17:42:06.238020760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 12 17:42:06.238214 containerd[1909]: time="2025-09-12T17:42:06.238194829Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 12 17:42:06.239342 containerd[1909]: time="2025-09-12T17:42:06.239071380Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 17:42:06.239342 containerd[1909]: time="2025-09-12T17:42:06.239119845Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 17:42:06.239342 containerd[1909]: time="2025-09-12T17:42:06.239136653Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 12 17:42:06.240307 containerd[1909]: time="2025-09-12T17:42:06.240269350Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 12 17:42:06.240864 containerd[1909]: 
time="2025-09-12T17:42:06.240839698Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 12 17:42:06.241032 containerd[1909]: time="2025-09-12T17:42:06.241013652Z" level=info msg="metadata content store policy set" policy=shared Sep 12 17:42:06.247549 containerd[1909]: time="2025-09-12T17:42:06.245832247Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 12 17:42:06.247549 containerd[1909]: time="2025-09-12T17:42:06.245926391Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 12 17:42:06.247549 containerd[1909]: time="2025-09-12T17:42:06.245947670Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 12 17:42:06.247549 containerd[1909]: time="2025-09-12T17:42:06.246010118Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 12 17:42:06.247549 containerd[1909]: time="2025-09-12T17:42:06.246030026Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 12 17:42:06.247549 containerd[1909]: time="2025-09-12T17:42:06.246045548Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 12 17:42:06.247549 containerd[1909]: time="2025-09-12T17:42:06.246065418Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 12 17:42:06.247549 containerd[1909]: time="2025-09-12T17:42:06.246082663Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 12 17:42:06.247549 containerd[1909]: time="2025-09-12T17:42:06.246100217Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 12 17:42:06.247549 containerd[1909]: time="2025-09-12T17:42:06.246117784Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 12 17:42:06.247549 containerd[1909]: time="2025-09-12T17:42:06.246132416Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 12 17:42:06.247549 containerd[1909]: time="2025-09-12T17:42:06.246150051Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 12 17:42:06.247549 containerd[1909]: time="2025-09-12T17:42:06.246302122Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 12 17:42:06.247549 containerd[1909]: time="2025-09-12T17:42:06.246328541Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 12 17:42:06.248115 containerd[1909]: time="2025-09-12T17:42:06.246357394Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 12 17:42:06.248115 containerd[1909]: time="2025-09-12T17:42:06.246371936Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 12 17:42:06.248115 containerd[1909]: time="2025-09-12T17:42:06.246388392Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 12 17:42:06.248115 containerd[1909]: time="2025-09-12T17:42:06.246402478Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 12 17:42:06.248115 containerd[1909]: 
time="2025-09-12T17:42:06.246417328Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 12 17:42:06.248115 containerd[1909]: time="2025-09-12T17:42:06.246431953Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 12 17:42:06.248115 containerd[1909]: time="2025-09-12T17:42:06.246446866Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 12 17:42:06.248115 containerd[1909]: time="2025-09-12T17:42:06.246461372Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 12 17:42:06.248115 containerd[1909]: time="2025-09-12T17:42:06.246475805Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 12 17:42:06.248115 containerd[1909]: time="2025-09-12T17:42:06.247078632Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 12 17:42:06.248115 containerd[1909]: time="2025-09-12T17:42:06.247102691Z" level=info msg="Start snapshots syncer" Sep 12 17:42:06.252178 containerd[1909]: time="2025-09-12T17:42:06.250578404Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 12 17:42:06.252178 containerd[1909]: time="2025-09-12T17:42:06.251562776Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 12 17:42:06.252444 containerd[1909]: time="2025-09-12T17:42:06.251654034Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 12 17:42:06.258375 containerd[1909]: time="2025-09-12T17:42:06.258319831Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 
Sep 12 17:42:06.259696 containerd[1909]: time="2025-09-12T17:42:06.259653522Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 12 17:42:06.259850 containerd[1909]: time="2025-09-12T17:42:06.259833439Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 12 17:42:06.259926 containerd[1909]: time="2025-09-12T17:42:06.259911359Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 12 17:42:06.259994 containerd[1909]: time="2025-09-12T17:42:06.259981718Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 12 17:42:06.260057 containerd[1909]: time="2025-09-12T17:42:06.260043270Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 12 17:42:06.260120 containerd[1909]: time="2025-09-12T17:42:06.260108361Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 12 17:42:06.260200 containerd[1909]: time="2025-09-12T17:42:06.260187896Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 12 17:42:06.260290 containerd[1909]: time="2025-09-12T17:42:06.260275343Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 12 17:42:06.260367 containerd[1909]: time="2025-09-12T17:42:06.260348763Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 12 17:42:06.260434 containerd[1909]: time="2025-09-12T17:42:06.260422442Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 12 17:42:06.260607 containerd[1909]: time="2025-09-12T17:42:06.260567607Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 17:42:06.260961 containerd[1909]: time="2025-09-12T17:42:06.260720251Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 17:42:06.260961 containerd[1909]: time="2025-09-12T17:42:06.260743810Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 17:42:06.260961 containerd[1909]: time="2025-09-12T17:42:06.260759911Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 17:42:06.260961 containerd[1909]: time="2025-09-12T17:42:06.260771508Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 12 17:42:06.260961 containerd[1909]: time="2025-09-12T17:42:06.260795584Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 12 17:42:06.260961 containerd[1909]: time="2025-09-12T17:42:06.260811891Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 12 17:42:06.260961 containerd[1909]: time="2025-09-12T17:42:06.260833518Z" level=info msg="runtime interface created" Sep 12 17:42:06.260961 containerd[1909]: time="2025-09-12T17:42:06.260840778Z" level=info msg="created NRI interface" Sep 12 17:42:06.260961 containerd[1909]: time="2025-09-12T17:42:06.260853131Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 
Sep 12 17:42:06.260961 containerd[1909]: time="2025-09-12T17:42:06.260876843Z" level=info msg="Connect containerd service" Sep 12 17:42:06.260961 containerd[1909]: time="2025-09-12T17:42:06.260920097Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:42:06.263010 containerd[1909]: time="2025-09-12T17:42:06.262632283Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:42:06.312912 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 17:42:06.317450 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 17:42:06.322147 systemd[1]: Started sshd@0-172.31.19.53:22-139.178.68.195:40018.service - OpenSSH per-connection server daemon (139.178.68.195:40018). Sep 12 17:42:06.353022 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 12 17:42:06.392355 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 17:42:06.393705 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 17:42:06.414190 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 17:42:06.421101 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 17:42:06.499158 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 17:42:06.508942 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 17:42:06.516003 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 17:42:06.516937 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 17:42:06.551150 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:42:06.573021 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
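The one error in containerd's startup, `no network config found in /etc/cni/net.d`, is expected on a node that has not yet joined a cluster: a network add-on installed later drops a CNI config file into that directory. A quick check of the condition the CRI plugin is reporting (libcni loads .conf, .conflist, and .json files):

```python
from pathlib import Path

def cni_configured(conf_dir: str = "/etc/cni/net.d") -> bool:
    """True once at least one CNI network config file is present."""
    p = Path(conf_dir)
    return p.is_dir() and any(
        f.suffix in (".conf", ".conflist", ".json") for f in p.iterdir())
```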
Sep 12 17:42:06.625477 systemd-logind[1861]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 12 17:42:06.632649 systemd-logind[1861]: Watching system buttons on /dev/input/event2 (Power Button) Sep 12 17:42:06.652424 ntpd[1851]: bind(24) AF_INET6 fe80::4fe:36ff:fe95:4657%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 17:42:06.652909 ntpd[1851]: 12 Sep 17:42:06 ntpd[1851]: bind(24) AF_INET6 fe80::4fe:36ff:fe95:4657%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 17:42:06.652909 ntpd[1851]: 12 Sep 17:42:06 ntpd[1851]: unable to create socket on eth0 (6) for fe80::4fe:36ff:fe95:4657%2#123 Sep 12 17:42:06.652909 ntpd[1851]: 12 Sep 17:42:06 ntpd[1851]: failed to init interface for address fe80::4fe:36ff:fe95:4657%2 Sep 12 17:42:06.652464 ntpd[1851]: unable to create socket on eth0 (6) for fe80::4fe:36ff:fe95:4657%2#123 Sep 12 17:42:06.652480 ntpd[1851]: failed to init interface for address fe80::4fe:36ff:fe95:4657%2 Sep 12 17:42:06.655247 systemd-logind[1861]: Watching system buttons on /dev/input/event3 (Sleep Button) Sep 12 17:42:06.688993 containerd[1909]: time="2025-09-12T17:42:06.688901677Z" level=info msg="Start subscribing containerd event" Sep 12 17:42:06.689212 containerd[1909]: time="2025-09-12T17:42:06.689177086Z" level=info msg="Start recovering state" Sep 12 17:42:06.689596 containerd[1909]: time="2025-09-12T17:42:06.689572527Z" level=info msg="Start event monitor" Sep 12 17:42:06.689701 containerd[1909]: time="2025-09-12T17:42:06.689686839Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:42:06.689780 containerd[1909]: time="2025-09-12T17:42:06.689767972Z" level=info msg="Start streaming server" Sep 12 17:42:06.689852 containerd[1909]: time="2025-09-12T17:42:06.689840189Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 12 17:42:06.690058 containerd[1909]: time="2025-09-12T17:42:06.690035821Z" level=info msg="runtime interface starting up..." Sep 12 17:42:06.690137 containerd[1909]: time="2025-09-12T17:42:06.690124951Z" level=info msg="starting plugins..." Sep 12 17:42:06.690212 containerd[1909]: time="2025-09-12T17:42:06.690199701Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 12 17:42:06.692087 containerd[1909]: time="2025-09-12T17:42:06.691928696Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:42:06.692301 containerd[1909]: time="2025-09-12T17:42:06.692280240Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 17:42:06.693362 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 17:42:06.695612 containerd[1909]: time="2025-09-12T17:42:06.695119427Z" level=info msg="containerd successfully booted in 0.515378s" Sep 12 17:42:06.799584 sshd[2069]: Accepted publickey for core from 139.178.68.195 port 40018 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:42:06.810119 sshd-session[2069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:42:06.822974 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:42:06.843394 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:42:06.849453 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:42:06.890851 systemd-logind[1861]: New session 1 of user core. Sep 12 17:42:06.943972 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
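The `SHA256:W4YS...` value in the `Accepted publickey` line is the standard OpenSSH fingerprint: an unpadded base64 encoding of the SHA-256 digest of the raw public-key blob. A sketch of the derivation:

```python
import base64
import hashlib

def ssh_fingerprint(authorized_key_line: str) -> str:
    """Compute the OpenSSH-style SHA256 fingerprint of a public key line
    such as 'ssh-rsa AAAAB3... comment'."""
    blob = base64.b64decode(authorized_key_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# e.g. ssh_fingerprint("ssh-rsa AAAAB3NzaC1yc2E... core@host")
```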
Sep 12 17:42:06.962315 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 17:42:06.993287 (systemd)[2165]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:42:06.998943 systemd-logind[1861]: New session c1 of user core. Sep 12 17:42:07.111947 tar[1871]: linux-amd64/README.md Sep 12 17:42:07.151613 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:42:07.161733 systemd-networkd[1813]: eth0: Gained IPv6LL Sep 12 17:42:07.168080 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 17:42:07.172268 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 17:42:07.174830 polkitd[2050]: Started polkitd version 126 Sep 12 17:42:07.180859 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 12 17:42:07.190633 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:42:07.195195 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 17:42:07.238300 polkitd[2050]: Loading rules from directory /etc/polkit-1/rules.d Sep 12 17:42:07.246060 polkitd[2050]: Loading rules from directory /run/polkit-1/rules.d Sep 12 17:42:07.247864 polkitd[2050]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 12 17:42:07.248626 polkitd[2050]: Loading rules from directory /usr/local/share/polkit-1/rules.d Sep 12 17:42:07.249609 polkitd[2050]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 12 17:42:07.251186 polkitd[2050]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 12 17:42:07.256307 polkitd[2050]: Finished loading, compiling and executing 2 rules Sep 12 17:42:07.256639 systemd[1]: Started polkit.service - Authorization Manager. Sep 12 17:42:07.263696 dbus-daemon[1845]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 12 17:42:07.267025 polkitd[2050]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 12 17:42:07.345666 systemd-hostnamed[1936]: Hostname set to (transient) Sep 12 17:42:07.347584 systemd-resolved[1767]: System hostname changed to 'ip-172-31-19-53'. Sep 12 17:42:07.364646 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 17:42:07.380679 systemd[2165]: Queued start job for default target default.target. Sep 12 17:42:07.417000 systemd[2165]: Created slice app.slice - User Application Slice. Sep 12 17:42:07.417173 systemd[2165]: Reached target paths.target - Paths. Sep 12 17:42:07.417343 systemd[2165]: Reached target timers.target - Timers. Sep 12 17:42:07.419294 amazon-ssm-agent[2204]: Initializing new seelog logger Sep 12 17:42:07.419821 amazon-ssm-agent[2204]: New Seelog Logger Creation Complete Sep 12 17:42:07.419983 amazon-ssm-agent[2204]: 2025/09/12 17:42:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:42:07.420041 amazon-ssm-agent[2204]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:42:07.420642 amazon-ssm-agent[2204]: 2025/09/12 17:42:07 processing appconfig overrides Sep 12 17:42:07.422553 amazon-ssm-agent[2204]: 2025/09/12 17:42:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:42:07.422553 amazon-ssm-agent[2204]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Sep 12 17:42:07.422553 amazon-ssm-agent[2204]: 2025/09/12 17:42:07 processing appconfig overrides Sep 12 17:42:07.422553 amazon-ssm-agent[2204]: 2025/09/12 17:42:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:42:07.422553 amazon-ssm-agent[2204]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:42:07.422553 amazon-ssm-agent[2204]: 2025/09/12 17:42:07 processing appconfig overrides Sep 12 17:42:07.422553 amazon-ssm-agent[2204]: 2025-09-12 17:42:07.4209 INFO Proxy environment variables: Sep 12 17:42:07.422850 systemd[2165]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:42:07.427160 amazon-ssm-agent[2204]: 2025/09/12 17:42:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:42:07.427160 amazon-ssm-agent[2204]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:42:07.427322 amazon-ssm-agent[2204]: 2025/09/12 17:42:07 processing appconfig overrides Sep 12 17:42:07.446389 systemd[2165]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:42:07.446586 systemd[2165]: Reached target sockets.target - Sockets. Sep 12 17:42:07.446777 systemd[2165]: Reached target basic.target - Basic System. Sep 12 17:42:07.446904 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:42:07.447614 systemd[2165]: Reached target default.target - Main User Target. Sep 12 17:42:07.447674 systemd[2165]: Startup finished in 431ms. Sep 12 17:42:07.454807 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:42:07.522270 amazon-ssm-agent[2204]: 2025-09-12 17:42:07.4209 INFO https_proxy: Sep 12 17:42:07.612049 systemd[1]: Started sshd@1-172.31.19.53:22-139.178.68.195:40034.service - OpenSSH per-connection server daemon (139.178.68.195:40034). Sep 12 17:42:07.620778 amazon-ssm-agent[2204]: 2025-09-12 17:42:07.4209 INFO http_proxy: Sep 12 17:42:07.719104 amazon-ssm-agent[2204]: 2025-09-12 17:42:07.4209 INFO no_proxy: Sep 12 17:42:07.811105 amazon-ssm-agent[2204]: 2025/09/12 17:42:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:42:07.811105 amazon-ssm-agent[2204]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:42:07.811326 amazon-ssm-agent[2204]: 2025/09/12 17:42:07 processing appconfig overrides Sep 12 17:42:07.818695 amazon-ssm-agent[2204]: 2025-09-12 17:42:07.4211 INFO Checking if agent identity type OnPrem can be assumed Sep 12 17:42:07.825895 sshd[2244]: Accepted publickey for core from 139.178.68.195 port 40034 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:42:07.829893 sshd-session[2244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:42:07.844137 amazon-ssm-agent[2204]: 2025-09-12 17:42:07.4213 INFO Checking if agent identity type EC2 can be assumed Sep 12 17:42:07.844137 amazon-ssm-agent[2204]: 2025-09-12 17:42:07.4831 INFO Agent will take identity from EC2 Sep 12 17:42:07.844137 amazon-ssm-agent[2204]: 2025-09-12 17:42:07.4868 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Sep 12 17:42:07.844137 amazon-ssm-agent[2204]: 2025-09-12 17:42:07.4868 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Sep 12 17:42:07.844137 amazon-ssm-agent[2204]: 2025-09-12 17:42:07.4868 INFO [amazon-ssm-agent] Starting Core Agent Sep 12 17:42:07.844137 amazon-ssm-agent[2204]: 2025-09-12 17:42:07.4868 INFO [amazon-ssm-agent] Registrar detected. 
Attempting registration Sep 12 17:42:07.844137 amazon-ssm-agent[2204]: 2025-09-12 17:42:07.4868 INFO [Registrar] Starting registrar module Sep 12 17:42:07.844137 amazon-ssm-agent[2204]: 2025-09-12 17:42:07.4890 INFO [EC2Identity] Checking disk for registration info Sep 12 17:42:07.844137 amazon-ssm-agent[2204]: 2025-09-12 17:42:07.4891 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Sep 12 17:42:07.844137 amazon-ssm-agent[2204]: 2025-09-12 17:42:07.4891 INFO [EC2Identity] Generating registration keypair Sep 12 17:42:07.844137 amazon-ssm-agent[2204]: 2025-09-12 17:42:07.7647 INFO [EC2Identity] Checking write access before registering Sep 12 17:42:07.844137 amazon-ssm-agent[2204]: 2025-09-12 17:42:07.7652 INFO [EC2Identity] Registering EC2 instance with Systems Manager Sep 12 17:42:07.844137 amazon-ssm-agent[2204]: 2025-09-12 17:42:07.8108 INFO [EC2Identity] EC2 registration was successful. Sep 12 17:42:07.844137 amazon-ssm-agent[2204]: 2025-09-12 17:42:07.8109 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Sep 12 17:42:07.844137 amazon-ssm-agent[2204]: 2025-09-12 17:42:07.8110 INFO [CredentialRefresher] credentialRefresher has started Sep 12 17:42:07.844137 amazon-ssm-agent[2204]: 2025-09-12 17:42:07.8110 INFO [CredentialRefresher] Starting credentials refresher loop Sep 12 17:42:07.844137 amazon-ssm-agent[2204]: 2025-09-12 17:42:07.8432 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 12 17:42:07.844137 amazon-ssm-agent[2204]: 2025-09-12 17:42:07.8434 INFO [CredentialRefresher] Credentials ready Sep 12 17:42:07.848776 systemd-logind[1861]: New session 2 of user core. Sep 12 17:42:07.859757 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 17:42:07.915051 amazon-ssm-agent[2204]: 2025-09-12 17:42:07.8437 INFO [CredentialRefresher] Next credential rotation will be in 29.99999267125 minutes Sep 12 17:42:07.983542 sshd[2248]: Connection closed by 139.178.68.195 port 40034 Sep 12 17:42:07.984214 sshd-session[2244]: pam_unix(sshd:session): session closed for user core Sep 12 17:42:07.989298 systemd[1]: sshd@1-172.31.19.53:22-139.178.68.195:40034.service: Deactivated successfully. Sep 12 17:42:07.992350 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 17:42:07.995788 systemd-logind[1861]: Session 2 logged out. Waiting for processes to exit. Sep 12 17:42:07.997493 systemd-logind[1861]: Removed session 2. Sep 12 17:42:08.025381 systemd[1]: Started sshd@2-172.31.19.53:22-139.178.68.195:40050.service - OpenSSH per-connection server daemon (139.178.68.195:40050). Sep 12 17:42:08.216739 sshd[2254]: Accepted publickey for core from 139.178.68.195 port 40050 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:42:08.219684 sshd-session[2254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:42:08.228226 systemd-logind[1861]: New session 3 of user core. Sep 12 17:42:08.231802 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:42:08.374048 sshd[2257]: Connection closed by 139.178.68.195 port 40050 Sep 12 17:42:08.375274 sshd-session[2254]: pam_unix(sshd:session): session closed for user core Sep 12 17:42:08.381076 systemd[1]: sshd@2-172.31.19.53:22-139.178.68.195:40050.service: Deactivated successfully. Sep 12 17:42:08.384426 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 17:42:08.386277 systemd-logind[1861]: Session 3 logged out. Waiting for processes to exit. 
Sep 12 17:42:08.389347 systemd-logind[1861]: Removed session 3. Sep 12 17:42:08.877551 amazon-ssm-agent[2204]: 2025-09-12 17:42:08.8773 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 12 17:42:08.979032 amazon-ssm-agent[2204]: 2025-09-12 17:42:08.8798 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2264) started Sep 12 17:42:09.079764 amazon-ssm-agent[2204]: 2025-09-12 17:42:08.8798 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 12 17:42:09.152782 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:42:09.154401 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:42:09.156679 systemd[1]: Startup finished in 2.853s (kernel) + 7.925s (initrd) + 10.605s (userspace) = 21.385s. Sep 12 17:42:09.167238 (kubelet)[2282]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:42:09.652412 ntpd[1851]: Listen normally on 7 eth0 [fe80::4fe:36ff:fe95:4657%2]:123 Sep 12 17:42:09.652803 ntpd[1851]: 12 Sep 17:42:09 ntpd[1851]: Listen normally on 7 eth0 [fe80::4fe:36ff:fe95:4657%2]:123 Sep 12 17:42:10.104608 kubelet[2282]: E0912 17:42:10.104561 2282 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:42:10.107429 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:42:10.107593 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:42:10.108564 systemd[1]: kubelet.service: Consumed 1.121s CPU time, 265.3M memory peak. Sep 12 17:42:13.211077 systemd-resolved[1767]: Clock change detected. Flushing caches. Sep 12 17:42:18.968669 systemd[1]: Started sshd@3-172.31.19.53:22-139.178.68.195:48480.service - OpenSSH per-connection server daemon (139.178.68.195:48480). Sep 12 17:42:19.140821 sshd[2294]: Accepted publickey for core from 139.178.68.195 port 48480 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:42:19.142403 sshd-session[2294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:42:19.147903 systemd-logind[1861]: New session 4 of user core. Sep 12 17:42:19.158179 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:42:19.274427 sshd[2297]: Connection closed by 139.178.68.195 port 48480 Sep 12 17:42:19.275195 sshd-session[2294]: pam_unix(sshd:session): session closed for user core Sep 12 17:42:19.279734 systemd[1]: sshd@3-172.31.19.53:22-139.178.68.195:48480.service: Deactivated successfully. Sep 12 17:42:19.282075 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:42:19.283224 systemd-logind[1861]: Session 4 logged out. Waiting for processes to exit. Sep 12 17:42:19.284757 systemd-logind[1861]: Removed session 4. Sep 12 17:42:19.313438 systemd[1]: Started sshd@4-172.31.19.53:22-139.178.68.195:48482.service - OpenSSH per-connection server daemon (139.178.68.195:48482). 
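The kubelet failure above (`/var/lib/kubelet/config.yaml: no such file or directory`) is the normal state of a node before `kubeadm init` or `kubeadm join` writes that file; the unit's restart policy keeps retrying in the meantime (the restart counter appears further down). Purely as an illustration of that wait-for-config behavior (a hypothetical helper, not kubelet code):

```python
import time
from pathlib import Path

# The path the kubelet error above complains about.
CONFIG = Path("/var/lib/kubelet/config.yaml")

def wait_for_kubelet_config(timeout: float = 300.0, poll: float = 5.0) -> bool:
    """Poll until kubeadm (or an operator) has written the kubelet config."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if CONFIG.is_file():
            return True
        time.sleep(poll)
    return False
```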
Sep 12 17:42:19.479760 sshd[2303]: Accepted publickey for core from 139.178.68.195 port 48482 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:42:19.481447 sshd-session[2303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:42:19.487369 systemd-logind[1861]: New session 5 of user core. Sep 12 17:42:19.493379 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 17:42:19.608363 sshd[2306]: Connection closed by 139.178.68.195 port 48482 Sep 12 17:42:19.609659 sshd-session[2303]: pam_unix(sshd:session): session closed for user core Sep 12 17:42:19.614396 systemd[1]: sshd@4-172.31.19.53:22-139.178.68.195:48482.service: Deactivated successfully. Sep 12 17:42:19.616262 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:42:19.617358 systemd-logind[1861]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:42:19.619058 systemd-logind[1861]: Removed session 5. Sep 12 17:42:19.644800 systemd[1]: Started sshd@5-172.31.19.53:22-139.178.68.195:48496.service - OpenSSH per-connection server daemon (139.178.68.195:48496). Sep 12 17:42:19.827841 sshd[2312]: Accepted publickey for core from 139.178.68.195 port 48496 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:42:19.829728 sshd-session[2312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:42:19.836116 systemd-logind[1861]: New session 6 of user core. Sep 12 17:42:19.846233 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 17:42:19.987222 sshd[2315]: Connection closed by 139.178.68.195 port 48496 Sep 12 17:42:19.987912 sshd-session[2312]: pam_unix(sshd:session): session closed for user core Sep 12 17:42:19.992491 systemd[1]: sshd@5-172.31.19.53:22-139.178.68.195:48496.service: Deactivated successfully. Sep 12 17:42:20.001756 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:42:20.003630 systemd-logind[1861]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:42:20.005641 systemd-logind[1861]: Removed session 6. Sep 12 17:42:20.033944 systemd[1]: Started sshd@6-172.31.19.53:22-139.178.68.195:34860.service - OpenSSH per-connection server daemon (139.178.68.195:34860). Sep 12 17:42:20.212587 sshd[2321]: Accepted publickey for core from 139.178.68.195 port 34860 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:42:20.214453 sshd-session[2321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:42:20.219700 systemd-logind[1861]: New session 7 of user core. Sep 12 17:42:20.227506 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 17:42:20.349295 sudo[2325]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:42:20.350149 sudo[2325]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:42:20.363394 sudo[2325]: pam_unix(sudo:session): session closed for user root Sep 12 17:42:20.386614 sshd[2324]: Connection closed by 139.178.68.195 port 34860 Sep 12 17:42:20.387646 sshd-session[2321]: pam_unix(sshd:session): session closed for user core Sep 12 17:42:20.392284 systemd[1]: sshd@6-172.31.19.53:22-139.178.68.195:34860.service: Deactivated successfully. Sep 12 17:42:20.394416 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 17:42:20.396004 systemd-logind[1861]: Session 7 logged out. Waiting for processes to exit. Sep 12 17:42:20.397961 systemd-logind[1861]: Removed session 7. 
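The sudo entries above share a fixed layout: invoking user, working directory, target user, and command. A hypothetical parser for log lines in that shape:

```python
import re

# Matches entries like:
#   core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
SUDO_RE = re.compile(
    r"(?P<user>\S+) : PWD=(?P<pwd>\S+) ; USER=(?P<runas>\S+) ; COMMAND=(?P<cmd>.+)")

def parse_sudo(entry: str) -> dict:
    m = SUDO_RE.search(entry)
    return m.groupdict() if m else {}

print(parse_sudo(
    "core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1"))
```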
Sep 12 17:42:20.428432 systemd[1]: Started sshd@7-172.31.19.53:22-139.178.68.195:34870.service - OpenSSH per-connection server daemon (139.178.68.195:34870). Sep 12 17:42:20.615686 sshd[2331]: Accepted publickey for core from 139.178.68.195 port 34870 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:42:20.617787 sshd-session[2331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:42:20.623870 systemd-logind[1861]: New session 8 of user core. Sep 12 17:42:20.633322 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 17:42:20.711906 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 17:42:20.713697 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:42:20.733057 sudo[2337]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:42:20.733423 sudo[2337]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:42:20.740715 sudo[2337]: pam_unix(sudo:session): session closed for user root Sep 12 17:42:20.748503 sudo[2336]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 12 17:42:20.749358 sudo[2336]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:42:20.764416 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 17:42:20.815747 augenrules[2361]: No rules Sep 12 17:42:20.816870 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:42:20.817200 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 17:42:20.819248 sudo[2336]: pam_unix(sudo:session): session closed for user root Sep 12 17:42:20.842282 sshd[2334]: Connection closed by 139.178.68.195 port 34870 Sep 12 17:42:20.842811 sshd-session[2331]: pam_unix(sshd:session): session closed for user core Sep 12 17:42:20.848153 systemd[1]: sshd@7-172.31.19.53:22-139.178.68.195:34870.service: Deactivated successfully. Sep 12 17:42:20.848370 systemd-logind[1861]: Session 8 logged out. Waiting for processes to exit. Sep 12 17:42:20.850604 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 17:42:20.853107 systemd-logind[1861]: Removed session 8. Sep 12 17:42:20.875204 systemd[1]: Started sshd@8-172.31.19.53:22-139.178.68.195:34882.service - OpenSSH per-connection server daemon (139.178.68.195:34882). Sep 12 17:42:21.000562 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 17:42:21.012497 (kubelet)[2378]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:42:21.060854 sshd[2370]: Accepted publickey for core from 139.178.68.195 port 34882 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:42:21.061690 kubelet[2378]: E0912 17:42:21.060306 2378 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:42:21.063002 sshd-session[2370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:42:21.068076 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:42:21.068263 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:42:21.068745 systemd[1]: kubelet.service: Consumed 195ms CPU time, 108.6M memory peak. Sep 12 17:42:21.071287 systemd-logind[1861]: New session 9 of user core. Sep 12 17:42:21.076319 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 17:42:21.173556 sudo[2386]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:42:21.173955 sudo[2386]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:42:21.651698 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 17:42:21.666398 (dockerd)[2405]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:42:21.993338 dockerd[2405]: time="2025-09-12T17:42:21.993274286Z" level=info msg="Starting up" Sep 12 17:42:21.994125 dockerd[2405]: time="2025-09-12T17:42:21.994092364Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 12 17:42:22.006312 dockerd[2405]: time="2025-09-12T17:42:22.006234235Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 12 17:42:22.032057 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1001088466-merged.mount: Deactivated successfully. Sep 12 17:42:22.076657 dockerd[2405]: time="2025-09-12T17:42:22.076434777Z" level=info msg="Loading containers: start." Sep 12 17:42:22.090962 kernel: Initializing XFRM netlink socket Sep 12 17:42:22.329520 (udev-worker)[2426]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:42:22.379572 systemd-networkd[1813]: docker0: Link UP Sep 12 17:42:22.389529 dockerd[2405]: time="2025-09-12T17:42:22.389460475Z" level=info msg="Loading containers: done." 
Sep 12 17:42:22.417483 dockerd[2405]: time="2025-09-12T17:42:22.417429796Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 12 17:42:22.417676 dockerd[2405]: time="2025-09-12T17:42:22.417511254Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 12 17:42:22.417676 dockerd[2405]: time="2025-09-12T17:42:22.417598359Z" level=info msg="Initializing buildkit"
Sep 12 17:42:22.459420 dockerd[2405]: time="2025-09-12T17:42:22.459367703Z" level=info msg="Completed buildkit initialization"
Sep 12 17:42:22.463943 dockerd[2405]: time="2025-09-12T17:42:22.463886362Z" level=info msg="Daemon has completed initialization"
Sep 12 17:42:22.464106 dockerd[2405]: time="2025-09-12T17:42:22.463976624Z" level=info msg="API listen on /run/docker.sock"
Sep 12 17:42:22.464339 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 12 17:42:23.027753 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3142532947-merged.mount: Deactivated successfully.
Sep 12 17:42:23.555483 containerd[1909]: time="2025-09-12T17:42:23.555437919Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Sep 12 17:42:24.149379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1068541410.mount: Deactivated successfully.
Sep 12 17:42:25.822561 containerd[1909]: time="2025-09-12T17:42:25.822493134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:42:25.823650 containerd[1909]: time="2025-09-12T17:42:25.823612609Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916"
Sep 12 17:42:25.825253 containerd[1909]: time="2025-09-12T17:42:25.824568752Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:42:25.827535 containerd[1909]: time="2025-09-12T17:42:25.827485888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:42:25.828647 containerd[1909]: time="2025-09-12T17:42:25.828609648Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.273123049s"
Sep 12 17:42:25.828746 containerd[1909]: time="2025-09-12T17:42:25.828657793Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\""
Sep 12 17:42:25.829586 containerd[1909]: time="2025-09-12T17:42:25.829558665Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Sep 12 17:42:27.534009 containerd[1909]: time="2025-09-12T17:42:27.533940436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:42:27.536347 containerd[1909]: time="2025-09-12T17:42:27.536122598Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027"
Sep 12 17:42:27.538820 containerd[1909]: time="2025-09-12T17:42:27.538784427Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:42:27.546939 containerd[1909]: time="2025-09-12T17:42:27.546411803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:42:27.547176 containerd[1909]: time="2025-09-12T17:42:27.547152785Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.717558946s"
Sep 12 17:42:27.547238 containerd[1909]: time="2025-09-12T17:42:27.547227102Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\""
Sep 12 17:42:27.547619 containerd[1909]: time="2025-09-12T17:42:27.547587158Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Sep 12 17:42:28.970495 containerd[1909]: time="2025-09-12T17:42:28.970376677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:42:28.971718 containerd[1909]: time="2025-09-12T17:42:28.971453788Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289"
Sep 12 17:42:28.973365 containerd[1909]: time="2025-09-12T17:42:28.973321922Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:42:28.975912 containerd[1909]: time="2025-09-12T17:42:28.975873660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:42:28.977158 containerd[1909]: time="2025-09-12T17:42:28.977123125Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.429503715s"
Sep 12 17:42:28.977344 containerd[1909]: time="2025-09-12T17:42:28.977259786Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\""
Sep 12 17:42:28.978413 containerd[1909]: time="2025-09-12T17:42:28.978363282Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Sep 12 17:42:30.114914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3602265446.mount: Deactivated successfully.
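Each pull above ends with a "returns image reference" line carrying the image ID, while the repo digest appears in the preceding ImageCreate events. If the pulled images need to be inspected by hand, containerd's own CLI can list them from the CRI namespace (a sketch; ctr ships with containerd, and k8s.io is the namespace the CRI plugin uses):

  $ ctr -n k8s.io images ls                        # list images with refs, digests, and sizes
  $ ctr -n k8s.io images ls | grep kube-apiserver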
Sep 12 17:42:30.720613 containerd[1909]: time="2025-09-12T17:42:30.720548391Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:42:30.722602 containerd[1909]: time="2025-09-12T17:42:30.722542369Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206"
Sep 12 17:42:30.725110 containerd[1909]: time="2025-09-12T17:42:30.725038377Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:42:30.728461 containerd[1909]: time="2025-09-12T17:42:30.728393337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:42:30.729477 containerd[1909]: time="2025-09-12T17:42:30.728881412Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.75029201s"
Sep 12 17:42:30.729477 containerd[1909]: time="2025-09-12T17:42:30.729035834Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\""
Sep 12 17:42:30.729680 containerd[1909]: time="2025-09-12T17:42:30.729631991Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 12 17:42:31.212405 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 12 17:42:31.215180 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:42:31.337347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3353360980.mount: Deactivated successfully.
Sep 12 17:42:31.529867 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:42:31.542942 (kubelet)[2710]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:42:31.623952 kubelet[2710]: E0912 17:42:31.623149 2710 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:42:31.627538 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:42:31.627741 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:42:31.628447 systemd[1]: kubelet.service: Consumed 209ms CPU time, 108.1M memory peak.
Sep 12 17:42:32.533498 containerd[1909]: time="2025-09-12T17:42:32.533434276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:42:32.539610 containerd[1909]: time="2025-09-12T17:42:32.539320962Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Sep 12 17:42:32.544779 containerd[1909]: time="2025-09-12T17:42:32.544692233Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:42:32.549833 containerd[1909]: time="2025-09-12T17:42:32.549420803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:42:32.550403 containerd[1909]: time="2025-09-12T17:42:32.550374511Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.820711006s"
Sep 12 17:42:32.550471 containerd[1909]: time="2025-09-12T17:42:32.550408484Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 12 17:42:32.551403 containerd[1909]: time="2025-09-12T17:42:32.551382806Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 12 17:42:33.045295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount437850248.mount: Deactivated successfully.
Sep 12 17:42:33.058030 containerd[1909]: time="2025-09-12T17:42:33.057957695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:42:33.059953 containerd[1909]: time="2025-09-12T17:42:33.059871102Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 12 17:42:33.062401 containerd[1909]: time="2025-09-12T17:42:33.062335500Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:42:33.065746 containerd[1909]: time="2025-09-12T17:42:33.065680351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:42:33.067035 containerd[1909]: time="2025-09-12T17:42:33.066512200Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 515.02503ms"
Sep 12 17:42:33.067035 containerd[1909]: time="2025-09-12T17:42:33.066547904Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 12 17:42:33.067271 containerd[1909]: time="2025-09-12T17:42:33.067250716Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 12 17:42:33.618139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1782741074.mount: Deactivated successfully.
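The images pulled in this stretch (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause, and the etcd pull that starts here) match the set kubeadm preloads for a control-plane node. The expected list for a given version can be printed or prefetched ahead of time (a sketch, assuming kubeadm is installed on the host):

  $ kubeadm config images list --kubernetes-version v1.32.9
  $ kubeadm config images pull --kubernetes-version v1.32.9   # prefetch instead of pulling lazily at startup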
Sep 12 17:42:36.101835 containerd[1909]: time="2025-09-12T17:42:36.101758654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:42:36.103967 containerd[1909]: time="2025-09-12T17:42:36.103898381Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056"
Sep 12 17:42:36.106033 containerd[1909]: time="2025-09-12T17:42:36.105972183Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:42:36.109977 containerd[1909]: time="2025-09-12T17:42:36.109899052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:42:36.111581 containerd[1909]: time="2025-09-12T17:42:36.111090583Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.043809511s"
Sep 12 17:42:36.111581 containerd[1909]: time="2025-09-12T17:42:36.111124819Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Sep 12 17:42:37.927396 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 12 17:42:38.973618 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:42:38.973884 systemd[1]: kubelet.service: Consumed 209ms CPU time, 108.1M memory peak.
Sep 12 17:42:38.976596 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:42:39.013833 systemd[1]: Reload requested from client PID 2845 ('systemctl') (unit session-9.scope)...
Sep 12 17:42:39.013853 systemd[1]: Reloading...
Sep 12 17:42:39.158959 zram_generator::config[2892]: No configuration found.
Sep 12 17:42:39.450477 systemd[1]: Reloading finished in 436 ms.
Sep 12 17:42:39.524301 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 12 17:42:39.524419 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 12 17:42:39.524758 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:42:39.524812 systemd[1]: kubelet.service: Consumed 144ms CPU time, 98.3M memory peak.
Sep 12 17:42:39.527651 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:42:39.787684 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:42:39.799330 (kubelet)[2953]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 17:42:39.866979 kubelet[2953]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:42:39.866979 kubelet[2953]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 12 17:42:39.866979 kubelet[2953]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:42:39.872462 kubelet[2953]: I0912 17:42:39.872364 2953 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 17:42:40.602879 kubelet[2953]: I0912 17:42:40.602814 2953 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 12 17:42:40.602879 kubelet[2953]: I0912 17:42:40.602862 2953 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 17:42:40.603398 kubelet[2953]: I0912 17:42:40.603358 2953 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 12 17:42:40.651938 kubelet[2953]: E0912 17:42:40.651632 2953 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.19.53:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.19.53:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:42:40.654880 kubelet[2953]: I0912 17:42:40.654834 2953 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 17:42:40.672440 kubelet[2953]: I0912 17:42:40.672402 2953 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 12 17:42:40.678394 kubelet[2953]: I0912 17:42:40.678341 2953 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 17:42:40.681151 kubelet[2953]: I0912 17:42:40.681066 2953 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 17:42:40.681333 kubelet[2953]: I0912 17:42:40.681134 2953 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-53","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 17:42:40.683394 kubelet[2953]: I0912 17:42:40.683366 2953 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 17:42:40.683394 kubelet[2953]: I0912 17:42:40.683396 2953 container_manager_linux.go:304] "Creating device plugin manager"
Sep 12 17:42:40.685139 kubelet[2953]: I0912 17:42:40.685103 2953 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:42:40.690570 kubelet[2953]: I0912 17:42:40.690511 2953 kubelet.go:446] "Attempting to sync node with API server"
Sep 12 17:42:40.690570 kubelet[2953]: I0912 17:42:40.690554 2953 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 17:42:40.690570 kubelet[2953]: I0912 17:42:40.690580 2953 kubelet.go:352] "Adding apiserver pod source"
Sep 12 17:42:40.690866 kubelet[2953]: I0912 17:42:40.690591 2953 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 17:42:40.701448 kubelet[2953]: W0912 17:42:40.701239 2953 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.19.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.19.53:6443: connect: connection refused
Sep 12 17:42:40.701448 kubelet[2953]: E0912 17:42:40.701304 2953 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.19.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.19.53:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:42:40.702483 kubelet[2953]: W0912 17:42:40.702205 2953 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.19.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-53&limit=500&resourceVersion=0": dial tcp 172.31.19.53:6443: connect: connection refused
Sep 12 17:42:40.702483 kubelet[2953]: E0912 17:42:40.702263 2953 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.19.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-53&limit=500&resourceVersion=0\": dial tcp 172.31.19.53:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:42:40.705395 kubelet[2953]: I0912 17:42:40.705371 2953 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 12 17:42:40.709892 kubelet[2953]: I0912 17:42:40.709673 2953 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 12 17:42:40.709892 kubelet[2953]: W0912 17:42:40.709754 2953 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 12 17:42:40.710839 kubelet[2953]: I0912 17:42:40.710810 2953 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 12 17:42:40.710839 kubelet[2953]: I0912 17:42:40.710843 2953 server.go:1287] "Started kubelet"
Sep 12 17:42:40.713117 kubelet[2953]: I0912 17:42:40.712121 2953 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 17:42:40.713361 kubelet[2953]: I0912 17:42:40.713321 2953 server.go:479] "Adding debug handlers to kubelet server"
Sep 12 17:42:40.716827 kubelet[2953]: I0912 17:42:40.716789 2953 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 17:42:40.720833 kubelet[2953]: I0912 17:42:40.720736 2953 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 17:42:40.721450 kubelet[2953]: I0912 17:42:40.721421 2953 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 17:42:40.731750 kubelet[2953]: I0912 17:42:40.727222 2953 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 12 17:42:40.731750 kubelet[2953]: I0912 17:42:40.730066 2953 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 12 17:42:40.731750 kubelet[2953]: E0912 17:42:40.730476 2953 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-19-53\" not found"
Sep 12 17:42:40.734142 kubelet[2953]: E0912 17:42:40.723886 2953 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.19.53:6443/api/v1/namespaces/default/events\": dial tcp 172.31.19.53:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-19-53.186499e45f3dff74 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-53,UID:ip-172-31-19-53,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-53,},FirstTimestamp:2025-09-12 17:42:40.710827892 +0000 UTC m=+0.907059378,LastTimestamp:2025-09-12 17:42:40.710827892 +0000 UTC m=+0.907059378,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-53,}"
Sep 12 17:42:40.736829 kubelet[2953]: E0912 17:42:40.734748 2953 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-53?timeout=10s\": dial tcp 172.31.19.53:6443: connect: connection refused" interval="200ms"
Sep 12 17:42:40.736829 kubelet[2953]: I0912 17:42:40.735052 2953 factory.go:221] Registration of the systemd container factory successfully
Sep 12 17:42:40.736829 kubelet[2953]: I0912 17:42:40.735138 2953 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 12 17:42:40.737295 kubelet[2953]: I0912 17:42:40.737230 2953 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 12 17:42:40.737295 kubelet[2953]: I0912 17:42:40.737293 2953 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 17:42:40.742306 kubelet[2953]: W0912 17:42:40.742248 2953 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.19.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.53:6443: connect: connection refused
Sep 12 17:42:40.742453 kubelet[2953]: E0912 17:42:40.742320 2953 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.19.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.19.53:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:42:40.745172 kubelet[2953]: I0912 17:42:40.745135 2953 factory.go:221] Registration of the containerd container factory successfully
Sep 12 17:42:40.758944 kubelet[2953]: E0912 17:42:40.758124 2953 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 12 17:42:40.762447 kubelet[2953]: I0912 17:42:40.762404 2953 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 12 17:42:40.765198 kubelet[2953]: I0912 17:42:40.765168 2953 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 12 17:42:40.765198 kubelet[2953]: I0912 17:42:40.765200 2953 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 12 17:42:40.765371 kubelet[2953]: I0912 17:42:40.765222 2953 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
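Every "connection refused" against 172.31.19.53:6443 in this stretch is the same bootstrap ordering issue: the kubelet is running, but the kube-apiserver it is trying to reach is a static pod that the kubelet itself has not started yet, so these errors clear on their own once the control-plane containers come up. A quick manual probe of the endpoint would look like this (a sketch; -k skips TLS verification since the serving certificate may not be trusted from the shell yet):

  $ curl -k https://172.31.19.53:6443/healthz   # expect "connection refused" until the apiserver static pod starts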
Sep 12 17:42:40.765371 kubelet[2953]: I0912 17:42:40.765231 2953 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 12 17:42:40.765371 kubelet[2953]: E0912 17:42:40.765284 2953 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 12 17:42:40.767938 kubelet[2953]: W0912 17:42:40.767834 2953 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.19.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.53:6443: connect: connection refused
Sep 12 17:42:40.767938 kubelet[2953]: E0912 17:42:40.767898 2953 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.19.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.19.53:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:42:40.777998 kubelet[2953]: I0912 17:42:40.777972 2953 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 12 17:42:40.778359 kubelet[2953]: I0912 17:42:40.778144 2953 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 12 17:42:40.778359 kubelet[2953]: I0912 17:42:40.778171 2953 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:42:40.784259 kubelet[2953]: I0912 17:42:40.784214 2953 policy_none.go:49] "None policy: Start"
Sep 12 17:42:40.784259 kubelet[2953]: I0912 17:42:40.784244 2953 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 12 17:42:40.784259 kubelet[2953]: I0912 17:42:40.784260 2953 state_mem.go:35] "Initializing new in-memory state store"
Sep 12 17:42:40.794386 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 12 17:42:40.807216 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 12 17:42:40.811988 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 12 17:42:40.824014 kubelet[2953]: I0912 17:42:40.823985 2953 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 12 17:42:40.824187 kubelet[2953]: I0912 17:42:40.824173 2953 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 17:42:40.824259 kubelet[2953]: I0912 17:42:40.824188 2953 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12 17:42:40.824593 kubelet[2953]: I0912 17:42:40.824564 2953 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 17:42:40.826730 kubelet[2953]: E0912 17:42:40.826703 2953 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 12 17:42:40.826812 kubelet[2953]: E0912 17:42:40.826751 2953 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-19-53\" not found"
Sep 12 17:42:40.880090 systemd[1]: Created slice kubepods-burstable-podca74f63261e2c78d57487f1d7ba4e9f3.slice - libcontainer container kubepods-burstable-podca74f63261e2c78d57487f1d7ba4e9f3.slice.
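Because the cgroup driver is systemd, the kubelet's QoS classes become systemd slices (kubepods.slice, kubepods-burstable.slice, kubepods-besteffort.slice) and each pod gets a per-UID child slice like the kubepods-burstable-podca74....slice created above. The resulting hierarchy can be browsed with systemd's own tools (a sketch):

  $ systemd-cgls -u kubepods.slice              # show the cgroup tree under the kubepods slice
  $ systemctl status kubepods-burstable.slice   # per-slice CPU and memory accounting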
Sep 12 17:42:40.890100 kubelet[2953]: E0912 17:42:40.889738 2953 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-53\" not found" node="ip-172-31-19-53"
Sep 12 17:42:40.894703 systemd[1]: Created slice kubepods-burstable-pod2b11a77dc40bb1639d17bd1ecad00ee8.slice - libcontainer container kubepods-burstable-pod2b11a77dc40bb1639d17bd1ecad00ee8.slice.
Sep 12 17:42:40.898867 kubelet[2953]: E0912 17:42:40.897776 2953 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-53\" not found" node="ip-172-31-19-53"
Sep 12 17:42:40.900526 systemd[1]: Created slice kubepods-burstable-pod70c0ea614b139d31417152227c080549.slice - libcontainer container kubepods-burstable-pod70c0ea614b139d31417152227c080549.slice.
Sep 12 17:42:40.903344 kubelet[2953]: E0912 17:42:40.903314 2953 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-53\" not found" node="ip-172-31-19-53"
Sep 12 17:42:40.926625 kubelet[2953]: I0912 17:42:40.926595 2953 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-53"
Sep 12 17:42:40.927037 kubelet[2953]: E0912 17:42:40.927003 2953 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.53:6443/api/v1/nodes\": dial tcp 172.31.19.53:6443: connect: connection refused" node="ip-172-31-19-53"
Sep 12 17:42:40.935693 kubelet[2953]: E0912 17:42:40.935653 2953 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-53?timeout=10s\": dial tcp 172.31.19.53:6443: connect: connection refused" interval="400ms"
Sep 12 17:42:41.038427 kubelet[2953]: I0912 17:42:41.038365 2953 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ca74f63261e2c78d57487f1d7ba4e9f3-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-53\" (UID: \"ca74f63261e2c78d57487f1d7ba4e9f3\") " pod="kube-system/kube-controller-manager-ip-172-31-19-53"
Sep 12 17:42:41.038427 kubelet[2953]: I0912 17:42:41.038420 2953 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/70c0ea614b139d31417152227c080549-ca-certs\") pod \"kube-apiserver-ip-172-31-19-53\" (UID: \"70c0ea614b139d31417152227c080549\") " pod="kube-system/kube-apiserver-ip-172-31-19-53"
Sep 12 17:42:41.038427 kubelet[2953]: I0912 17:42:41.038438 2953 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/70c0ea614b139d31417152227c080549-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-53\" (UID: \"70c0ea614b139d31417152227c080549\") " pod="kube-system/kube-apiserver-ip-172-31-19-53"
Sep 12 17:42:41.038707 kubelet[2953]: I0912 17:42:41.038455 2953 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ca74f63261e2c78d57487f1d7ba4e9f3-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-53\" (UID: \"ca74f63261e2c78d57487f1d7ba4e9f3\") " pod="kube-system/kube-controller-manager-ip-172-31-19-53"
Sep 12 17:42:41.038707 kubelet[2953]: I0912 17:42:41.038471 2953 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ca74f63261e2c78d57487f1d7ba4e9f3-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-53\" (UID: \"ca74f63261e2c78d57487f1d7ba4e9f3\") " pod="kube-system/kube-controller-manager-ip-172-31-19-53"
Sep 12 17:42:41.038707 kubelet[2953]: I0912 17:42:41.038487 2953 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ca74f63261e2c78d57487f1d7ba4e9f3-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-53\" (UID: \"ca74f63261e2c78d57487f1d7ba4e9f3\") " pod="kube-system/kube-controller-manager-ip-172-31-19-53"
Sep 12 17:42:41.038707 kubelet[2953]: I0912 17:42:41.038503 2953 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ca74f63261e2c78d57487f1d7ba4e9f3-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-53\" (UID: \"ca74f63261e2c78d57487f1d7ba4e9f3\") " pod="kube-system/kube-controller-manager-ip-172-31-19-53"
Sep 12 17:42:41.038707 kubelet[2953]: I0912 17:42:41.038520 2953 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2b11a77dc40bb1639d17bd1ecad00ee8-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-53\" (UID: \"2b11a77dc40bb1639d17bd1ecad00ee8\") " pod="kube-system/kube-scheduler-ip-172-31-19-53"
Sep 12 17:42:41.038834 kubelet[2953]: I0912 17:42:41.038535 2953 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/70c0ea614b139d31417152227c080549-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-53\" (UID: \"70c0ea614b139d31417152227c080549\") " pod="kube-system/kube-apiserver-ip-172-31-19-53"
Sep 12 17:42:41.129533 kubelet[2953]: I0912 17:42:41.129502 2953 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-53"
Sep 12 17:42:41.129939 kubelet[2953]: E0912 17:42:41.129899 2953 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.53:6443/api/v1/nodes\": dial tcp 172.31.19.53:6443: connect: connection refused" node="ip-172-31-19-53"
Sep 12 17:42:41.192892 containerd[1909]: time="2025-09-12T17:42:41.192768695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-53,Uid:ca74f63261e2c78d57487f1d7ba4e9f3,Namespace:kube-system,Attempt:0,}"
Sep 12 17:42:41.206960 containerd[1909]: time="2025-09-12T17:42:41.206736650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-53,Uid:70c0ea614b139d31417152227c080549,Namespace:kube-system,Attempt:0,}"
Sep 12 17:42:41.207244 containerd[1909]: time="2025-09-12T17:42:41.206736658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-53,Uid:2b11a77dc40bb1639d17bd1ecad00ee8,Namespace:kube-system,Attempt:0,}"
Sep 12 17:42:41.336781 kubelet[2953]: E0912 17:42:41.336726 2953 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-53?timeout=10s\": dial tcp 172.31.19.53:6443: connect: connection refused" interval="800ms"
Sep 12 17:42:41.374137 containerd[1909]: time="2025-09-12T17:42:41.374084109Z" level=info msg="connecting to shim fa90764b63231dabffd9d9a92a290d4659f00981276bbba61a329056afa0127b" address="unix:///run/containerd/s/f3ae275d887f0a6b2b1c2e96b39296e84ac77cfd4718418c0bf0bcbf7ba9ee34" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:42:41.377316 containerd[1909]: time="2025-09-12T17:42:41.377276690Z" level=info msg="connecting to shim 9ed5c00e49faf1f19ce87db27f45343911141637c81f7cd0e573979166496011" address="unix:///run/containerd/s/033f17bdf7ad701fb6b7f3765818969b2e1626b1b3dbe3390263c8ac8ebd99f1" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:42:41.382884 containerd[1909]: time="2025-09-12T17:42:41.382686826Z" level=info msg="connecting to shim 8967dbc98d1e0b60dce029a5cd9a789d8f0f078f2d1e06a8590d02f49254d6c7" address="unix:///run/containerd/s/cdf07647fcd7e196f8df1cf5de4f88014ba14cd87177932ea1cacdf8554b9655" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:42:41.508805 systemd[1]: Started cri-containerd-8967dbc98d1e0b60dce029a5cd9a789d8f0f078f2d1e06a8590d02f49254d6c7.scope - libcontainer container 8967dbc98d1e0b60dce029a5cd9a789d8f0f078f2d1e06a8590d02f49254d6c7.
Sep 12 17:42:41.510807 systemd[1]: Started cri-containerd-9ed5c00e49faf1f19ce87db27f45343911141637c81f7cd0e573979166496011.scope - libcontainer container 9ed5c00e49faf1f19ce87db27f45343911141637c81f7cd0e573979166496011.
Sep 12 17:42:41.512546 systemd[1]: Started cri-containerd-fa90764b63231dabffd9d9a92a290d4659f00981276bbba61a329056afa0127b.scope - libcontainer container fa90764b63231dabffd9d9a92a290d4659f00981276bbba61a329056afa0127b.
Sep 12 17:42:41.538371 kubelet[2953]: I0912 17:42:41.538336 2953 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-53"
Sep 12 17:42:41.541348 kubelet[2953]: E0912 17:42:41.541284 2953 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.53:6443/api/v1/nodes\": dial tcp 172.31.19.53:6443: connect: connection refused" node="ip-172-31-19-53"
Sep 12 17:42:41.624070 containerd[1909]: time="2025-09-12T17:42:41.623971786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-53,Uid:ca74f63261e2c78d57487f1d7ba4e9f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"8967dbc98d1e0b60dce029a5cd9a789d8f0f078f2d1e06a8590d02f49254d6c7\""
Sep 12 17:42:41.636399 containerd[1909]: time="2025-09-12T17:42:41.636309149Z" level=info msg="CreateContainer within sandbox \"8967dbc98d1e0b60dce029a5cd9a789d8f0f078f2d1e06a8590d02f49254d6c7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 12 17:42:41.643209 containerd[1909]: time="2025-09-12T17:42:41.643166686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-53,Uid:70c0ea614b139d31417152227c080549,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ed5c00e49faf1f19ce87db27f45343911141637c81f7cd0e573979166496011\""
Sep 12 17:42:41.648098 containerd[1909]: time="2025-09-12T17:42:41.648031371Z" level=info msg="CreateContainer within sandbox \"9ed5c00e49faf1f19ce87db27f45343911141637c81f7cd0e573979166496011\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 12 17:42:41.674630 containerd[1909]: time="2025-09-12T17:42:41.674066042Z" level=info msg="Container 7d3a50ad29328faef13a5e6a3ecf1dad8c7d4f9ae95cdce959068f5929f72cff: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:42:41.675632 containerd[1909]: time="2025-09-12T17:42:41.675596794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-53,Uid:2b11a77dc40bb1639d17bd1ecad00ee8,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa90764b63231dabffd9d9a92a290d4659f00981276bbba61a329056afa0127b\""
Sep 12 17:42:41.686759 containerd[1909]: time="2025-09-12T17:42:41.686716169Z" level=info msg="Container b056e370c6bbae50883ac79a32241acb98f3552619140a20e83d0e9cd2537c1a: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:42:41.697074 containerd[1909]: time="2025-09-12T17:42:41.697029552Z" level=info msg="CreateContainer within sandbox \"fa90764b63231dabffd9d9a92a290d4659f00981276bbba61a329056afa0127b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 12 17:42:41.722738 containerd[1909]: time="2025-09-12T17:42:41.722633722Z" level=info msg="CreateContainer within sandbox \"9ed5c00e49faf1f19ce87db27f45343911141637c81f7cd0e573979166496011\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b056e370c6bbae50883ac79a32241acb98f3552619140a20e83d0e9cd2537c1a\""
Sep 12 17:42:41.727315 containerd[1909]: time="2025-09-12T17:42:41.724164728Z" level=info msg="StartContainer for \"b056e370c6bbae50883ac79a32241acb98f3552619140a20e83d0e9cd2537c1a\""
Sep 12 17:42:41.727315 containerd[1909]: time="2025-09-12T17:42:41.726515004Z" level=info msg="CreateContainer within sandbox \"8967dbc98d1e0b60dce029a5cd9a789d8f0f078f2d1e06a8590d02f49254d6c7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7d3a50ad29328faef13a5e6a3ecf1dad8c7d4f9ae95cdce959068f5929f72cff\""
Sep 12 17:42:41.728590 containerd[1909]: time="2025-09-12T17:42:41.728550014Z" level=info msg="connecting to shim b056e370c6bbae50883ac79a32241acb98f3552619140a20e83d0e9cd2537c1a" address="unix:///run/containerd/s/033f17bdf7ad701fb6b7f3765818969b2e1626b1b3dbe3390263c8ac8ebd99f1" protocol=ttrpc version=3
Sep 12 17:42:41.735162 containerd[1909]: time="2025-09-12T17:42:41.735129816Z" level=info msg="StartContainer for \"7d3a50ad29328faef13a5e6a3ecf1dad8c7d4f9ae95cdce959068f5929f72cff\""
Sep 12 17:42:41.739543 containerd[1909]: time="2025-09-12T17:42:41.739429598Z" level=info msg="connecting to shim 7d3a50ad29328faef13a5e6a3ecf1dad8c7d4f9ae95cdce959068f5929f72cff" address="unix:///run/containerd/s/cdf07647fcd7e196f8df1cf5de4f88014ba14cd87177932ea1cacdf8554b9655" protocol=ttrpc version=3
Sep 12 17:42:41.749134 containerd[1909]: time="2025-09-12T17:42:41.749097038Z" level=info msg="Container 1e06946e84e24f8be77285da895a92451b1e51f1447bc4276aee43e0e6126c05: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:42:41.765222 systemd[1]: Started cri-containerd-b056e370c6bbae50883ac79a32241acb98f3552619140a20e83d0e9cd2537c1a.scope - libcontainer container b056e370c6bbae50883ac79a32241acb98f3552619140a20e83d0e9cd2537c1a.
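At this point the three control-plane pod sandboxes exist and their first containers are being created and started through containerd shims. The CRI view of the same state can be queried directly (a sketch; crictl is the standard CRI debugging CLI and may need --runtime-endpoint pointed at containerd's socket):

  $ crictl pods    # sandboxes, e.g. kube-apiserver-ip-172-31-19-53
  $ crictl ps -a   # containers in those sandboxes and their states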
Sep 12 17:42:41.770599 containerd[1909]: time="2025-09-12T17:42:41.770544460Z" level=info msg="CreateContainer within sandbox \"fa90764b63231dabffd9d9a92a290d4659f00981276bbba61a329056afa0127b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1e06946e84e24f8be77285da895a92451b1e51f1447bc4276aee43e0e6126c05\""
Sep 12 17:42:41.771465 containerd[1909]: time="2025-09-12T17:42:41.771361265Z" level=info msg="StartContainer for \"1e06946e84e24f8be77285da895a92451b1e51f1447bc4276aee43e0e6126c05\""
Sep 12 17:42:41.778605 containerd[1909]: time="2025-09-12T17:42:41.778557694Z" level=info msg="connecting to shim 1e06946e84e24f8be77285da895a92451b1e51f1447bc4276aee43e0e6126c05" address="unix:///run/containerd/s/f3ae275d887f0a6b2b1c2e96b39296e84ac77cfd4718418c0bf0bcbf7ba9ee34" protocol=ttrpc version=3
Sep 12 17:42:41.794337 systemd[1]: Started cri-containerd-7d3a50ad29328faef13a5e6a3ecf1dad8c7d4f9ae95cdce959068f5929f72cff.scope - libcontainer container 7d3a50ad29328faef13a5e6a3ecf1dad8c7d4f9ae95cdce959068f5929f72cff.
Sep 12 17:42:41.833386 systemd[1]: Started cri-containerd-1e06946e84e24f8be77285da895a92451b1e51f1447bc4276aee43e0e6126c05.scope - libcontainer container 1e06946e84e24f8be77285da895a92451b1e51f1447bc4276aee43e0e6126c05.
Sep 12 17:42:41.837511 kubelet[2953]: W0912 17:42:41.837412 2953 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.19.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.53:6443: connect: connection refused
Sep 12 17:42:41.837511 kubelet[2953]: E0912 17:42:41.837492 2953 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.19.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.19.53:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:42:41.870552 kubelet[2953]: W0912 17:42:41.870425 2953 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.19.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.19.53:6443: connect: connection refused
Sep 12 17:42:41.870552 kubelet[2953]: E0912 17:42:41.870509 2953 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.19.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.19.53:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:42:41.893939 containerd[1909]: time="2025-09-12T17:42:41.893885459Z" level=info msg="StartContainer for \"b056e370c6bbae50883ac79a32241acb98f3552619140a20e83d0e9cd2537c1a\" returns successfully"
Sep 12 17:42:41.937337 containerd[1909]: time="2025-09-12T17:42:41.937241904Z" level=info msg="StartContainer for \"7d3a50ad29328faef13a5e6a3ecf1dad8c7d4f9ae95cdce959068f5929f72cff\" returns successfully"
Sep 12 17:42:41.940083 containerd[1909]: time="2025-09-12T17:42:41.940042429Z" level=info msg="StartContainer for \"1e06946e84e24f8be77285da895a92451b1e51f1447bc4276aee43e0e6126c05\" returns successfully"
Sep 12 17:42:42.040228 kubelet[2953]: W0912 17:42:42.039847 2953 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.19.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.53:6443: connect: connection refused
Sep 12 17:42:42.040228 kubelet[2953]: E0912 17:42:42.039971 2953 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.19.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.19.53:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:42:42.071938 kubelet[2953]: W0912 17:42:42.070887 2953 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.19.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-53&limit=500&resourceVersion=0": dial tcp 172.31.19.53:6443: connect: connection refused
Sep 12 17:42:42.071938 kubelet[2953]: E0912 17:42:42.071012 2953 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.19.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-53&limit=500&resourceVersion=0\": dial tcp 172.31.19.53:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:42:42.138721 kubelet[2953]: E0912 17:42:42.137969 2953 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-53?timeout=10s\": dial tcp 172.31.19.53:6443: connect: connection refused" interval="1.6s"
Sep 12 17:42:42.344883 kubelet[2953]: I0912 17:42:42.344783 2953 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-53"
Sep 12 17:42:42.345301 kubelet[2953]: E0912 17:42:42.345172 2953 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.53:6443/api/v1/nodes\": dial tcp 172.31.19.53:6443: connect: connection refused" node="ip-172-31-19-53"
Sep 12 17:42:42.794952 kubelet[2953]: E0912 17:42:42.794898 2953 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-53\" not found" node="ip-172-31-19-53"
Sep 12 17:42:42.805092 kubelet[2953]: E0912 17:42:42.805060 2953 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-53\" not found" node="ip-172-31-19-53"
Sep 12 17:42:42.807070 kubelet[2953]: E0912 17:42:42.807039 2953 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-53\" not found" node="ip-172-31-19-53"
Sep 12 17:42:43.813617 kubelet[2953]: E0912 17:42:43.813578 2953 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-53\" not found" node="ip-172-31-19-53"
Sep 12 17:42:43.814083 kubelet[2953]: E0912 17:42:43.814012 2953 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-53\" not found" node="ip-172-31-19-53"
Sep 12 17:42:43.814309 kubelet[2953]: E0912 17:42:43.814287 2953 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-53\" not found" node="ip-172-31-19-53"
Sep 12 17:42:43.947727 kubelet[2953]: I0912 17:42:43.947693 2953 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-53"
Sep 12 17:42:44.446746 kubelet[2953]: E0912 17:42:44.446708 2953 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-19-53\" not found" node="ip-172-31-19-53"
Sep 12 17:42:44.509691 kubelet[2953]: I0912 17:42:44.509656 2953 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-19-53"
Sep 12 17:42:44.509691 kubelet[2953]: E0912 17:42:44.509694 2953 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-19-53\": node \"ip-172-31-19-53\" not found"
Sep 12 17:42:44.530736 kubelet[2953]: E0912 17:42:44.530690 2953 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-19-53\" not found"
Sep 12 17:42:44.631817 kubelet[2953]: E0912 17:42:44.631768 2953 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-19-53\" not found"
Sep 12 17:42:44.732036 kubelet[2953]: E0912 17:42:44.731990 2953 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-19-53\" not found"
Sep 12 17:42:44.810098 kubelet[2953]: I0912 17:42:44.810022 2953 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-19-53"
Sep 12 17:42:44.810379 kubelet[2953]: I0912 17:42:44.810078 2953 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-19-53"
Sep 12 17:42:44.817660 kubelet[2953]: E0912 17:42:44.817613 2953 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-19-53\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-19-53"
Sep 12 17:42:44.817660 kubelet[2953]: E0912 17:42:44.817618 2953 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-19-53\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-19-53"
Sep 12 17:42:44.832762 kubelet[2953]: I0912 17:42:44.831937 2953 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-19-53"
Sep 12 17:42:44.835940 kubelet[2953]: E0912 17:42:44.835902 2953 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-19-53\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-19-53"
Sep 12 17:42:44.837939 kubelet[2953]: I0912 17:42:44.836104 2953 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-19-53"
Sep 12 17:42:44.838802 kubelet[2953]: E0912 17:42:44.838775 2953 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-19-53\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-19-53"
Sep 12 17:42:44.838912 kubelet[2953]: I0912 17:42:44.838892 2953 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-19-53"
Sep 12 17:42:44.841263 kubelet[2953]: E0912 17:42:44.841232 2953 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-19-53\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-19-53"
Sep 12 17:42:45.696326 kubelet[2953]: I0912 17:42:45.696284 2953 apiserver.go:52] "Watching apiserver"
Sep 12 17:42:45.737897 kubelet[2953]: I0912 17:42:45.737830 2953 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 12 17:42:47.170111 kubelet[2953]: I0912 17:42:47.170076 2953 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-19-53"
Sep 12 17:42:47.270742 systemd[1]: Reload requested from client PID 3221 ('systemctl') (unit session-9.scope)...
Sep 12 17:42:47.270765 systemd[1]: Reloading...
Sep 12 17:42:47.398996 zram_generator::config[3265]: No configuration found.
Sep 12 17:42:47.684823 systemd[1]: Reloading finished in 413 ms.
Sep 12 17:42:47.713990 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:42:47.733859 systemd[1]: kubelet.service: Deactivated successfully.
Sep 12 17:42:47.734179 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:42:47.734247 systemd[1]: kubelet.service: Consumed 1.139s CPU time, 128.4M memory peak.
Sep 12 17:42:47.737384 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:42:48.139482 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:42:48.153804 (kubelet)[3325]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 17:42:48.222947 kubelet[3325]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:42:48.222947 kubelet[3325]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 12 17:42:48.222947 kubelet[3325]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:42:48.222947 kubelet[3325]: I0912 17:42:48.222118 3325 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 17:42:48.233700 kubelet[3325]: I0912 17:42:48.233668 3325 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 12 17:42:48.233849 kubelet[3325]: I0912 17:42:48.233840 3325 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 17:42:48.234662 kubelet[3325]: I0912 17:42:48.234637 3325 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 12 17:42:48.237263 kubelet[3325]: I0912 17:42:48.237238 3325 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 12 17:42:48.248516 kubelet[3325]: I0912 17:42:48.248460 3325 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 17:42:48.252652 kubelet[3325]: I0912 17:42:48.252602 3325 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 12 17:42:48.256698 kubelet[3325]: I0912 17:42:48.256665 3325 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 17:42:48.257186 kubelet[3325]: I0912 17:42:48.257152 3325 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 17:42:48.257508 kubelet[3325]: I0912 17:42:48.257267 3325 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-53","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 17:42:48.257713 kubelet[3325]: I0912 17:42:48.257699 3325 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 17:42:48.257784 kubelet[3325]: I0912 17:42:48.257777 3325 container_manager_linux.go:304] "Creating device plugin manager"
Sep 12 17:42:48.257892 kubelet[3325]: I0912 17:42:48.257884 3325 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:42:48.258241 kubelet[3325]: I0912 17:42:48.258153 3325 kubelet.go:446] "Attempting to sync node with API server"
Sep 12 17:42:48.262948 kubelet[3325]: I0912 17:42:48.258180 3325 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 17:42:48.262948 kubelet[3325]: I0912 17:42:48.261992 3325 kubelet.go:352] "Adding apiserver pod source"
Sep 12 17:42:48.262948 kubelet[3325]: I0912 17:42:48.262015 3325 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 17:42:48.265413 kubelet[3325]: I0912 17:42:48.265224 3325 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 12 17:42:48.266232 kubelet[3325]: I0912 17:42:48.266195 3325 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 12 17:42:48.267105 kubelet[3325]: I0912 17:42:48.267074 3325 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 12 17:42:48.267305 kubelet[3325]: I0912 17:42:48.267294 3325 server.go:1287] "Started kubelet"
Sep 12 17:42:48.272818 kubelet[3325]: I0912 17:42:48.272787 3325 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 17:42:48.292407 kubelet[3325]: I0912 17:42:48.291873 3325 server.go:169] "Starting to
listen" address="0.0.0.0" port=10250 Sep 12 17:42:48.300862 kubelet[3325]: I0912 17:42:48.298740 3325 server.go:479] "Adding debug handlers to kubelet server" Sep 12 17:42:48.310840 kubelet[3325]: I0912 17:42:48.309544 3325 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:42:48.313282 kubelet[3325]: I0912 17:42:48.311257 3325 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:42:48.316265 kubelet[3325]: I0912 17:42:48.315003 3325 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:42:48.316700 kubelet[3325]: I0912 17:42:48.316618 3325 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 17:42:48.316700 kubelet[3325]: I0912 17:42:48.316664 3325 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 17:42:48.323951 kubelet[3325]: I0912 17:42:48.322366 3325 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:42:48.323951 kubelet[3325]: I0912 17:42:48.323471 3325 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 17:42:48.323951 kubelet[3325]: I0912 17:42:48.323487 3325 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 17:42:48.323951 kubelet[3325]: E0912 17:42:48.323552 3325 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:42:48.327179 kubelet[3325]: I0912 17:42:48.327145 3325 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:42:48.328205 kubelet[3325]: E0912 17:42:48.327448 3325 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-19-53\" not found" Sep 12 17:42:48.330301 kubelet[3325]: I0912 17:42:48.330263 3325 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:42:48.330430 kubelet[3325]: I0912 17:42:48.330412 3325 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:42:48.349942 kubelet[3325]: I0912 17:42:48.349367 3325 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:42:48.350201 kubelet[3325]: I0912 17:42:48.350175 3325 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:42:48.353253 kubelet[3325]: E0912 17:42:48.352818 3325 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:42:48.354270 kubelet[3325]: I0912 17:42:48.354208 3325 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:42:48.424817 kubelet[3325]: E0912 17:42:48.424691 3325 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 17:42:48.440473 kubelet[3325]: I0912 17:42:48.440407 3325 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:42:48.440473 kubelet[3325]: I0912 17:42:48.440433 3325 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:42:48.440473 kubelet[3325]: I0912 17:42:48.440456 3325 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:42:48.440794 kubelet[3325]: I0912 17:42:48.440732 3325 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:42:48.440794 kubelet[3325]: I0912 17:42:48.440748 3325 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:42:48.440794 kubelet[3325]: I0912 17:42:48.440774 3325 policy_none.go:49] "None policy: Start" Sep 12 17:42:48.440794 kubelet[3325]: I0912 17:42:48.440787 3325 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:42:48.441168 kubelet[3325]: I0912 17:42:48.440800 3325 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:42:48.441168 kubelet[3325]: I0912 17:42:48.441115 3325 state_mem.go:75] "Updated machine memory state" Sep 12 17:42:48.448294 kubelet[3325]: I0912 17:42:48.447707 3325 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:42:48.448294 kubelet[3325]: I0912 17:42:48.447899 3325 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:42:48.450222 kubelet[3325]: I0912 17:42:48.447912 3325 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:42:48.450655 kubelet[3325]: I0912 17:42:48.450617 3325 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:42:48.452341 kubelet[3325]: E0912 17:42:48.452238 3325 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 12 17:42:48.573143 kubelet[3325]: I0912 17:42:48.572864 3325 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-53" Sep 12 17:42:48.592474 kubelet[3325]: I0912 17:42:48.592440 3325 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-19-53" Sep 12 17:42:48.593306 kubelet[3325]: I0912 17:42:48.593250 3325 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-19-53" Sep 12 17:42:48.625937 kubelet[3325]: I0912 17:42:48.625896 3325 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-19-53" Sep 12 17:42:48.629909 kubelet[3325]: I0912 17:42:48.629525 3325 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-19-53" Sep 12 17:42:48.630161 kubelet[3325]: I0912 17:42:48.630143 3325 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-19-53" Sep 12 17:42:48.639414 kubelet[3325]: E0912 17:42:48.639316 3325 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-19-53\" already exists" pod="kube-system/kube-apiserver-ip-172-31-19-53" Sep 12 17:42:48.641139 kubelet[3325]: I0912 17:42:48.640714 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ca74f63261e2c78d57487f1d7ba4e9f3-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-53\" (UID: \"ca74f63261e2c78d57487f1d7ba4e9f3\") " pod="kube-system/kube-controller-manager-ip-172-31-19-53" Sep 12 17:42:48.641139 kubelet[3325]: I0912 17:42:48.640769 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2b11a77dc40bb1639d17bd1ecad00ee8-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-53\" (UID: \"2b11a77dc40bb1639d17bd1ecad00ee8\") " pod="kube-system/kube-scheduler-ip-172-31-19-53" Sep 12 17:42:48.641139 kubelet[3325]: I0912 17:42:48.640797 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/70c0ea614b139d31417152227c080549-ca-certs\") pod \"kube-apiserver-ip-172-31-19-53\" (UID: \"70c0ea614b139d31417152227c080549\") " pod="kube-system/kube-apiserver-ip-172-31-19-53" Sep 12 17:42:48.641139 kubelet[3325]: I0912 17:42:48.640827 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ca74f63261e2c78d57487f1d7ba4e9f3-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-53\" (UID: \"ca74f63261e2c78d57487f1d7ba4e9f3\") " pod="kube-system/kube-controller-manager-ip-172-31-19-53" Sep 12 17:42:48.641139 kubelet[3325]: I0912 17:42:48.640857 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ca74f63261e2c78d57487f1d7ba4e9f3-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-53\" (UID: \"ca74f63261e2c78d57487f1d7ba4e9f3\") " pod="kube-system/kube-controller-manager-ip-172-31-19-53" Sep 12 17:42:48.641378 kubelet[3325]: I0912 17:42:48.640882 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ca74f63261e2c78d57487f1d7ba4e9f3-kubeconfig\") pod 
\"kube-controller-manager-ip-172-31-19-53\" (UID: \"ca74f63261e2c78d57487f1d7ba4e9f3\") " pod="kube-system/kube-controller-manager-ip-172-31-19-53" Sep 12 17:42:48.641378 kubelet[3325]: I0912 17:42:48.640907 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/70c0ea614b139d31417152227c080549-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-53\" (UID: \"70c0ea614b139d31417152227c080549\") " pod="kube-system/kube-apiserver-ip-172-31-19-53" Sep 12 17:42:48.641498 kubelet[3325]: I0912 17:42:48.641479 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/70c0ea614b139d31417152227c080549-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-53\" (UID: \"70c0ea614b139d31417152227c080549\") " pod="kube-system/kube-apiserver-ip-172-31-19-53" Sep 12 17:42:48.641573 kubelet[3325]: I0912 17:42:48.641558 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ca74f63261e2c78d57487f1d7ba4e9f3-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-53\" (UID: \"ca74f63261e2c78d57487f1d7ba4e9f3\") " pod="kube-system/kube-controller-manager-ip-172-31-19-53" Sep 12 17:42:49.263801 kubelet[3325]: I0912 17:42:49.263748 3325 apiserver.go:52] "Watching apiserver" Sep 12 17:42:49.330395 kubelet[3325]: I0912 17:42:49.330350 3325 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:42:49.388153 kubelet[3325]: I0912 17:42:49.388097 3325 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-19-53" Sep 12 17:42:49.401540 kubelet[3325]: E0912 17:42:49.401328 3325 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-19-53\" already exists" pod="kube-system/kube-apiserver-ip-172-31-19-53" Sep 12 17:42:49.440510 kubelet[3325]: I0912 17:42:49.440394 3325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-19-53" podStartSLOduration=2.440374356 podStartE2EDuration="2.440374356s" podCreationTimestamp="2025-09-12 17:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:42:49.43430647 +0000 UTC m=+1.272498796" watchObservedRunningTime="2025-09-12 17:42:49.440374356 +0000 UTC m=+1.278566677" Sep 12 17:42:49.452215 kubelet[3325]: I0912 17:42:49.452152 3325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-19-53" podStartSLOduration=1.451899063 podStartE2EDuration="1.451899063s" podCreationTimestamp="2025-09-12 17:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:42:49.450856167 +0000 UTC m=+1.289048487" watchObservedRunningTime="2025-09-12 17:42:49.451899063 +0000 UTC m=+1.290091382" Sep 12 17:42:49.486898 kubelet[3325]: I0912 17:42:49.486837 3325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-19-53" podStartSLOduration=1.486813615 podStartE2EDuration="1.486813615s" podCreationTimestamp="2025-09-12 17:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:42:49.470744541 +0000 UTC m=+1.308936863" watchObservedRunningTime="2025-09-12 17:42:49.486813615 +0000 UTC m=+1.325005937" Sep 12 17:42:51.333500 update_engine[1867]: I20250912 17:42:51.332882 1867 update_attempter.cc:509] Updating boot flags... Sep 12 17:42:52.400266 kubelet[3325]: I0912 17:42:52.400230 3325 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:42:52.400821 containerd[1909]: time="2025-09-12T17:42:52.400779621Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 17:42:52.401436 kubelet[3325]: I0912 17:42:52.401349 3325 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:42:53.265253 systemd[1]: Created slice kubepods-besteffort-pod8a8d713d_4bb8_47c2_8958_d887b85c34e4.slice - libcontainer container kubepods-besteffort-pod8a8d713d_4bb8_47c2_8958_d887b85c34e4.slice. Sep 12 17:42:53.273500 kubelet[3325]: I0912 17:42:53.273455 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8a8d713d-4bb8-47c2-8958-d887b85c34e4-kube-proxy\") pod \"kube-proxy-zztdr\" (UID: \"8a8d713d-4bb8-47c2-8958-d887b85c34e4\") " pod="kube-system/kube-proxy-zztdr" Sep 12 17:42:53.273799 kubelet[3325]: I0912 17:42:53.273701 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a8d713d-4bb8-47c2-8958-d887b85c34e4-xtables-lock\") pod \"kube-proxy-zztdr\" (UID: \"8a8d713d-4bb8-47c2-8958-d887b85c34e4\") " pod="kube-system/kube-proxy-zztdr" Sep 12 17:42:53.273799 kubelet[3325]: I0912 17:42:53.273752 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a8d713d-4bb8-47c2-8958-d887b85c34e4-lib-modules\") pod \"kube-proxy-zztdr\" (UID: \"8a8d713d-4bb8-47c2-8958-d887b85c34e4\") " pod="kube-system/kube-proxy-zztdr" Sep 12 17:42:53.274018 kubelet[3325]: I0912 17:42:53.273780 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzhhm\" (UniqueName: \"kubernetes.io/projected/8a8d713d-4bb8-47c2-8958-d887b85c34e4-kube-api-access-kzhhm\") pod \"kube-proxy-zztdr\" (UID: \"8a8d713d-4bb8-47c2-8958-d887b85c34e4\") " pod="kube-system/kube-proxy-zztdr" Sep 12 17:42:53.575738 containerd[1909]: time="2025-09-12T17:42:53.575571899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zztdr,Uid:8a8d713d-4bb8-47c2-8958-d887b85c34e4,Namespace:kube-system,Attempt:0,}" Sep 12 17:42:53.627365 systemd[1]: Created slice kubepods-besteffort-pod65fbd751_abab_4d94_ac78_4b140d216ebe.slice - libcontainer container kubepods-besteffort-pod65fbd751_abab_4d94_ac78_4b140d216ebe.slice. 
Sep 12 17:42:53.633934 containerd[1909]: time="2025-09-12T17:42:53.633436986Z" level=info msg="connecting to shim 3431722dd422341f3ccb951d514b4f490ece79ce2a554de9dcf023d620fbdc4e" address="unix:///run/containerd/s/f45cb0d6e5942b2a1bc3dfc34fa7b7ae1a8b1f7c1ed5d35f1a8c3fda30453846" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:42:53.680049 kubelet[3325]: I0912 17:42:53.679829 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc7xp\" (UniqueName: \"kubernetes.io/projected/65fbd751-abab-4d94-ac78-4b140d216ebe-kube-api-access-wc7xp\") pod \"tigera-operator-755d956888-nkl4q\" (UID: \"65fbd751-abab-4d94-ac78-4b140d216ebe\") " pod="tigera-operator/tigera-operator-755d956888-nkl4q"
Sep 12 17:42:53.680049 kubelet[3325]: I0912 17:42:53.679875 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/65fbd751-abab-4d94-ac78-4b140d216ebe-var-lib-calico\") pod \"tigera-operator-755d956888-nkl4q\" (UID: \"65fbd751-abab-4d94-ac78-4b140d216ebe\") " pod="tigera-operator/tigera-operator-755d956888-nkl4q"
Sep 12 17:42:53.684171 systemd[1]: Started cri-containerd-3431722dd422341f3ccb951d514b4f490ece79ce2a554de9dcf023d620fbdc4e.scope - libcontainer container 3431722dd422341f3ccb951d514b4f490ece79ce2a554de9dcf023d620fbdc4e.
Sep 12 17:42:53.731224 containerd[1909]: time="2025-09-12T17:42:53.731145917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zztdr,Uid:8a8d713d-4bb8-47c2-8958-d887b85c34e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"3431722dd422341f3ccb951d514b4f490ece79ce2a554de9dcf023d620fbdc4e\""
Sep 12 17:42:53.735356 containerd[1909]: time="2025-09-12T17:42:53.735302215Z" level=info msg="CreateContainer within sandbox \"3431722dd422341f3ccb951d514b4f490ece79ce2a554de9dcf023d620fbdc4e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 12 17:42:53.761626 containerd[1909]: time="2025-09-12T17:42:53.761418438Z" level=info msg="Container aa6ddd910ecf40293091ced900a0bc109c46c0b49742cdbdd6f8859baf0dff91: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:42:53.767753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3216950706.mount: Deactivated successfully.
Sep 12 17:42:53.778693 containerd[1909]: time="2025-09-12T17:42:53.778632766Z" level=info msg="CreateContainer within sandbox \"3431722dd422341f3ccb951d514b4f490ece79ce2a554de9dcf023d620fbdc4e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"aa6ddd910ecf40293091ced900a0bc109c46c0b49742cdbdd6f8859baf0dff91\""
Sep 12 17:42:53.779695 containerd[1909]: time="2025-09-12T17:42:53.779668163Z" level=info msg="StartContainer for \"aa6ddd910ecf40293091ced900a0bc109c46c0b49742cdbdd6f8859baf0dff91\""
Sep 12 17:42:53.782936 containerd[1909]: time="2025-09-12T17:42:53.782873025Z" level=info msg="connecting to shim aa6ddd910ecf40293091ced900a0bc109c46c0b49742cdbdd6f8859baf0dff91" address="unix:///run/containerd/s/f45cb0d6e5942b2a1bc3dfc34fa7b7ae1a8b1f7c1ed5d35f1a8c3fda30453846" protocol=ttrpc version=3
Sep 12 17:42:53.810151 systemd[1]: Started cri-containerd-aa6ddd910ecf40293091ced900a0bc109c46c0b49742cdbdd6f8859baf0dff91.scope - libcontainer container aa6ddd910ecf40293091ced900a0bc109c46c0b49742cdbdd6f8859baf0dff91.
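The "kube-api-access-wc7xp" volume above is the projected service-account token that every pod receives; inside the container it surfaces at a well-known path. A minimal sketch (illustrative, not taken from this log) of how a workload such as tigera-operator reads it:

    # Illustrative sketch: reading the projected kube-api-access-* volume
    # from inside a pod. The mount path is the standard Kubernetes convention.
    SA_DIR = "/var/run/secrets/kubernetes.io/serviceaccount"

    with open(f"{SA_DIR}/token") as f:
        token = f.read()        # short-lived, audience-bound bearer token
    with open(f"{SA_DIR}/namespace") as f:
        namespace = f.read()    # e.g. "tigera-operator"
    # SA_DIR + "/ca.crt" holds the cluster CA used to verify the apiserver.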
Sep 12 17:42:53.862982 containerd[1909]: time="2025-09-12T17:42:53.862627674Z" level=info msg="StartContainer for \"aa6ddd910ecf40293091ced900a0bc109c46c0b49742cdbdd6f8859baf0dff91\" returns successfully"
Sep 12 17:42:53.938237 containerd[1909]: time="2025-09-12T17:42:53.938185845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-nkl4q,Uid:65fbd751-abab-4d94-ac78-4b140d216ebe,Namespace:tigera-operator,Attempt:0,}"
Sep 12 17:42:53.970912 containerd[1909]: time="2025-09-12T17:42:53.970789768Z" level=info msg="connecting to shim ca6584059693898d96d5e8c4dde83718af8e66d1bf6b7a3ac7b72040841aa453" address="unix:///run/containerd/s/0f16c1bdc2d84a1f8f660e145452d98773e768c502cbf72a1de0c2a9a7849235" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:42:54.009409 systemd[1]: Started cri-containerd-ca6584059693898d96d5e8c4dde83718af8e66d1bf6b7a3ac7b72040841aa453.scope - libcontainer container ca6584059693898d96d5e8c4dde83718af8e66d1bf6b7a3ac7b72040841aa453.
Sep 12 17:42:54.074180 containerd[1909]: time="2025-09-12T17:42:54.074087388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-nkl4q,Uid:65fbd751-abab-4d94-ac78-4b140d216ebe,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ca6584059693898d96d5e8c4dde83718af8e66d1bf6b7a3ac7b72040841aa453\""
Sep 12 17:42:54.077424 containerd[1909]: time="2025-09-12T17:42:54.077387082Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 12 17:42:54.407105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3550513281.mount: Deactivated successfully.
Sep 12 17:42:54.441951 kubelet[3325]: I0912 17:42:54.441387 3325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zztdr" podStartSLOduration=1.44136441 podStartE2EDuration="1.44136441s" podCreationTimestamp="2025-09-12 17:42:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:42:54.439610449 +0000 UTC m=+6.277802771" watchObservedRunningTime="2025-09-12 17:42:54.44136441 +0000 UTC m=+6.279556732"
Sep 12 17:42:55.518083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4015651039.mount: Deactivated successfully.
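At this point the operator's sandbox is up and the image pull is in flight; a common way to follow the same sequence from outside is to watch for the pod to reach Running. A hedged sketch (not part of the log; assumes the kubernetes Python client and cluster credentials):

    # Hedged sketch: waiting for a pod in the tigera-operator namespace to
    # reach Running, mirroring the pull-then-start sequence recorded above.
    from kubernetes import client, config, watch

    config.load_kube_config()
    v1 = client.CoreV1Api()
    w = watch.Watch()
    for event in w.stream(v1.list_namespaced_pod,
                          namespace="tigera-operator",
                          timeout_seconds=120):
        pod = event["object"]
        if pod.status.phase == "Running":
            print(f"{pod.metadata.name} is Running")
            w.stop()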
Sep 12 17:42:56.925267 containerd[1909]: time="2025-09-12T17:42:56.925208802Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:42:56.926842 containerd[1909]: time="2025-09-12T17:42:56.926799874Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 12 17:42:56.929322 containerd[1909]: time="2025-09-12T17:42:56.929273136Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:42:56.933155 containerd[1909]: time="2025-09-12T17:42:56.932090359Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:42:56.933155 containerd[1909]: time="2025-09-12T17:42:56.932861425Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.854541753s"
Sep 12 17:42:56.933155 containerd[1909]: time="2025-09-12T17:42:56.932896578Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 12 17:42:56.936363 containerd[1909]: time="2025-09-12T17:42:56.936317023Z" level=info msg="CreateContainer within sandbox \"ca6584059693898d96d5e8c4dde83718af8e66d1bf6b7a3ac7b72040841aa453\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 12 17:42:56.963422 containerd[1909]: time="2025-09-12T17:42:56.961480936Z" level=info msg="Container eb4fb18a20fd3d6266570a30c404c4d407ca1baaa0176657ae2a02794b02ad5f: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:42:56.973275 containerd[1909]: time="2025-09-12T17:42:56.973227796Z" level=info msg="CreateContainer within sandbox \"ca6584059693898d96d5e8c4dde83718af8e66d1bf6b7a3ac7b72040841aa453\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"eb4fb18a20fd3d6266570a30c404c4d407ca1baaa0176657ae2a02794b02ad5f\""
Sep 12 17:42:56.973822 containerd[1909]: time="2025-09-12T17:42:56.973758577Z" level=info msg="StartContainer for \"eb4fb18a20fd3d6266570a30c404c4d407ca1baaa0176657ae2a02794b02ad5f\""
Sep 12 17:42:56.974732 containerd[1909]: time="2025-09-12T17:42:56.974697900Z" level=info msg="connecting to shim eb4fb18a20fd3d6266570a30c404c4d407ca1baaa0176657ae2a02794b02ad5f" address="unix:///run/containerd/s/0f16c1bdc2d84a1f8f660e145452d98773e768c502cbf72a1de0c2a9a7849235" protocol=ttrpc version=3
Sep 12 17:42:57.004169 systemd[1]: Started cri-containerd-eb4fb18a20fd3d6266570a30c404c4d407ca1baaa0176657ae2a02794b02ad5f.scope - libcontainer container eb4fb18a20fd3d6266570a30c404c4d407ca1baaa0176657ae2a02794b02ad5f.
Sep 12 17:42:57.056029 containerd[1909]: time="2025-09-12T17:42:57.055972759Z" level=info msg="StartContainer for \"eb4fb18a20fd3d6266570a30c404c4d407ca1baaa0176657ae2a02794b02ad5f\" returns successfully"
Sep 12 17:42:57.443979 kubelet[3325]: I0912 17:42:57.443696 3325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-nkl4q" podStartSLOduration=1.585718114 podStartE2EDuration="4.443672876s" podCreationTimestamp="2025-09-12 17:42:53 +0000 UTC" firstStartedPulling="2025-09-12 17:42:54.076272977 +0000 UTC m=+5.914465288" lastFinishedPulling="2025-09-12 17:42:56.934227753 +0000 UTC m=+8.772420050" observedRunningTime="2025-09-12 17:42:57.42925013 +0000 UTC m=+9.267442456" watchObservedRunningTime="2025-09-12 17:42:57.443672876 +0000 UTC m=+9.281865196"
Sep 12 17:43:03.695211 sudo[2386]: pam_unix(sudo:session): session closed for user root
Sep 12 17:43:03.717063 sshd[2385]: Connection closed by 139.178.68.195 port 34882
Sep 12 17:43:03.718665 sshd-session[2370]: pam_unix(sshd:session): session closed for user core
Sep 12 17:43:03.726268 systemd[1]: sshd@8-172.31.19.53:22-139.178.68.195:34882.service: Deactivated successfully.
Sep 12 17:43:03.730694 systemd[1]: session-9.scope: Deactivated successfully.
Sep 12 17:43:03.731307 systemd[1]: session-9.scope: Consumed 5.296s CPU time, 148.7M memory peak.
Sep 12 17:43:03.735792 systemd-logind[1861]: Session 9 logged out. Waiting for processes to exit.
Sep 12 17:43:03.742856 systemd-logind[1861]: Removed session 9.
Sep 12 17:43:08.800178 systemd[1]: Created slice kubepods-besteffort-podd29a4ed9_26bc_4297_a889_6720fb55981d.slice - libcontainer container kubepods-besteffort-podd29a4ed9_26bc_4297_a889_6720fb55981d.slice.
Sep 12 17:43:08.855471 kubelet[3325]: I0912 17:43:08.855314 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d29a4ed9-26bc-4297-a889-6720fb55981d-typha-certs\") pod \"calico-typha-86979bc468-n5whl\" (UID: \"d29a4ed9-26bc-4297-a889-6720fb55981d\") " pod="calico-system/calico-typha-86979bc468-n5whl"
Sep 12 17:43:08.855471 kubelet[3325]: I0912 17:43:08.855375 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d29a4ed9-26bc-4297-a889-6720fb55981d-tigera-ca-bundle\") pod \"calico-typha-86979bc468-n5whl\" (UID: \"d29a4ed9-26bc-4297-a889-6720fb55981d\") " pod="calico-system/calico-typha-86979bc468-n5whl"
Sep 12 17:43:08.855471 kubelet[3325]: I0912 17:43:08.855402 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7tcc\" (UniqueName: \"kubernetes.io/projected/d29a4ed9-26bc-4297-a889-6720fb55981d-kube-api-access-b7tcc\") pod \"calico-typha-86979bc468-n5whl\" (UID: \"d29a4ed9-26bc-4297-a889-6720fb55981d\") " pod="calico-system/calico-typha-86979bc468-n5whl"
Sep 12 17:43:09.042984 systemd[1]: Created slice kubepods-besteffort-pod00e501c7_f98b_4d9b_8ff5_7ebffce42635.slice - libcontainer container kubepods-besteffort-pod00e501c7_f98b_4d9b_8ff5_7ebffce42635.slice.
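The pod_startup_latency_tracker entries encode a simple relationship: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window. A small check against the tigera-operator numbers recorded above (all timestamps fall inside 17:42, so plain float seconds suffice):

    # Verifying the tigera-operator startup numbers from the log above.
    created  = 53.0            # podCreationTimestamp 17:42:53
    running  = 57.443672876    # watchObservedRunningTime
    pull_beg = 54.076272977    # firstStartedPulling
    pull_end = 56.934227753    # lastFinishedPulling

    e2e = running - created                # 4.443672876 = podStartE2EDuration
    slo = e2e - (pull_end - pull_beg)      # 1.585718100 ~= podStartSLOduration
    print(f"E2E={e2e:.9f}s  SLO={slo:.9f}s")

The reported SLO value (1.585718114) differs in the last digits because the kubelet measures the pull window with the monotonic clock (the m=+... readings), not wall-clock time. The kube-proxy and control-plane pods show zero-value pull timestamps ("0001-01-01 ...") because their images were already present, so SLO and E2E coincide.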
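The calico-node pod that follows mounts a flexvol-driver-host host-path; until Calico's init container installs the uds binary there, the kubelet's FlexVolume probe of nodeagent~uds fails repeatedly in the entries below with "unexpected end of JSON input" (a missing executable yields empty stdout, which cannot be unmarshalled). For context only, a minimal illustrative stub of the JSON contract a FlexVolume driver's init call must satisfy — the real uds driver is a Go binary shipped by Calico, not this sketch:

    #!/usr/bin/env python3
    # Illustrative FlexVolume stub: the kubelet invokes the executable with a
    # subcommand ("init", "mount", ...) and parses stdout as JSON.
    import json
    import sys

    def main() -> None:
        op = sys.argv[1] if len(sys.argv) > 1 else ""
        if op == "init":
            # Empty output here is exactly what produces the
            # "unexpected end of JSON input" errors in the log.
            print(json.dumps({"status": "Success",
                              "capabilities": {"attach": False}}))
        else:
            print(json.dumps({"status": "Not supported"}))

    if __name__ == "__main__":
        main()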
Sep 12 17:43:09.057362 kubelet[3325]: I0912 17:43:09.057235 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/00e501c7-f98b-4d9b-8ff5-7ebffce42635-var-run-calico\") pod \"calico-node-q56bp\" (UID: \"00e501c7-f98b-4d9b-8ff5-7ebffce42635\") " pod="calico-system/calico-node-q56bp" Sep 12 17:43:09.057362 kubelet[3325]: I0912 17:43:09.057297 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/00e501c7-f98b-4d9b-8ff5-7ebffce42635-xtables-lock\") pod \"calico-node-q56bp\" (UID: \"00e501c7-f98b-4d9b-8ff5-7ebffce42635\") " pod="calico-system/calico-node-q56bp" Sep 12 17:43:09.057362 kubelet[3325]: I0912 17:43:09.057321 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/00e501c7-f98b-4d9b-8ff5-7ebffce42635-cni-bin-dir\") pod \"calico-node-q56bp\" (UID: \"00e501c7-f98b-4d9b-8ff5-7ebffce42635\") " pod="calico-system/calico-node-q56bp" Sep 12 17:43:09.057362 kubelet[3325]: I0912 17:43:09.057345 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/00e501c7-f98b-4d9b-8ff5-7ebffce42635-cni-net-dir\") pod \"calico-node-q56bp\" (UID: \"00e501c7-f98b-4d9b-8ff5-7ebffce42635\") " pod="calico-system/calico-node-q56bp" Sep 12 17:43:09.057623 kubelet[3325]: I0912 17:43:09.057367 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/00e501c7-f98b-4d9b-8ff5-7ebffce42635-cni-log-dir\") pod \"calico-node-q56bp\" (UID: \"00e501c7-f98b-4d9b-8ff5-7ebffce42635\") " pod="calico-system/calico-node-q56bp" Sep 12 17:43:09.057623 kubelet[3325]: I0912 17:43:09.057393 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/00e501c7-f98b-4d9b-8ff5-7ebffce42635-policysync\") pod \"calico-node-q56bp\" (UID: \"00e501c7-f98b-4d9b-8ff5-7ebffce42635\") " pod="calico-system/calico-node-q56bp" Sep 12 17:43:09.057623 kubelet[3325]: I0912 17:43:09.057413 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/00e501c7-f98b-4d9b-8ff5-7ebffce42635-var-lib-calico\") pod \"calico-node-q56bp\" (UID: \"00e501c7-f98b-4d9b-8ff5-7ebffce42635\") " pod="calico-system/calico-node-q56bp" Sep 12 17:43:09.057623 kubelet[3325]: I0912 17:43:09.057438 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cdz2\" (UniqueName: \"kubernetes.io/projected/00e501c7-f98b-4d9b-8ff5-7ebffce42635-kube-api-access-5cdz2\") pod \"calico-node-q56bp\" (UID: \"00e501c7-f98b-4d9b-8ff5-7ebffce42635\") " pod="calico-system/calico-node-q56bp" Sep 12 17:43:09.057623 kubelet[3325]: I0912 17:43:09.057464 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/00e501c7-f98b-4d9b-8ff5-7ebffce42635-flexvol-driver-host\") pod \"calico-node-q56bp\" (UID: \"00e501c7-f98b-4d9b-8ff5-7ebffce42635\") " pod="calico-system/calico-node-q56bp" Sep 12 17:43:09.057827 kubelet[3325]: I0912 17:43:09.057485 3325 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00e501c7-f98b-4d9b-8ff5-7ebffce42635-lib-modules\") pod \"calico-node-q56bp\" (UID: \"00e501c7-f98b-4d9b-8ff5-7ebffce42635\") " pod="calico-system/calico-node-q56bp" Sep 12 17:43:09.057827 kubelet[3325]: I0912 17:43:09.057507 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/00e501c7-f98b-4d9b-8ff5-7ebffce42635-node-certs\") pod \"calico-node-q56bp\" (UID: \"00e501c7-f98b-4d9b-8ff5-7ebffce42635\") " pod="calico-system/calico-node-q56bp" Sep 12 17:43:09.057827 kubelet[3325]: I0912 17:43:09.057533 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00e501c7-f98b-4d9b-8ff5-7ebffce42635-tigera-ca-bundle\") pod \"calico-node-q56bp\" (UID: \"00e501c7-f98b-4d9b-8ff5-7ebffce42635\") " pod="calico-system/calico-node-q56bp" Sep 12 17:43:09.119719 containerd[1909]: time="2025-09-12T17:43:09.119439470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-86979bc468-n5whl,Uid:d29a4ed9-26bc-4297-a889-6720fb55981d,Namespace:calico-system,Attempt:0,}" Sep 12 17:43:09.187961 kubelet[3325]: E0912 17:43:09.187845 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:43:09.191583 kubelet[3325]: W0912 17:43:09.191529 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:43:09.202770 kubelet[3325]: E0912 17:43:09.200406 3325 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:43:09.202770 kubelet[3325]: E0912 17:43:09.200514 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:43:09.202770 kubelet[3325]: W0912 17:43:09.200530 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:43:09.202770 kubelet[3325]: E0912 17:43:09.200556 3325 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:43:09.202770 kubelet[3325]: E0912 17:43:09.202670 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:43:09.202770 kubelet[3325]: W0912 17:43:09.202692 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:43:09.202770 kubelet[3325]: E0912 17:43:09.202717 3325 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:43:09.222765 containerd[1909]: time="2025-09-12T17:43:09.222704190Z" level=info msg="connecting to shim 8237080e0783d1da9de93026e08c4f42d6647f355e15cddb3eb7f3a2085bf86b" address="unix:///run/containerd/s/b34de46c872f369059eb2c193e55ada3f8792bd7fe49494c0a414e9af7c8880f" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:43:09.227038 kubelet[3325]: E0912 17:43:09.226962 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:43:09.228563 kubelet[3325]: W0912 17:43:09.228412 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:43:09.228563 kubelet[3325]: E0912 17:43:09.228451 3325 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:43:09.281246 systemd[1]: Started cri-containerd-8237080e0783d1da9de93026e08c4f42d6647f355e15cddb3eb7f3a2085bf86b.scope - libcontainer container 8237080e0783d1da9de93026e08c4f42d6647f355e15cddb3eb7f3a2085bf86b. Sep 12 17:43:09.285543 kubelet[3325]: E0912 17:43:09.284313 3325 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghf2w" podUID="a5151dfa-6356-4409-97e2-1942a154fe88" Sep 12 17:43:09.350013 kubelet[3325]: E0912 17:43:09.349290 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:43:09.350013 kubelet[3325]: W0912 17:43:09.349321 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:43:09.350013 kubelet[3325]: E0912 17:43:09.349346 3325 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:43:09.350013 kubelet[3325]: E0912 17:43:09.349716 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:43:09.350013 kubelet[3325]: W0912 17:43:09.349728 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:43:09.350013 kubelet[3325]: E0912 17:43:09.349746 3325 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:43:09.350528 kubelet[3325]: E0912 17:43:09.350406 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:43:09.350528 kubelet[3325]: W0912 17:43:09.350419 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:43:09.350528 kubelet[3325]: E0912 17:43:09.350437 3325 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:43:09.351166 kubelet[3325]: E0912 17:43:09.350840 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:43:09.351166 kubelet[3325]: W0912 17:43:09.350852 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:43:09.351166 kubelet[3325]: E0912 17:43:09.350867 3325 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:43:09.352279 containerd[1909]: time="2025-09-12T17:43:09.352242180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-q56bp,Uid:00e501c7-f98b-4d9b-8ff5-7ebffce42635,Namespace:calico-system,Attempt:0,}" Sep 12 17:43:09.352566 kubelet[3325]: E0912 17:43:09.352088 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:43:09.352644 kubelet[3325]: W0912 17:43:09.352574 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:43:09.352644 kubelet[3325]: E0912 17:43:09.352595 3325 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:43:09.352840 kubelet[3325]: E0912 17:43:09.352822 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:43:09.352913 kubelet[3325]: W0912 17:43:09.352840 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:43:09.352913 kubelet[3325]: E0912 17:43:09.352854 3325 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:43:09.353550 kubelet[3325]: E0912 17:43:09.353530 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:43:09.353550 kubelet[3325]: W0912 17:43:09.353549 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:43:09.353693 kubelet[3325]: E0912 17:43:09.353564 3325 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:43:09.353872 kubelet[3325]: E0912 17:43:09.353855 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:43:09.353940 kubelet[3325]: W0912 17:43:09.353872 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:43:09.353940 kubelet[3325]: E0912 17:43:09.353890 3325 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:43:09.355065 kubelet[3325]: E0912 17:43:09.355047 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:43:09.355065 kubelet[3325]: W0912 17:43:09.355063 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:43:09.355202 kubelet[3325]: E0912 17:43:09.355078 3325 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:43:09.355497 kubelet[3325]: E0912 17:43:09.355470 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:43:09.355497 kubelet[3325]: W0912 17:43:09.355487 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:43:09.355602 kubelet[3325]: E0912 17:43:09.355501 3325 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:43:09.356083 kubelet[3325]: E0912 17:43:09.356064 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:43:09.356083 kubelet[3325]: W0912 17:43:09.356082 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:43:09.356205 kubelet[3325]: E0912 17:43:09.356097 3325 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:43:09.357080 kubelet[3325]: E0912 17:43:09.357060 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:43:09.357080 kubelet[3325]: W0912 17:43:09.357078 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:43:09.357186 kubelet[3325]: E0912 17:43:09.357094 3325 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:43:09.358003 kubelet[3325]: E0912 17:43:09.357986 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:43:09.358003 kubelet[3325]: W0912 17:43:09.358003 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:43:09.358147 kubelet[3325]: E0912 17:43:09.358029 3325 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:43:09.358547 kubelet[3325]: E0912 17:43:09.358528 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:43:09.358547 kubelet[3325]: W0912 17:43:09.358547 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:43:09.358686 kubelet[3325]: E0912 17:43:09.358562 3325 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:43:09.358954 kubelet[3325]: E0912 17:43:09.358902 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:43:09.359013 kubelet[3325]: W0912 17:43:09.358983 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:43:09.359013 kubelet[3325]: E0912 17:43:09.359002 3325 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:43:09.361098 kubelet[3325]: E0912 17:43:09.361061 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:43:09.361098 kubelet[3325]: W0912 17:43:09.361081 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:43:09.361098 kubelet[3325]: E0912 17:43:09.361102 3325 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:43:09.361384 kubelet[3325]: E0912 17:43:09.361361 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:43:09.361384 kubelet[3325]: W0912 17:43:09.361374 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:43:09.361713 kubelet[3325]: E0912 17:43:09.361388 3325 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:43:09.361713 kubelet[3325]: E0912 17:43:09.361599 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:43:09.361713 kubelet[3325]: W0912 17:43:09.361610 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:43:09.361713 kubelet[3325]: E0912 17:43:09.361624 3325 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:43:09.362504 kubelet[3325]: E0912 17:43:09.362187 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:43:09.362504 kubelet[3325]: W0912 17:43:09.362200 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:43:09.362504 kubelet[3325]: E0912 17:43:09.362221 3325 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:43:09.362903 kubelet[3325]: E0912 17:43:09.362736 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:43:09.362903 kubelet[3325]: W0912 17:43:09.362749 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:43:09.362903 kubelet[3325]: E0912 17:43:09.362765 3325 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:43:09.364034 kubelet[3325]: E0912 17:43:09.364006 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:43:09.364034 kubelet[3325]: W0912 17:43:09.364033 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:43:09.364158 kubelet[3325]: E0912 17:43:09.364050 3325 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:43:09.364158 kubelet[3325]: I0912 17:43:09.364086 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a5151dfa-6356-4409-97e2-1942a154fe88-varrun\") pod \"csi-node-driver-ghf2w\" (UID: \"a5151dfa-6356-4409-97e2-1942a154fe88\") " pod="calico-system/csi-node-driver-ghf2w" Sep 12 17:43:09.365101 kubelet[3325]: E0912 17:43:09.365075 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:43:09.365101 kubelet[3325]: W0912 17:43:09.365096 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:43:09.367602 kubelet[3325]: E0912 17:43:09.365118 3325 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:43:09.367602 kubelet[3325]: I0912 17:43:09.365145 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a5151dfa-6356-4409-97e2-1942a154fe88-kubelet-dir\") pod \"csi-node-driver-ghf2w\" (UID: \"a5151dfa-6356-4409-97e2-1942a154fe88\") " pod="calico-system/csi-node-driver-ghf2w" Sep 12 17:43:09.367602 kubelet[3325]: E0912 17:43:09.365429 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:43:09.367602 kubelet[3325]: W0912 17:43:09.365468 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:43:09.367602 kubelet[3325]: E0912 17:43:09.365491 3325 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:43:09.367602 kubelet[3325]: E0912 17:43:09.365750 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:43:09.367602 kubelet[3325]: W0912 17:43:09.365781 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:43:09.367602 kubelet[3325]: E0912 17:43:09.365804 3325 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:43:09.367602 kubelet[3325]: E0912 17:43:09.366103 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:43:09.368293 kubelet[3325]: W0912 17:43:09.366114 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:43:09.368293 kubelet[3325]: E0912 17:43:09.366156 3325 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Sep 12 17:43:09.368293 kubelet[3325]: I0912 17:43:09.366194 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a5151dfa-6356-4409-97e2-1942a154fe88-registration-dir\") pod \"csi-node-driver-ghf2w\" (UID: \"a5151dfa-6356-4409-97e2-1942a154fe88\") " pod="calico-system/csi-node-driver-ghf2w"
Sep 12 17:43:09.368628 kubelet[3325]: I0912 17:43:09.367656 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9mnq\" (UniqueName: \"kubernetes.io/projected/a5151dfa-6356-4409-97e2-1942a154fe88-kube-api-access-n9mnq\") pod \"csi-node-driver-ghf2w\" (UID: \"a5151dfa-6356-4409-97e2-1942a154fe88\") " pod="calico-system/csi-node-driver-ghf2w"
Sep 12 17:43:09.369290 kubelet[3325]: I0912 17:43:09.369190 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a5151dfa-6356-4409-97e2-1942a154fe88-socket-dir\") pod \"csi-node-driver-ghf2w\" (UID: \"a5151dfa-6356-4409-97e2-1942a154fe88\") " pod="calico-system/csi-node-driver-ghf2w"
Sep 12 17:43:09.406177 containerd[1909]: time="2025-09-12T17:43:09.406113519Z" level=info msg="connecting to shim e254db1abf5677803d937952b104599c4436cf6bb90b7633105f05e12179bc50" address="unix:///run/containerd/s/47e9e46045315b5fde24dc8cb4d9d6bc08f884221597235acba4dad62eac5b64" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:43:09.502204 systemd[1]: Started cri-containerd-e254db1abf5677803d937952b104599c4436cf6bb90b7633105f05e12179bc50.scope - libcontainer container e254db1abf5677803d937952b104599c4436cf6bb90b7633105f05e12179bc50.
Sep 12 17:43:09.507155 containerd[1909]: time="2025-09-12T17:43:09.507106147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-86979bc468-n5whl,Uid:d29a4ed9-26bc-4297-a889-6720fb55981d,Namespace:calico-system,Attempt:0,} returns sandbox id \"8237080e0783d1da9de93026e08c4f42d6647f355e15cddb3eb7f3a2085bf86b\""
Sep 12 17:43:09.515184 containerd[1909]: time="2025-09-12T17:43:09.515147231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 12 17:43:09.741613 containerd[1909]: time="2025-09-12T17:43:09.741570882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-q56bp,Uid:00e501c7-f98b-4d9b-8ff5-7ebffce42635,Namespace:calico-system,Attempt:0,} returns sandbox id \"e254db1abf5677803d937952b104599c4436cf6bb90b7633105f05e12179bc50\""
Sep 12 17:43:11.244743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2691954828.mount: Deactivated successfully.
Sep 12 17:43:11.323960 kubelet[3325]: E0912 17:43:11.323895 3325 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghf2w" podUID="a5151dfa-6356-4409-97e2-1942a154fe88"
Sep 12 17:43:12.640446 containerd[1909]: time="2025-09-12T17:43:12.640224026Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:43:12.654432 containerd[1909]: time="2025-09-12T17:43:12.654376224Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389"
Sep 12 17:43:12.658803 containerd[1909]: time="2025-09-12T17:43:12.658695954Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:43:12.663900 containerd[1909]: time="2025-09-12T17:43:12.663782945Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:43:12.665068 containerd[1909]: time="2025-09-12T17:43:12.664423605Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.149230232s"
Sep 12 17:43:12.665068 containerd[1909]: time="2025-09-12T17:43:12.664469903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\""
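Containerd times the pull internally, so the quoted "in 3.149230232s" is not literally the difference between the surrounding journal timestamps, but it should agree to within tens of microseconds, and it does. A quick cross-check of the PullImage and Pulled entries above (timestamps copied from the log):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Start and end of the typha pull, from the two containerd entries above.
    	start, _ := time.Parse(time.RFC3339Nano, "2025-09-12T17:43:09.515147231Z") // PullImage
    	done, _ := time.Parse(time.RFC3339Nano, "2025-09-12T17:43:12.664423605Z")  // Pulled
    	fmt.Println(done.Sub(start)) // 3.149276374s, about 46µs above the reported 3.149230232s
    }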
\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 12 17:43:12.667346 containerd[1909]: time="2025-09-12T17:43:12.667306473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 17:43:12.690581 containerd[1909]: time="2025-09-12T17:43:12.690535995Z" level=info msg="CreateContainer within sandbox \"8237080e0783d1da9de93026e08c4f42d6647f355e15cddb3eb7f3a2085bf86b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 17:43:12.711949 containerd[1909]: time="2025-09-12T17:43:12.711825400Z" level=info msg="Container 22979a38b80eb8debdd378a76f4b043205d51e90577a5bd2f1ba1b5f9d92c4e8: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:43:12.728719 containerd[1909]: time="2025-09-12T17:43:12.728564829Z" level=info msg="CreateContainer within sandbox \"8237080e0783d1da9de93026e08c4f42d6647f355e15cddb3eb7f3a2085bf86b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"22979a38b80eb8debdd378a76f4b043205d51e90577a5bd2f1ba1b5f9d92c4e8\"" Sep 12 17:43:12.730149 containerd[1909]: time="2025-09-12T17:43:12.730104756Z" level=info msg="StartContainer for \"22979a38b80eb8debdd378a76f4b043205d51e90577a5bd2f1ba1b5f9d92c4e8\"" Sep 12 17:43:12.732213 containerd[1909]: time="2025-09-12T17:43:12.732181775Z" level=info msg="connecting to shim 22979a38b80eb8debdd378a76f4b043205d51e90577a5bd2f1ba1b5f9d92c4e8" address="unix:///run/containerd/s/b34de46c872f369059eb2c193e55ada3f8792bd7fe49494c0a414e9af7c8880f" protocol=ttrpc version=3 Sep 12 17:43:12.754221 systemd[1]: Started cri-containerd-22979a38b80eb8debdd378a76f4b043205d51e90577a5bd2f1ba1b5f9d92c4e8.scope - libcontainer container 22979a38b80eb8debdd378a76f4b043205d51e90577a5bd2f1ba1b5f9d92c4e8. Sep 12 17:43:12.827944 containerd[1909]: time="2025-09-12T17:43:12.827528573Z" level=info msg="StartContainer for \"22979a38b80eb8debdd378a76f4b043205d51e90577a5bd2f1ba1b5f9d92c4e8\" returns successfully" Sep 12 17:43:13.324139 kubelet[3325]: E0912 17:43:13.323897 3325 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghf2w" podUID="a5151dfa-6356-4409-97e2-1942a154fe88" Sep 12 17:43:13.506945 kubelet[3325]: E0912 17:43:13.506857 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:43:13.506945 kubelet[3325]: W0912 17:43:13.506887 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:43:13.507791 kubelet[3325]: E0912 17:43:13.506912 3325 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Sep 12 17:43:14.041432 containerd[1909]: time="2025-09-12T17:43:14.041378170Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:43:14.043135 containerd[1909]: time="2025-09-12T17:43:14.043074259Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660"
Sep 12 17:43:14.045494 containerd[1909]: time="2025-09-12T17:43:14.045423705Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:43:14.049158 containerd[1909]: time="2025-09-12T17:43:14.049115135Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:43:14.049818 containerd[1909]: time="2025-09-12T17:43:14.049787212Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.382434635s"
Sep 12 17:43:14.049954 containerd[1909]: time="2025-09-12T17:43:14.049936968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\""
Sep 12 17:43:14.053342 containerd[1909]: time="2025-09-12T17:43:14.053305393Z" level=info msg="CreateContainer within sandbox \"e254db1abf5677803d937952b104599c4436cf6bb90b7633105f05e12179bc50\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 12 17:43:14.107561 containerd[1909]: time="2025-09-12T17:43:14.107359656Z" level=info msg="Container a96cfc2d6ab050f05978b6c54ba35eaadf47c34f395540d2ff8596b986e12c50: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:43:14.130451 containerd[1909]: time="2025-09-12T17:43:14.130403181Z" level=info msg="CreateContainer within sandbox \"e254db1abf5677803d937952b104599c4436cf6bb90b7633105f05e12179bc50\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a96cfc2d6ab050f05978b6c54ba35eaadf47c34f395540d2ff8596b986e12c50\""
Sep 12 17:43:14.131194 containerd[1909]: time="2025-09-12T17:43:14.131158095Z" level=info msg="StartContainer for \"a96cfc2d6ab050f05978b6c54ba35eaadf47c34f395540d2ff8596b986e12c50\""
Sep 12 17:43:14.133292 containerd[1909]: time="2025-09-12T17:43:14.133238448Z" level=info msg="connecting to shim a96cfc2d6ab050f05978b6c54ba35eaadf47c34f395540d2ff8596b986e12c50" address="unix:///run/containerd/s/47e9e46045315b5fde24dc8cb4d9d6bc08f884221597235acba4dad62eac5b64" protocol=ttrpc version=3
Sep 12 17:43:14.173470 systemd[1]: Started cri-containerd-a96cfc2d6ab050f05978b6c54ba35eaadf47c34f395540d2ff8596b986e12c50.scope - libcontainer container a96cfc2d6ab050f05978b6c54ba35eaadf47c34f395540d2ff8596b986e12c50.
Sep 12 17:43:14.232281 containerd[1909]: time="2025-09-12T17:43:14.232223138Z" level=info msg="StartContainer for \"a96cfc2d6ab050f05978b6c54ba35eaadf47c34f395540d2ff8596b986e12c50\" returns successfully"
Sep 12 17:43:14.239862 systemd[1]: cri-containerd-a96cfc2d6ab050f05978b6c54ba35eaadf47c34f395540d2ff8596b986e12c50.scope: Deactivated successfully.
Sep 12 17:43:14.241076 systemd[1]: cri-containerd-a96cfc2d6ab050f05978b6c54ba35eaadf47c34f395540d2ff8596b986e12c50.scope: Consumed 35ms CPU time, 6M memory peak, 4.6M written to disk.
Sep 12 17:43:14.263797 containerd[1909]: time="2025-09-12T17:43:14.263705702Z" level=info msg="received exit event container_id:\"a96cfc2d6ab050f05978b6c54ba35eaadf47c34f395540d2ff8596b986e12c50\" id:\"a96cfc2d6ab050f05978b6c54ba35eaadf47c34f395540d2ff8596b986e12c50\" pid:4267 exited_at:{seconds:1757698994 nanos:242805085}"
Sep 12 17:43:14.283956 containerd[1909]: time="2025-09-12T17:43:14.283528438Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a96cfc2d6ab050f05978b6c54ba35eaadf47c34f395540d2ff8596b986e12c50\" id:\"a96cfc2d6ab050f05978b6c54ba35eaadf47c34f395540d2ff8596b986e12c50\" pid:4267 exited_at:{seconds:1757698994 nanos:242805085}"
Sep 12 17:43:14.307098 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a96cfc2d6ab050f05978b6c54ba35eaadf47c34f395540d2ff8596b986e12c50-rootfs.mount: Deactivated successfully.
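The exited_at field in the exit events above is a protobuf timestamp (seconds and nanoseconds since the Unix epoch) rather than formatted text; converting it reproduces the surrounding journal times, e.g. for the flexvol-driver exit:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// exited_at:{seconds:1757698994 nanos:242805085} from the TaskExit event above.
    	fmt.Println(time.Unix(1757698994, 242805085).UTC())
    	// 2025-09-12 17:43:14.242805085 +0000 UTC, consistent with the
    	// 17:43:14.2xx scope-teardown entries in the journal.
    }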
Sep 12 17:43:14.491598 kubelet[3325]: I0912 17:43:14.491567 3325 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:43:14.494181 containerd[1909]: time="2025-09-12T17:43:14.494133169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 17:43:14.515396 kubelet[3325]: I0912 17:43:14.515329 3325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-86979bc468-n5whl" podStartSLOduration=3.362440005 podStartE2EDuration="6.515305875s" podCreationTimestamp="2025-09-12 17:43:08 +0000 UTC" firstStartedPulling="2025-09-12 17:43:09.513955929 +0000 UTC m=+21.352148240" lastFinishedPulling="2025-09-12 17:43:12.666821803 +0000 UTC m=+24.505014110" observedRunningTime="2025-09-12 17:43:13.510260085 +0000 UTC m=+25.348452403" watchObservedRunningTime="2025-09-12 17:43:14.515305875 +0000 UTC m=+26.353498195" Sep 12 17:43:15.323832 kubelet[3325]: E0912 17:43:15.323762 3325 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghf2w" podUID="a5151dfa-6356-4409-97e2-1942a154fe88" Sep 12 17:43:17.341967 kubelet[3325]: E0912 17:43:17.341535 3325 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghf2w" podUID="a5151dfa-6356-4409-97e2-1942a154fe88" Sep 12 17:43:18.270908 kubelet[3325]: I0912 17:43:18.270777 3325 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:43:19.158347 containerd[1909]: time="2025-09-12T17:43:19.158294158Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:19.160269 containerd[1909]: time="2025-09-12T17:43:19.160220700Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 12 17:43:19.163954 containerd[1909]: time="2025-09-12T17:43:19.162881197Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:19.167163 containerd[1909]: time="2025-09-12T17:43:19.167120262Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:19.167528 containerd[1909]: time="2025-09-12T17:43:19.167495916Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 4.673319816s" Sep 12 17:43:19.167606 containerd[1909]: time="2025-09-12T17:43:19.167531972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 12 17:43:19.185824 containerd[1909]: time="2025-09-12T17:43:19.185764706Z" level=info msg="CreateContainer within 
sandbox \"e254db1abf5677803d937952b104599c4436cf6bb90b7633105f05e12179bc50\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 17:43:19.206017 containerd[1909]: time="2025-09-12T17:43:19.204608705Z" level=info msg="Container f2afd7860068723230c2f86cbe882e9b40258c87e1a22379435776bfa5ecfe31: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:43:19.217210 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount658251908.mount: Deactivated successfully. Sep 12 17:43:19.233528 containerd[1909]: time="2025-09-12T17:43:19.233472974Z" level=info msg="CreateContainer within sandbox \"e254db1abf5677803d937952b104599c4436cf6bb90b7633105f05e12179bc50\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f2afd7860068723230c2f86cbe882e9b40258c87e1a22379435776bfa5ecfe31\"" Sep 12 17:43:19.234980 containerd[1909]: time="2025-09-12T17:43:19.234247488Z" level=info msg="StartContainer for \"f2afd7860068723230c2f86cbe882e9b40258c87e1a22379435776bfa5ecfe31\"" Sep 12 17:43:19.236395 containerd[1909]: time="2025-09-12T17:43:19.236348953Z" level=info msg="connecting to shim f2afd7860068723230c2f86cbe882e9b40258c87e1a22379435776bfa5ecfe31" address="unix:///run/containerd/s/47e9e46045315b5fde24dc8cb4d9d6bc08f884221597235acba4dad62eac5b64" protocol=ttrpc version=3 Sep 12 17:43:19.270470 systemd[1]: Started cri-containerd-f2afd7860068723230c2f86cbe882e9b40258c87e1a22379435776bfa5ecfe31.scope - libcontainer container f2afd7860068723230c2f86cbe882e9b40258c87e1a22379435776bfa5ecfe31. Sep 12 17:43:19.333692 containerd[1909]: time="2025-09-12T17:43:19.333563964Z" level=info msg="StartContainer for \"f2afd7860068723230c2f86cbe882e9b40258c87e1a22379435776bfa5ecfe31\" returns successfully" Sep 12 17:43:19.345263 kubelet[3325]: E0912 17:43:19.345213 3325 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghf2w" podUID="a5151dfa-6356-4409-97e2-1942a154fe88" Sep 12 17:43:20.397575 systemd[1]: cri-containerd-f2afd7860068723230c2f86cbe882e9b40258c87e1a22379435776bfa5ecfe31.scope: Deactivated successfully. Sep 12 17:43:20.397949 systemd[1]: cri-containerd-f2afd7860068723230c2f86cbe882e9b40258c87e1a22379435776bfa5ecfe31.scope: Consumed 592ms CPU time, 175.6M memory peak, 6.1M read from disk, 171.3M written to disk. Sep 12 17:43:20.495107 kubelet[3325]: I0912 17:43:20.495060 3325 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 17:43:20.513804 containerd[1909]: time="2025-09-12T17:43:20.513744654Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2afd7860068723230c2f86cbe882e9b40258c87e1a22379435776bfa5ecfe31\" id:\"f2afd7860068723230c2f86cbe882e9b40258c87e1a22379435776bfa5ecfe31\" pid:4327 exited_at:{seconds:1757699000 nanos:505703742}" Sep 12 17:43:20.514575 containerd[1909]: time="2025-09-12T17:43:20.514386534Z" level=info msg="received exit event container_id:\"f2afd7860068723230c2f86cbe882e9b40258c87e1a22379435776bfa5ecfe31\" id:\"f2afd7860068723230c2f86cbe882e9b40258c87e1a22379435776bfa5ecfe31\" pid:4327 exited_at:{seconds:1757699000 nanos:505703742}" Sep 12 17:43:20.564506 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2afd7860068723230c2f86cbe882e9b40258c87e1a22379435776bfa5ecfe31-rootfs.mount: Deactivated successfully. 
Sep 12 17:43:20.603241 systemd[1]: Created slice kubepods-burstable-podfa257bab_3021_4bf2_9b9d_56f7cf83a781.slice - libcontainer container kubepods-burstable-podfa257bab_3021_4bf2_9b9d_56f7cf83a781.slice. Sep 12 17:43:20.619967 systemd[1]: Created slice kubepods-besteffort-pod14280669_80b2_47e3_8948_1a09fa3ad5a8.slice - libcontainer container kubepods-besteffort-pod14280669_80b2_47e3_8948_1a09fa3ad5a8.slice. Sep 12 17:43:20.633867 systemd[1]: Created slice kubepods-besteffort-podc4afc88f_9627_4122_bbaf_e9f08ff5e1e2.slice - libcontainer container kubepods-besteffort-podc4afc88f_9627_4122_bbaf_e9f08ff5e1e2.slice. Sep 12 17:43:20.646475 systemd[1]: Created slice kubepods-burstable-pod97abfd70_fe53_4bd7_8261_3ab34278012c.slice - libcontainer container kubepods-burstable-pod97abfd70_fe53_4bd7_8261_3ab34278012c.slice. Sep 12 17:43:20.660844 systemd[1]: Created slice kubepods-besteffort-pod7496053f_e228_4c34_a1aa_75f4b5ba00bb.slice - libcontainer container kubepods-besteffort-pod7496053f_e228_4c34_a1aa_75f4b5ba00bb.slice. Sep 12 17:43:20.672904 systemd[1]: Created slice kubepods-besteffort-pod1edbce95_ac87_4b0e_b3d0_642f106d2276.slice - libcontainer container kubepods-besteffort-pod1edbce95_ac87_4b0e_b3d0_642f106d2276.slice. Sep 12 17:43:20.682345 systemd[1]: Created slice kubepods-besteffort-podb6b62184_04f0_478f_9588_1262565a1301.slice - libcontainer container kubepods-besteffort-podb6b62184_04f0_478f_9588_1262565a1301.slice. Sep 12 17:43:20.684692 kubelet[3325]: I0912 17:43:20.683733 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fa257bab-3021-4bf2-9b9d-56f7cf83a781-config-volume\") pod \"coredns-668d6bf9bc-88rqr\" (UID: \"fa257bab-3021-4bf2-9b9d-56f7cf83a781\") " pod="kube-system/coredns-668d6bf9bc-88rqr" Sep 12 17:43:20.684692 kubelet[3325]: I0912 17:43:20.683795 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwmsg\" (UniqueName: \"kubernetes.io/projected/c4afc88f-9627-4122-bbaf-e9f08ff5e1e2-kube-api-access-mwmsg\") pod \"calico-kube-controllers-55b4977774-rqh99\" (UID: \"c4afc88f-9627-4122-bbaf-e9f08ff5e1e2\") " pod="calico-system/calico-kube-controllers-55b4977774-rqh99" Sep 12 17:43:20.684692 kubelet[3325]: I0912 17:43:20.683830 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97abfd70-fe53-4bd7-8261-3ab34278012c-config-volume\") pod \"coredns-668d6bf9bc-lq2tr\" (UID: \"97abfd70-fe53-4bd7-8261-3ab34278012c\") " pod="kube-system/coredns-668d6bf9bc-lq2tr" Sep 12 17:43:20.684692 kubelet[3325]: I0912 17:43:20.683873 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn5dc\" (UniqueName: \"kubernetes.io/projected/fa257bab-3021-4bf2-9b9d-56f7cf83a781-kube-api-access-rn5dc\") pod \"coredns-668d6bf9bc-88rqr\" (UID: \"fa257bab-3021-4bf2-9b9d-56f7cf83a781\") " pod="kube-system/coredns-668d6bf9bc-88rqr" Sep 12 17:43:20.684692 kubelet[3325]: I0912 17:43:20.683907 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/14280669-80b2-47e3-8948-1a09fa3ad5a8-calico-apiserver-certs\") pod \"calico-apiserver-77fc447859-l6lrq\" (UID: \"14280669-80b2-47e3-8948-1a09fa3ad5a8\") " pod="calico-apiserver/calico-apiserver-77fc447859-l6lrq" Sep 12 
17:43:20.685062 kubelet[3325]: I0912 17:43:20.683950 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7b7h\" (UniqueName: \"kubernetes.io/projected/14280669-80b2-47e3-8948-1a09fa3ad5a8-kube-api-access-d7b7h\") pod \"calico-apiserver-77fc447859-l6lrq\" (UID: \"14280669-80b2-47e3-8948-1a09fa3ad5a8\") " pod="calico-apiserver/calico-apiserver-77fc447859-l6lrq" Sep 12 17:43:20.685062 kubelet[3325]: I0912 17:43:20.683977 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4afc88f-9627-4122-bbaf-e9f08ff5e1e2-tigera-ca-bundle\") pod \"calico-kube-controllers-55b4977774-rqh99\" (UID: \"c4afc88f-9627-4122-bbaf-e9f08ff5e1e2\") " pod="calico-system/calico-kube-controllers-55b4977774-rqh99" Sep 12 17:43:20.685062 kubelet[3325]: I0912 17:43:20.684026 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppbbd\" (UniqueName: \"kubernetes.io/projected/97abfd70-fe53-4bd7-8261-3ab34278012c-kube-api-access-ppbbd\") pod \"coredns-668d6bf9bc-lq2tr\" (UID: \"97abfd70-fe53-4bd7-8261-3ab34278012c\") " pod="kube-system/coredns-668d6bf9bc-lq2tr" Sep 12 17:43:20.695898 systemd[1]: Created slice kubepods-besteffort-podc4bbfb4c_8cf3_4210_98c5_53617abee5f9.slice - libcontainer container kubepods-besteffort-podc4bbfb4c_8cf3_4210_98c5_53617abee5f9.slice. Sep 12 17:43:20.784490 kubelet[3325]: I0912 17:43:20.784439 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1edbce95-ac87-4b0e-b3d0-642f106d2276-calico-apiserver-certs\") pod \"calico-apiserver-77fc447859-2slhn\" (UID: \"1edbce95-ac87-4b0e-b3d0-642f106d2276\") " pod="calico-apiserver/calico-apiserver-77fc447859-2slhn" Sep 12 17:43:20.784657 kubelet[3325]: I0912 17:43:20.784504 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkdzk\" (UniqueName: \"kubernetes.io/projected/7496053f-e228-4c34-a1aa-75f4b5ba00bb-kube-api-access-zkdzk\") pod \"whisker-794bcf8769-xdsh6\" (UID: \"7496053f-e228-4c34-a1aa-75f4b5ba00bb\") " pod="calico-system/whisker-794bcf8769-xdsh6" Sep 12 17:43:20.784657 kubelet[3325]: I0912 17:43:20.784563 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97lm9\" (UniqueName: \"kubernetes.io/projected/c4bbfb4c-8cf3-4210-98c5-53617abee5f9-kube-api-access-97lm9\") pod \"calico-apiserver-6bffdd5bf7-slkqb\" (UID: \"c4bbfb4c-8cf3-4210-98c5-53617abee5f9\") " pod="calico-apiserver/calico-apiserver-6bffdd5bf7-slkqb" Sep 12 17:43:20.784657 kubelet[3325]: I0912 17:43:20.784586 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6b62184-04f0-478f-9588-1262565a1301-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-mp5hl\" (UID: \"b6b62184-04f0-478f-9588-1262565a1301\") " pod="calico-system/goldmane-54d579b49d-mp5hl" Sep 12 17:43:20.784657 kubelet[3325]: I0912 17:43:20.784625 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7496053f-e228-4c34-a1aa-75f4b5ba00bb-whisker-backend-key-pair\") pod \"whisker-794bcf8769-xdsh6\" (UID: \"7496053f-e228-4c34-a1aa-75f4b5ba00bb\") 
" pod="calico-system/whisker-794bcf8769-xdsh6" Sep 12 17:43:20.784850 kubelet[3325]: I0912 17:43:20.784667 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b6b62184-04f0-478f-9588-1262565a1301-goldmane-key-pair\") pod \"goldmane-54d579b49d-mp5hl\" (UID: \"b6b62184-04f0-478f-9588-1262565a1301\") " pod="calico-system/goldmane-54d579b49d-mp5hl" Sep 12 17:43:20.784850 kubelet[3325]: I0912 17:43:20.784700 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc6h9\" (UniqueName: \"kubernetes.io/projected/1edbce95-ac87-4b0e-b3d0-642f106d2276-kube-api-access-wc6h9\") pod \"calico-apiserver-77fc447859-2slhn\" (UID: \"1edbce95-ac87-4b0e-b3d0-642f106d2276\") " pod="calico-apiserver/calico-apiserver-77fc447859-2slhn" Sep 12 17:43:20.784850 kubelet[3325]: I0912 17:43:20.784727 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c4bbfb4c-8cf3-4210-98c5-53617abee5f9-calico-apiserver-certs\") pod \"calico-apiserver-6bffdd5bf7-slkqb\" (UID: \"c4bbfb4c-8cf3-4210-98c5-53617abee5f9\") " pod="calico-apiserver/calico-apiserver-6bffdd5bf7-slkqb" Sep 12 17:43:20.784850 kubelet[3325]: I0912 17:43:20.784790 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7496053f-e228-4c34-a1aa-75f4b5ba00bb-whisker-ca-bundle\") pod \"whisker-794bcf8769-xdsh6\" (UID: \"7496053f-e228-4c34-a1aa-75f4b5ba00bb\") " pod="calico-system/whisker-794bcf8769-xdsh6" Sep 12 17:43:20.784850 kubelet[3325]: I0912 17:43:20.784818 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6b62184-04f0-478f-9588-1262565a1301-config\") pod \"goldmane-54d579b49d-mp5hl\" (UID: \"b6b62184-04f0-478f-9588-1262565a1301\") " pod="calico-system/goldmane-54d579b49d-mp5hl" Sep 12 17:43:20.785183 kubelet[3325]: I0912 17:43:20.784868 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg6hc\" (UniqueName: \"kubernetes.io/projected/b6b62184-04f0-478f-9588-1262565a1301-kube-api-access-zg6hc\") pod \"goldmane-54d579b49d-mp5hl\" (UID: \"b6b62184-04f0-478f-9588-1262565a1301\") " pod="calico-system/goldmane-54d579b49d-mp5hl" Sep 12 17:43:20.926276 containerd[1909]: time="2025-09-12T17:43:20.926158929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-88rqr,Uid:fa257bab-3021-4bf2-9b9d-56f7cf83a781,Namespace:kube-system,Attempt:0,}" Sep 12 17:43:20.933146 containerd[1909]: time="2025-09-12T17:43:20.932670424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77fc447859-l6lrq,Uid:14280669-80b2-47e3-8948-1a09fa3ad5a8,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:43:20.947988 containerd[1909]: time="2025-09-12T17:43:20.947059721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55b4977774-rqh99,Uid:c4afc88f-9627-4122-bbaf-e9f08ff5e1e2,Namespace:calico-system,Attempt:0,}" Sep 12 17:43:20.970369 containerd[1909]: time="2025-09-12T17:43:20.970307510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-794bcf8769-xdsh6,Uid:7496053f-e228-4c34-a1aa-75f4b5ba00bb,Namespace:calico-system,Attempt:0,}" Sep 12 17:43:20.970815 
containerd[1909]: time="2025-09-12T17:43:20.970326620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lq2tr,Uid:97abfd70-fe53-4bd7-8261-3ab34278012c,Namespace:kube-system,Attempt:0,}" Sep 12 17:43:20.980681 containerd[1909]: time="2025-09-12T17:43:20.980517363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77fc447859-2slhn,Uid:1edbce95-ac87-4b0e-b3d0-642f106d2276,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:43:21.007866 containerd[1909]: time="2025-09-12T17:43:21.007833605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bffdd5bf7-slkqb,Uid:c4bbfb4c-8cf3-4210-98c5-53617abee5f9,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:43:21.008325 containerd[1909]: time="2025-09-12T17:43:21.008140387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-mp5hl,Uid:b6b62184-04f0-478f-9588-1262565a1301,Namespace:calico-system,Attempt:0,}" Sep 12 17:43:21.334456 systemd[1]: Created slice kubepods-besteffort-poda5151dfa_6356_4409_97e2_1942a154fe88.slice - libcontainer container kubepods-besteffort-poda5151dfa_6356_4409_97e2_1942a154fe88.slice. Sep 12 17:43:21.340234 containerd[1909]: time="2025-09-12T17:43:21.340192224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghf2w,Uid:a5151dfa-6356-4409-97e2-1942a154fe88,Namespace:calico-system,Attempt:0,}" Sep 12 17:43:21.372491 containerd[1909]: time="2025-09-12T17:43:21.372432183Z" level=error msg="Failed to destroy network for sandbox \"54a7aac4c85ccd2845abca906584db5faf72335775bde0c9b8c799686557aa36\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:43:21.373605 containerd[1909]: time="2025-09-12T17:43:21.373528333Z" level=error msg="Failed to destroy network for sandbox \"5522ef1e0581eff936932e5d4ac94cdd299cf3008577893a2a3a43a569cf859b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:43:21.375054 containerd[1909]: time="2025-09-12T17:43:21.375006166Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77fc447859-l6lrq,Uid:14280669-80b2-47e3-8948-1a09fa3ad5a8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"54a7aac4c85ccd2845abca906584db5faf72335775bde0c9b8c799686557aa36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:43:21.375946 kubelet[3325]: E0912 17:43:21.375628 3325 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54a7aac4c85ccd2845abca906584db5faf72335775bde0c9b8c799686557aa36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:43:21.375946 kubelet[3325]: E0912 17:43:21.375719 3325 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54a7aac4c85ccd2845abca906584db5faf72335775bde0c9b8c799686557aa36\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77fc447859-l6lrq" Sep 12 17:43:21.375946 kubelet[3325]: E0912 17:43:21.375756 3325 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54a7aac4c85ccd2845abca906584db5faf72335775bde0c9b8c799686557aa36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77fc447859-l6lrq" Sep 12 17:43:21.376121 kubelet[3325]: E0912 17:43:21.375816 3325 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77fc447859-l6lrq_calico-apiserver(14280669-80b2-47e3-8948-1a09fa3ad5a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77fc447859-l6lrq_calico-apiserver(14280669-80b2-47e3-8948-1a09fa3ad5a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"54a7aac4c85ccd2845abca906584db5faf72335775bde0c9b8c799686557aa36\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77fc447859-l6lrq" podUID="14280669-80b2-47e3-8948-1a09fa3ad5a8" Sep 12 17:43:21.380606 containerd[1909]: time="2025-09-12T17:43:21.379003381Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bffdd5bf7-slkqb,Uid:c4bbfb4c-8cf3-4210-98c5-53617abee5f9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5522ef1e0581eff936932e5d4ac94cdd299cf3008577893a2a3a43a569cf859b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:43:21.423613 containerd[1909]: time="2025-09-12T17:43:21.423167153Z" level=error msg="Failed to destroy network for sandbox \"87bcbffd70814190aee0342ff5aa5bf9dbebdb9fc61ed8ef87cb14e6ea4ec1bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:43:21.427094 containerd[1909]: time="2025-09-12T17:43:21.427043070Z" level=error msg="Failed to destroy network for sandbox \"f039db8eda676391278668134a5a2671f584cc7b3778a56182a5d3435f798735\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:43:21.427722 kubelet[3325]: E0912 17:43:21.427686 3325 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5522ef1e0581eff936932e5d4ac94cdd299cf3008577893a2a3a43a569cf859b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:43:21.428851 kubelet[3325]: E0912 17:43:21.428819 3325 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"5522ef1e0581eff936932e5d4ac94cdd299cf3008577893a2a3a43a569cf859b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bffdd5bf7-slkqb" Sep 12 17:43:21.429425 containerd[1909]: time="2025-09-12T17:43:21.428834435Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77fc447859-2slhn,Uid:1edbce95-ac87-4b0e-b3d0-642f106d2276,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"87bcbffd70814190aee0342ff5aa5bf9dbebdb9fc61ed8ef87cb14e6ea4ec1bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:43:21.430191 kubelet[3325]: E0912 17:43:21.430009 3325 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87bcbffd70814190aee0342ff5aa5bf9dbebdb9fc61ed8ef87cb14e6ea4ec1bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:43:21.430191 kubelet[3325]: E0912 17:43:21.430074 3325 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87bcbffd70814190aee0342ff5aa5bf9dbebdb9fc61ed8ef87cb14e6ea4ec1bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77fc447859-2slhn" Sep 12 17:43:21.430191 kubelet[3325]: E0912 17:43:21.429592 3325 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5522ef1e0581eff936932e5d4ac94cdd299cf3008577893a2a3a43a569cf859b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bffdd5bf7-slkqb" Sep 12 17:43:21.430191 kubelet[3325]: E0912 17:43:21.430108 3325 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87bcbffd70814190aee0342ff5aa5bf9dbebdb9fc61ed8ef87cb14e6ea4ec1bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77fc447859-2slhn" Sep 12 17:43:21.431018 kubelet[3325]: E0912 17:43:21.430145 3325 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77fc447859-2slhn_calico-apiserver(1edbce95-ac87-4b0e-b3d0-642f106d2276)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77fc447859-2slhn_calico-apiserver(1edbce95-ac87-4b0e-b3d0-642f106d2276)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87bcbffd70814190aee0342ff5aa5bf9dbebdb9fc61ed8ef87cb14e6ea4ec1bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77fc447859-2slhn" podUID="1edbce95-ac87-4b0e-b3d0-642f106d2276" Sep 12 17:43:21.431018 kubelet[3325]: E0912 17:43:21.430145 3325 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6bffdd5bf7-slkqb_calico-apiserver(c4bbfb4c-8cf3-4210-98c5-53617abee5f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6bffdd5bf7-slkqb_calico-apiserver(c4bbfb4c-8cf3-4210-98c5-53617abee5f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5522ef1e0581eff936932e5d4ac94cdd299cf3008577893a2a3a43a569cf859b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6bffdd5bf7-slkqb" podUID="c4bbfb4c-8cf3-4210-98c5-53617abee5f9" Sep 12 17:43:21.433120 containerd[1909]: time="2025-09-12T17:43:21.432525706Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-88rqr,Uid:fa257bab-3021-4bf2-9b9d-56f7cf83a781,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f039db8eda676391278668134a5a2671f584cc7b3778a56182a5d3435f798735\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:43:21.433259 kubelet[3325]: E0912 17:43:21.432725 3325 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f039db8eda676391278668134a5a2671f584cc7b3778a56182a5d3435f798735\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:43:21.433259 kubelet[3325]: E0912 17:43:21.432776 3325 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f039db8eda676391278668134a5a2671f584cc7b3778a56182a5d3435f798735\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-88rqr" Sep 12 17:43:21.433259 kubelet[3325]: E0912 17:43:21.432799 3325 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f039db8eda676391278668134a5a2671f584cc7b3778a56182a5d3435f798735\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-88rqr" Sep 12 17:43:21.433435 kubelet[3325]: E0912 17:43:21.432849 3325 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-88rqr_kube-system(fa257bab-3021-4bf2-9b9d-56f7cf83a781)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-88rqr_kube-system(fa257bab-3021-4bf2-9b9d-56f7cf83a781)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f039db8eda676391278668134a5a2671f584cc7b3778a56182a5d3435f798735\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-88rqr" podUID="fa257bab-3021-4bf2-9b9d-56f7cf83a781" Sep 12 17:43:21.435588 containerd[1909]: time="2025-09-12T17:43:21.435503896Z" level=error msg="Failed to destroy network for sandbox \"ae3816a0d6350004d4a66356195faec63047df2165de3153305395ec042a1f39\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:43:21.440098 containerd[1909]: time="2025-09-12T17:43:21.440040529Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lq2tr,Uid:97abfd70-fe53-4bd7-8261-3ab34278012c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae3816a0d6350004d4a66356195faec63047df2165de3153305395ec042a1f39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:43:21.440313 kubelet[3325]: E0912 17:43:21.440269 3325 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae3816a0d6350004d4a66356195faec63047df2165de3153305395ec042a1f39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:43:21.440399 kubelet[3325]: E0912 17:43:21.440332 3325 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae3816a0d6350004d4a66356195faec63047df2165de3153305395ec042a1f39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lq2tr" Sep 12 17:43:21.440399 kubelet[3325]: E0912 17:43:21.440362 3325 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae3816a0d6350004d4a66356195faec63047df2165de3153305395ec042a1f39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lq2tr" Sep 12 17:43:21.440497 kubelet[3325]: E0912 17:43:21.440411 3325 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-lq2tr_kube-system(97abfd70-fe53-4bd7-8261-3ab34278012c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-lq2tr_kube-system(97abfd70-fe53-4bd7-8261-3ab34278012c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae3816a0d6350004d4a66356195faec63047df2165de3153305395ec042a1f39\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lq2tr" podUID="97abfd70-fe53-4bd7-8261-3ab34278012c" Sep 12 17:43:21.451140 containerd[1909]: time="2025-09-12T17:43:21.451071802Z" level=error msg="Failed to destroy network for sandbox 
\"21d8ef131371fa630b14d9446e5d0991f1989a80eed73a34b00a0cf10e883b62\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:43:21.454225 containerd[1909]: time="2025-09-12T17:43:21.454170034Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55b4977774-rqh99,Uid:c4afc88f-9627-4122-bbaf-e9f08ff5e1e2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"21d8ef131371fa630b14d9446e5d0991f1989a80eed73a34b00a0cf10e883b62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:43:21.454703 kubelet[3325]: E0912 17:43:21.454665 3325 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21d8ef131371fa630b14d9446e5d0991f1989a80eed73a34b00a0cf10e883b62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:43:21.455301 kubelet[3325]: E0912 17:43:21.454854 3325 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21d8ef131371fa630b14d9446e5d0991f1989a80eed73a34b00a0cf10e883b62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55b4977774-rqh99" Sep 12 17:43:21.455949 kubelet[3325]: E0912 17:43:21.455431 3325 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21d8ef131371fa630b14d9446e5d0991f1989a80eed73a34b00a0cf10e883b62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55b4977774-rqh99" Sep 12 17:43:21.455949 kubelet[3325]: E0912 17:43:21.455509 3325 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55b4977774-rqh99_calico-system(c4afc88f-9627-4122-bbaf-e9f08ff5e1e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55b4977774-rqh99_calico-system(c4afc88f-9627-4122-bbaf-e9f08ff5e1e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21d8ef131371fa630b14d9446e5d0991f1989a80eed73a34b00a0cf10e883b62\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55b4977774-rqh99" podUID="c4afc88f-9627-4122-bbaf-e9f08ff5e1e2" Sep 12 17:43:21.465436 containerd[1909]: time="2025-09-12T17:43:21.465384277Z" level=error msg="Failed to destroy network for sandbox \"f9826267cbb7d4c68bca029e702a74cf51f2422280ea0251b5f934b43e7a785a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Sep 12 17:43:21.466662 containerd[1909]: time="2025-09-12T17:43:21.466526813Z" level=error msg="Failed to destroy network for sandbox \"e999dd3dfccefc43db07ef279f950fcc0c4df2467901323f0a91de1524716f7b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:43:21.467911 containerd[1909]: time="2025-09-12T17:43:21.467867733Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-mp5hl,Uid:b6b62184-04f0-478f-9588-1262565a1301,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9826267cbb7d4c68bca029e702a74cf51f2422280ea0251b5f934b43e7a785a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:43:21.468451 kubelet[3325]: E0912 17:43:21.468390 3325 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9826267cbb7d4c68bca029e702a74cf51f2422280ea0251b5f934b43e7a785a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:43:21.469011 kubelet[3325]: E0912 17:43:21.468558 3325 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9826267cbb7d4c68bca029e702a74cf51f2422280ea0251b5f934b43e7a785a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-mp5hl" Sep 12 17:43:21.469011 kubelet[3325]: E0912 17:43:21.468590 3325 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9826267cbb7d4c68bca029e702a74cf51f2422280ea0251b5f934b43e7a785a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-mp5hl" Sep 12 17:43:21.469011 kubelet[3325]: E0912 17:43:21.468655 3325 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-mp5hl_calico-system(b6b62184-04f0-478f-9588-1262565a1301)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-mp5hl_calico-system(b6b62184-04f0-478f-9588-1262565a1301)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f9826267cbb7d4c68bca029e702a74cf51f2422280ea0251b5f934b43e7a785a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-mp5hl" podUID="b6b62184-04f0-478f-9588-1262565a1301" Sep 12 17:43:21.469228 containerd[1909]: time="2025-09-12T17:43:21.468844459Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-794bcf8769-xdsh6,Uid:7496053f-e228-4c34-a1aa-75f4b5ba00bb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e999dd3dfccefc43db07ef279f950fcc0c4df2467901323f0a91de1524716f7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:43:21.469892 kubelet[3325]: E0912 17:43:21.469848 3325 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e999dd3dfccefc43db07ef279f950fcc0c4df2467901323f0a91de1524716f7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:43:21.470374 kubelet[3325]: E0912 17:43:21.469902 3325 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e999dd3dfccefc43db07ef279f950fcc0c4df2467901323f0a91de1524716f7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-794bcf8769-xdsh6" Sep 12 17:43:21.470374 kubelet[3325]: E0912 17:43:21.470035 3325 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e999dd3dfccefc43db07ef279f950fcc0c4df2467901323f0a91de1524716f7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-794bcf8769-xdsh6" Sep 12 17:43:21.470374 kubelet[3325]: E0912 17:43:21.470117 3325 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-794bcf8769-xdsh6_calico-system(7496053f-e228-4c34-a1aa-75f4b5ba00bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-794bcf8769-xdsh6_calico-system(7496053f-e228-4c34-a1aa-75f4b5ba00bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e999dd3dfccefc43db07ef279f950fcc0c4df2467901323f0a91de1524716f7b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-794bcf8769-xdsh6" podUID="7496053f-e228-4c34-a1aa-75f4b5ba00bb" Sep 12 17:43:21.547801 containerd[1909]: time="2025-09-12T17:43:21.547742122Z" level=error msg="Failed to destroy network for sandbox \"a6c40293ef116a8f84701057e31ece7b91b9feff45a533eede4187bd2bb08b96\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:43:21.549192 containerd[1909]: time="2025-09-12T17:43:21.549096779Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghf2w,Uid:a5151dfa-6356-4409-97e2-1942a154fe88,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6c40293ef116a8f84701057e31ece7b91b9feff45a533eede4187bd2bb08b96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:43:21.549434 kubelet[3325]: E0912 17:43:21.549369 3325 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6c40293ef116a8f84701057e31ece7b91b9feff45a533eede4187bd2bb08b96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:43:21.549434 kubelet[3325]: E0912 17:43:21.549429 3325 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6c40293ef116a8f84701057e31ece7b91b9feff45a533eede4187bd2bb08b96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ghf2w" Sep 12 17:43:21.549733 kubelet[3325]: E0912 17:43:21.549450 3325 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6c40293ef116a8f84701057e31ece7b91b9feff45a533eede4187bd2bb08b96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ghf2w" Sep 12 17:43:21.549733 kubelet[3325]: E0912 17:43:21.549496 3325 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ghf2w_calico-system(a5151dfa-6356-4409-97e2-1942a154fe88)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ghf2w_calico-system(a5151dfa-6356-4409-97e2-1942a154fe88)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6c40293ef116a8f84701057e31ece7b91b9feff45a533eede4187bd2bb08b96\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ghf2w" podUID="a5151dfa-6356-4409-97e2-1942a154fe88" Sep 12 17:43:21.606474 containerd[1909]: time="2025-09-12T17:43:21.605983527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 17:43:29.594047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount341892911.mount: Deactivated successfully. 
Sep 12 17:43:29.649524 containerd[1909]: time="2025-09-12T17:43:29.649357460Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 12 17:43:29.655837 containerd[1909]: time="2025-09-12T17:43:29.655783135Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 8.049721991s" Sep 12 17:43:29.655837 containerd[1909]: time="2025-09-12T17:43:29.655833996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 12 17:43:29.675497 containerd[1909]: time="2025-09-12T17:43:29.636670796Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:29.684508 containerd[1909]: time="2025-09-12T17:43:29.684461189Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:29.685627 containerd[1909]: time="2025-09-12T17:43:29.685597306Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:29.713943 containerd[1909]: time="2025-09-12T17:43:29.713865625Z" level=info msg="CreateContainer within sandbox \"e254db1abf5677803d937952b104599c4436cf6bb90b7633105f05e12179bc50\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 17:43:29.799265 containerd[1909]: time="2025-09-12T17:43:29.799089985Z" level=info msg="Container 92f49f703cd3ae55bdf0834b2488d9c6a8aeca8a5939a5da6c64280444efd3d0: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:43:29.799294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount425137866.mount: Deactivated successfully. Sep 12 17:43:29.843852 containerd[1909]: time="2025-09-12T17:43:29.843804085Z" level=info msg="CreateContainer within sandbox \"e254db1abf5677803d937952b104599c4436cf6bb90b7633105f05e12179bc50\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"92f49f703cd3ae55bdf0834b2488d9c6a8aeca8a5939a5da6c64280444efd3d0\"" Sep 12 17:43:29.844730 containerd[1909]: time="2025-09-12T17:43:29.844484852Z" level=info msg="StartContainer for \"92f49f703cd3ae55bdf0834b2488d9c6a8aeca8a5939a5da6c64280444efd3d0\"" Sep 12 17:43:29.858300 containerd[1909]: time="2025-09-12T17:43:29.858251379Z" level=info msg="connecting to shim 92f49f703cd3ae55bdf0834b2488d9c6a8aeca8a5939a5da6c64280444efd3d0" address="unix:///run/containerd/s/47e9e46045315b5fde24dc8cb4d9d6bc08f884221597235acba4dad62eac5b64" protocol=ttrpc version=3 Sep 12 17:43:30.024131 systemd[1]: Started cri-containerd-92f49f703cd3ae55bdf0834b2488d9c6a8aeca8a5939a5da6c64280444efd3d0.scope - libcontainer container 92f49f703cd3ae55bdf0834b2488d9c6a8aeca8a5939a5da6c64280444efd3d0. Sep 12 17:43:30.140636 containerd[1909]: time="2025-09-12T17:43:30.140516917Z" level=info msg="StartContainer for \"92f49f703cd3ae55bdf0834b2488d9c6a8aeca8a5939a5da6c64280444efd3d0\" returns successfully" Sep 12 17:43:30.283203 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Sep 12 17:43:30.285288 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Sep 12 17:43:30.723841 kubelet[3325]: I0912 17:43:30.723760 3325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-q56bp" podStartSLOduration=1.7836301799999998 podStartE2EDuration="21.723736307s" podCreationTimestamp="2025-09-12 17:43:09 +0000 UTC" firstStartedPulling="2025-09-12 17:43:09.744588151 +0000 UTC m=+21.582780451" lastFinishedPulling="2025-09-12 17:43:29.684694266 +0000 UTC m=+41.522886578" observedRunningTime="2025-09-12 17:43:30.713067548 +0000 UTC m=+42.551259869" watchObservedRunningTime="2025-09-12 17:43:30.723736307 +0000 UTC m=+42.561928628" Sep 12 17:43:30.785192 kubelet[3325]: I0912 17:43:30.784853 3325 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkdzk\" (UniqueName: \"kubernetes.io/projected/7496053f-e228-4c34-a1aa-75f4b5ba00bb-kube-api-access-zkdzk\") pod \"7496053f-e228-4c34-a1aa-75f4b5ba00bb\" (UID: \"7496053f-e228-4c34-a1aa-75f4b5ba00bb\") " Sep 12 17:43:30.786151 kubelet[3325]: I0912 17:43:30.786118 3325 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7496053f-e228-4c34-a1aa-75f4b5ba00bb-whisker-ca-bundle\") pod \"7496053f-e228-4c34-a1aa-75f4b5ba00bb\" (UID: \"7496053f-e228-4c34-a1aa-75f4b5ba00bb\") " Sep 12 17:43:30.786262 kubelet[3325]: I0912 17:43:30.786163 3325 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7496053f-e228-4c34-a1aa-75f4b5ba00bb-whisker-backend-key-pair\") pod \"7496053f-e228-4c34-a1aa-75f4b5ba00bb\" (UID: \"7496053f-e228-4c34-a1aa-75f4b5ba00bb\") " Sep 12 17:43:30.786683 kubelet[3325]: I0912 17:43:30.786650 3325 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7496053f-e228-4c34-a1aa-75f4b5ba00bb-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "7496053f-e228-4c34-a1aa-75f4b5ba00bb" (UID: "7496053f-e228-4c34-a1aa-75f4b5ba00bb"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 17:43:30.795944 kubelet[3325]: I0912 17:43:30.795699 3325 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7496053f-e228-4c34-a1aa-75f4b5ba00bb-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "7496053f-e228-4c34-a1aa-75f4b5ba00bb" (UID: "7496053f-e228-4c34-a1aa-75f4b5ba00bb"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 17:43:30.797292 systemd[1]: var-lib-kubelet-pods-7496053f\x2de228\x2d4c34\x2da1aa\x2d75f4b5ba00bb-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 12 17:43:30.798476 kubelet[3325]: I0912 17:43:30.798100 3325 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7496053f-e228-4c34-a1aa-75f4b5ba00bb-kube-api-access-zkdzk" (OuterVolumeSpecName: "kube-api-access-zkdzk") pod "7496053f-e228-4c34-a1aa-75f4b5ba00bb" (UID: "7496053f-e228-4c34-a1aa-75f4b5ba00bb"). InnerVolumeSpecName "kube-api-access-zkdzk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:43:30.804371 systemd[1]: var-lib-kubelet-pods-7496053f\x2de228\x2d4c34\x2da1aa\x2d75f4b5ba00bb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzkdzk.mount: Deactivated successfully. Sep 12 17:43:30.898753 kubelet[3325]: I0912 17:43:30.898623 3325 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7496053f-e228-4c34-a1aa-75f4b5ba00bb-whisker-ca-bundle\") on node \"ip-172-31-19-53\" DevicePath \"\"" Sep 12 17:43:30.898753 kubelet[3325]: I0912 17:43:30.898662 3325 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7496053f-e228-4c34-a1aa-75f4b5ba00bb-whisker-backend-key-pair\") on node \"ip-172-31-19-53\" DevicePath \"\"" Sep 12 17:43:30.898753 kubelet[3325]: I0912 17:43:30.898675 3325 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zkdzk\" (UniqueName: \"kubernetes.io/projected/7496053f-e228-4c34-a1aa-75f4b5ba00bb-kube-api-access-zkdzk\") on node \"ip-172-31-19-53\" DevicePath \"\"" Sep 12 17:43:31.673602 systemd[1]: Removed slice kubepods-besteffort-pod7496053f_e228_4c34_a1aa_75f4b5ba00bb.slice - libcontainer container kubepods-besteffort-pod7496053f_e228_4c34_a1aa_75f4b5ba00bb.slice. Sep 12 17:43:31.847282 systemd[1]: Created slice kubepods-besteffort-pod7613a2ab_c7d7_4024_9f96_13c5bcdd72f9.slice - libcontainer container kubepods-besteffort-pod7613a2ab_c7d7_4024_9f96_13c5bcdd72f9.slice. Sep 12 17:43:31.908319 kubelet[3325]: I0912 17:43:31.908266 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7613a2ab-c7d7-4024-9f96-13c5bcdd72f9-whisker-ca-bundle\") pod \"whisker-696cfc44-m4v6j\" (UID: \"7613a2ab-c7d7-4024-9f96-13c5bcdd72f9\") " pod="calico-system/whisker-696cfc44-m4v6j" Sep 12 17:43:31.908319 kubelet[3325]: I0912 17:43:31.908314 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7613a2ab-c7d7-4024-9f96-13c5bcdd72f9-whisker-backend-key-pair\") pod \"whisker-696cfc44-m4v6j\" (UID: \"7613a2ab-c7d7-4024-9f96-13c5bcdd72f9\") " pod="calico-system/whisker-696cfc44-m4v6j" Sep 12 17:43:31.908747 kubelet[3325]: I0912 17:43:31.908335 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfd9r\" (UniqueName: \"kubernetes.io/projected/7613a2ab-c7d7-4024-9f96-13c5bcdd72f9-kube-api-access-xfd9r\") pod \"whisker-696cfc44-m4v6j\" (UID: \"7613a2ab-c7d7-4024-9f96-13c5bcdd72f9\") " pod="calico-system/whisker-696cfc44-m4v6j" Sep 12 17:43:32.153502 containerd[1909]: time="2025-09-12T17:43:32.153131461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-696cfc44-m4v6j,Uid:7613a2ab-c7d7-4024-9f96-13c5bcdd72f9,Namespace:calico-system,Attempt:0,}" Sep 12 17:43:32.328538 kubelet[3325]: I0912 17:43:32.328486 3325 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7496053f-e228-4c34-a1aa-75f4b5ba00bb" path="/var/lib/kubelet/pods/7496053f-e228-4c34-a1aa-75f4b5ba00bb/volumes" Sep 12 17:43:33.079619 (udev-worker)[4655]: Network interface NamePolicy= disabled on kernel command line. 
Sep 12 17:43:33.095246 systemd-networkd[1813]: cali28721ee813f: Link UP Sep 12 17:43:33.095533 systemd-networkd[1813]: cali28721ee813f: Gained carrier Sep 12 17:43:33.137999 containerd[1909]: 2025-09-12 17:43:32.280 [INFO][4775] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:43:33.137999 containerd[1909]: 2025-09-12 17:43:32.359 [INFO][4775] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--53-k8s-whisker--696cfc44--m4v6j-eth0 whisker-696cfc44- calico-system 7613a2ab-c7d7-4024-9f96-13c5bcdd72f9 934 0 2025-09-12 17:43:31 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:696cfc44 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-19-53 whisker-696cfc44-m4v6j eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali28721ee813f [] [] }} ContainerID="01ef8cf6d84df6763de3598b3810c8e486e14c7e4f7229b1577b64166e8740b6" Namespace="calico-system" Pod="whisker-696cfc44-m4v6j" WorkloadEndpoint="ip--172--31--19--53-k8s-whisker--696cfc44--m4v6j-" Sep 12 17:43:33.137999 containerd[1909]: 2025-09-12 17:43:32.359 [INFO][4775] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="01ef8cf6d84df6763de3598b3810c8e486e14c7e4f7229b1577b64166e8740b6" Namespace="calico-system" Pod="whisker-696cfc44-m4v6j" WorkloadEndpoint="ip--172--31--19--53-k8s-whisker--696cfc44--m4v6j-eth0" Sep 12 17:43:33.137999 containerd[1909]: 2025-09-12 17:43:32.951 [INFO][4783] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="01ef8cf6d84df6763de3598b3810c8e486e14c7e4f7229b1577b64166e8740b6" HandleID="k8s-pod-network.01ef8cf6d84df6763de3598b3810c8e486e14c7e4f7229b1577b64166e8740b6" Workload="ip--172--31--19--53-k8s-whisker--696cfc44--m4v6j-eth0" Sep 12 17:43:33.138352 containerd[1909]: 2025-09-12 17:43:32.956 [INFO][4783] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="01ef8cf6d84df6763de3598b3810c8e486e14c7e4f7229b1577b64166e8740b6" HandleID="k8s-pod-network.01ef8cf6d84df6763de3598b3810c8e486e14c7e4f7229b1577b64166e8740b6" Workload="ip--172--31--19--53-k8s-whisker--696cfc44--m4v6j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003995b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-53", "pod":"whisker-696cfc44-m4v6j", "timestamp":"2025-09-12 17:43:32.951889758 +0000 UTC"}, Hostname:"ip-172-31-19-53", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:43:33.138352 containerd[1909]: 2025-09-12 17:43:32.956 [INFO][4783] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:33.138352 containerd[1909]: 2025-09-12 17:43:32.958 [INFO][4783] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
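A side note on the key ip--172--31--19--53-k8s-whisker--696cfc44--m4v6j-eth0 in the records above: Calico flattens node name, orchestrator, pod name, and interface into one identifier, doubling any dash that occurs inside a component so the single-dash separators stay unambiguous. A minimal sketch of that encoding, inferred from the pattern in these logs (the function name is mine, not Calico's):

    package main

    import (
        "fmt"
        "strings"
    )

    // workloadEndpointKey joins the components the way the records above do:
    // a literal '-' inside any component is doubled so the single-dash
    // separators stay parseable.
    func workloadEndpointKey(node, orch, pod, iface string) string {
        esc := func(s string) string { return strings.ReplaceAll(s, "-", "--") }
        return esc(node) + "-" + orch + "-" + esc(pod) + "-" + iface
    }

    func main() {
        fmt.Println(workloadEndpointKey("ip-172-31-19-53", "k8s", "whisker-696cfc44-m4v6j", "eth0"))
        // ip--172--31--19--53-k8s-whisker--696cfc44--m4v6j-eth0
    }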
Sep 12 17:43:33.138352 containerd[1909]: 2025-09-12 17:43:32.960 [INFO][4783] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-53' Sep 12 17:43:33.138352 containerd[1909]: 2025-09-12 17:43:32.989 [INFO][4783] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.01ef8cf6d84df6763de3598b3810c8e486e14c7e4f7229b1577b64166e8740b6" host="ip-172-31-19-53" Sep 12 17:43:33.138352 containerd[1909]: 2025-09-12 17:43:33.007 [INFO][4783] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-53" Sep 12 17:43:33.138352 containerd[1909]: 2025-09-12 17:43:33.017 [INFO][4783] ipam/ipam.go 511: Trying affinity for 192.168.102.128/26 host="ip-172-31-19-53" Sep 12 17:43:33.138352 containerd[1909]: 2025-09-12 17:43:33.021 [INFO][4783] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.128/26 host="ip-172-31-19-53" Sep 12 17:43:33.138352 containerd[1909]: 2025-09-12 17:43:33.025 [INFO][4783] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.128/26 host="ip-172-31-19-53" Sep 12 17:43:33.138733 containerd[1909]: 2025-09-12 17:43:33.026 [INFO][4783] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.102.128/26 handle="k8s-pod-network.01ef8cf6d84df6763de3598b3810c8e486e14c7e4f7229b1577b64166e8740b6" host="ip-172-31-19-53" Sep 12 17:43:33.138733 containerd[1909]: 2025-09-12 17:43:33.030 [INFO][4783] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.01ef8cf6d84df6763de3598b3810c8e486e14c7e4f7229b1577b64166e8740b6 Sep 12 17:43:33.138733 containerd[1909]: 2025-09-12 17:43:33.039 [INFO][4783] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.102.128/26 handle="k8s-pod-network.01ef8cf6d84df6763de3598b3810c8e486e14c7e4f7229b1577b64166e8740b6" host="ip-172-31-19-53" Sep 12 17:43:33.138733 containerd[1909]: 2025-09-12 17:43:33.051 [INFO][4783] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.102.129/26] block=192.168.102.128/26 handle="k8s-pod-network.01ef8cf6d84df6763de3598b3810c8e486e14c7e4f7229b1577b64166e8740b6" host="ip-172-31-19-53" Sep 12 17:43:33.138733 containerd[1909]: 2025-09-12 17:43:33.051 [INFO][4783] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.129/26] handle="k8s-pod-network.01ef8cf6d84df6763de3598b3810c8e486e14c7e4f7229b1577b64166e8740b6" host="ip-172-31-19-53" Sep 12 17:43:33.138733 containerd[1909]: 2025-09-12 17:43:33.051 [INFO][4783] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
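The ipam/ipam.go lines above trace one complete assignment: take the host-wide lock, confirm this node's affinity for the 192.168.102.128/26 block, load it, claim the lowest free ordinal (192.168.102.129 here), record a handle named after the container, and write the block back. Below is a simplified in-memory sketch of the claim step, assuming a handle-per-ordinal model rather than Calico's actual datastore schema:

    package main

    import (
        "fmt"
        "net/netip"
        "sync"
    )

    // block models one /26 allocation block; the mutex stands in for the
    // "host-wide IPAM lock" the log acquires and releases around each claim.
    type block struct {
        mu     sync.Mutex
        base   netip.Addr // first address of the /26
        handle [64]string // "" means the ordinal is free
    }

    // assign claims the lowest free ordinal under a handle, mirroring
    // "Attempting to assign 1 addresses from block" / "Creating new handle".
    func (b *block) assign(handle string) (netip.Addr, bool) {
        b.mu.Lock()
        defer b.mu.Unlock()
        for i := range b.handle {
            if b.handle[i] == "" {
                b.handle[i] = handle
                addr := b.base
                for j := 0; j < i; j++ {
                    addr = addr.Next()
                }
                return addr, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        b := &block{base: netip.MustParseAddr("192.168.102.128")}
        // Ordinal 0 (.128) was evidently not free, since the log's first
        // claim lands on .129; exactly why is not visible in this excerpt.
        b.handle[0] = "reserved"
        ip, _ := b.assign("k8s-pod-network.01ef8cf6d84df6763de3598b3810c8e486e14c7e4f7229b1577b64166e8740b6")
        fmt.Println(ip) // 192.168.102.129, as in the log
    }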
Sep 12 17:43:33.138733 containerd[1909]: 2025-09-12 17:43:33.051 [INFO][4783] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.102.129/26] IPv6=[] ContainerID="01ef8cf6d84df6763de3598b3810c8e486e14c7e4f7229b1577b64166e8740b6" HandleID="k8s-pod-network.01ef8cf6d84df6763de3598b3810c8e486e14c7e4f7229b1577b64166e8740b6" Workload="ip--172--31--19--53-k8s-whisker--696cfc44--m4v6j-eth0" Sep 12 17:43:33.139009 containerd[1909]: 2025-09-12 17:43:33.056 [INFO][4775] cni-plugin/k8s.go 418: Populated endpoint ContainerID="01ef8cf6d84df6763de3598b3810c8e486e14c7e4f7229b1577b64166e8740b6" Namespace="calico-system" Pod="whisker-696cfc44-m4v6j" WorkloadEndpoint="ip--172--31--19--53-k8s-whisker--696cfc44--m4v6j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--53-k8s-whisker--696cfc44--m4v6j-eth0", GenerateName:"whisker-696cfc44-", Namespace:"calico-system", SelfLink:"", UID:"7613a2ab-c7d7-4024-9f96-13c5bcdd72f9", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 43, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"696cfc44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-53", ContainerID:"", Pod:"whisker-696cfc44-m4v6j", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.102.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali28721ee813f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:33.139009 containerd[1909]: 2025-09-12 17:43:33.058 [INFO][4775] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.129/32] ContainerID="01ef8cf6d84df6763de3598b3810c8e486e14c7e4f7229b1577b64166e8740b6" Namespace="calico-system" Pod="whisker-696cfc44-m4v6j" WorkloadEndpoint="ip--172--31--19--53-k8s-whisker--696cfc44--m4v6j-eth0" Sep 12 17:43:33.139147 containerd[1909]: 2025-09-12 17:43:33.058 [INFO][4775] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali28721ee813f ContainerID="01ef8cf6d84df6763de3598b3810c8e486e14c7e4f7229b1577b64166e8740b6" Namespace="calico-system" Pod="whisker-696cfc44-m4v6j" WorkloadEndpoint="ip--172--31--19--53-k8s-whisker--696cfc44--m4v6j-eth0" Sep 12 17:43:33.139147 containerd[1909]: 2025-09-12 17:43:33.092 [INFO][4775] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="01ef8cf6d84df6763de3598b3810c8e486e14c7e4f7229b1577b64166e8740b6" Namespace="calico-system" Pod="whisker-696cfc44-m4v6j" WorkloadEndpoint="ip--172--31--19--53-k8s-whisker--696cfc44--m4v6j-eth0" Sep 12 17:43:33.139234 containerd[1909]: 2025-09-12 17:43:33.094 [INFO][4775] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="01ef8cf6d84df6763de3598b3810c8e486e14c7e4f7229b1577b64166e8740b6" Namespace="calico-system" Pod="whisker-696cfc44-m4v6j" 
WorkloadEndpoint="ip--172--31--19--53-k8s-whisker--696cfc44--m4v6j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--53-k8s-whisker--696cfc44--m4v6j-eth0", GenerateName:"whisker-696cfc44-", Namespace:"calico-system", SelfLink:"", UID:"7613a2ab-c7d7-4024-9f96-13c5bcdd72f9", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 43, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"696cfc44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-53", ContainerID:"01ef8cf6d84df6763de3598b3810c8e486e14c7e4f7229b1577b64166e8740b6", Pod:"whisker-696cfc44-m4v6j", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.102.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali28721ee813f", MAC:"b2:a3:df:74:c8:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:33.139322 containerd[1909]: 2025-09-12 17:43:33.115 [INFO][4775] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="01ef8cf6d84df6763de3598b3810c8e486e14c7e4f7229b1577b64166e8740b6" Namespace="calico-system" Pod="whisker-696cfc44-m4v6j" WorkloadEndpoint="ip--172--31--19--53-k8s-whisker--696cfc44--m4v6j-eth0" Sep 12 17:43:33.198725 systemd-networkd[1813]: vxlan.calico: Link UP Sep 12 17:43:33.198735 systemd-networkd[1813]: vxlan.calico: Gained carrier Sep 12 17:43:33.227256 (udev-worker)[4657]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:43:33.228503 (udev-worker)[4656]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:43:33.361278 containerd[1909]: time="2025-09-12T17:43:33.361066071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77fc447859-2slhn,Uid:1edbce95-ac87-4b0e-b3d0-642f106d2276,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:43:33.643726 containerd[1909]: time="2025-09-12T17:43:33.643384669Z" level=info msg="connecting to shim 01ef8cf6d84df6763de3598b3810c8e486e14c7e4f7229b1577b64166e8740b6" address="unix:///run/containerd/s/0a5ec839e23672f5aea9a62e1bc06bb0e95b112c890ccaa91d76f67c9d84099a" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:43:33.703551 systemd[1]: Started cri-containerd-01ef8cf6d84df6763de3598b3810c8e486e14c7e4f7229b1577b64166e8740b6.scope - libcontainer container 01ef8cf6d84df6763de3598b3810c8e486e14c7e4f7229b1577b64166e8740b6. 
Sep 12 17:43:33.730117 systemd-networkd[1813]: cali3b2fe3b4e85: Link UP Sep 12 17:43:33.733072 systemd-networkd[1813]: cali3b2fe3b4e85: Gained carrier Sep 12 17:43:33.778680 containerd[1909]: 2025-09-12 17:43:33.508 [INFO][4877] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--53-k8s-calico--apiserver--77fc447859--2slhn-eth0 calico-apiserver-77fc447859- calico-apiserver 1edbce95-ac87-4b0e-b3d0-642f106d2276 853 0 2025-09-12 17:43:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77fc447859 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-53 calico-apiserver-77fc447859-2slhn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3b2fe3b4e85 [] [] }} ContainerID="1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" Namespace="calico-apiserver" Pod="calico-apiserver-77fc447859-2slhn" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--2slhn-" Sep 12 17:43:33.778680 containerd[1909]: 2025-09-12 17:43:33.508 [INFO][4877] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" Namespace="calico-apiserver" Pod="calico-apiserver-77fc447859-2slhn" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--2slhn-eth0" Sep 12 17:43:33.778680 containerd[1909]: 2025-09-12 17:43:33.599 [INFO][4887] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" HandleID="k8s-pod-network.1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" Workload="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--2slhn-eth0" Sep 12 17:43:33.779448 containerd[1909]: 2025-09-12 17:43:33.599 [INFO][4887] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" HandleID="k8s-pod-network.1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" Workload="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--2slhn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d58c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-19-53", "pod":"calico-apiserver-77fc447859-2slhn", "timestamp":"2025-09-12 17:43:33.599067793 +0000 UTC"}, Hostname:"ip-172-31-19-53", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:43:33.779448 containerd[1909]: 2025-09-12 17:43:33.599 [INFO][4887] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:33.779448 containerd[1909]: 2025-09-12 17:43:33.599 [INFO][4887] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:43:33.779448 containerd[1909]: 2025-09-12 17:43:33.599 [INFO][4887] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-53' Sep 12 17:43:33.779448 containerd[1909]: 2025-09-12 17:43:33.627 [INFO][4887] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" host="ip-172-31-19-53" Sep 12 17:43:33.779448 containerd[1909]: 2025-09-12 17:43:33.641 [INFO][4887] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-53" Sep 12 17:43:33.779448 containerd[1909]: 2025-09-12 17:43:33.649 [INFO][4887] ipam/ipam.go 511: Trying affinity for 192.168.102.128/26 host="ip-172-31-19-53" Sep 12 17:43:33.779448 containerd[1909]: 2025-09-12 17:43:33.653 [INFO][4887] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.128/26 host="ip-172-31-19-53" Sep 12 17:43:33.779448 containerd[1909]: 2025-09-12 17:43:33.666 [INFO][4887] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.128/26 host="ip-172-31-19-53" Sep 12 17:43:33.779851 containerd[1909]: 2025-09-12 17:43:33.666 [INFO][4887] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.102.128/26 handle="k8s-pod-network.1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" host="ip-172-31-19-53" Sep 12 17:43:33.779851 containerd[1909]: 2025-09-12 17:43:33.670 [INFO][4887] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c Sep 12 17:43:33.779851 containerd[1909]: 2025-09-12 17:43:33.684 [INFO][4887] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.102.128/26 handle="k8s-pod-network.1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" host="ip-172-31-19-53" Sep 12 17:43:33.779851 containerd[1909]: 2025-09-12 17:43:33.702 [INFO][4887] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.102.130/26] block=192.168.102.128/26 handle="k8s-pod-network.1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" host="ip-172-31-19-53" Sep 12 17:43:33.779851 containerd[1909]: 2025-09-12 17:43:33.702 [INFO][4887] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.130/26] handle="k8s-pod-network.1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" host="ip-172-31-19-53" Sep 12 17:43:33.779851 containerd[1909]: 2025-09-12 17:43:33.702 [INFO][4887] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:43:33.779851 containerd[1909]: 2025-09-12 17:43:33.703 [INFO][4887] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.102.130/26] IPv6=[] ContainerID="1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" HandleID="k8s-pod-network.1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" Workload="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--2slhn-eth0" Sep 12 17:43:33.784145 containerd[1909]: 2025-09-12 17:43:33.715 [INFO][4877] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" Namespace="calico-apiserver" Pod="calico-apiserver-77fc447859-2slhn" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--2slhn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--53-k8s-calico--apiserver--77fc447859--2slhn-eth0", GenerateName:"calico-apiserver-77fc447859-", Namespace:"calico-apiserver", SelfLink:"", UID:"1edbce95-ac87-4b0e-b3d0-642f106d2276", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 43, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77fc447859", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-53", ContainerID:"", Pod:"calico-apiserver-77fc447859-2slhn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.102.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3b2fe3b4e85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:33.784262 containerd[1909]: 2025-09-12 17:43:33.718 [INFO][4877] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.130/32] ContainerID="1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" Namespace="calico-apiserver" Pod="calico-apiserver-77fc447859-2slhn" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--2slhn-eth0" Sep 12 17:43:33.784262 containerd[1909]: 2025-09-12 17:43:33.718 [INFO][4877] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3b2fe3b4e85 ContainerID="1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" Namespace="calico-apiserver" Pod="calico-apiserver-77fc447859-2slhn" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--2slhn-eth0" Sep 12 17:43:33.784262 containerd[1909]: 2025-09-12 17:43:33.737 [INFO][4877] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" Namespace="calico-apiserver" Pod="calico-apiserver-77fc447859-2slhn" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--2slhn-eth0" Sep 12 17:43:33.784388 containerd[1909]: 2025-09-12 17:43:33.739 [INFO][4877] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" Namespace="calico-apiserver" Pod="calico-apiserver-77fc447859-2slhn" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--2slhn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--53-k8s-calico--apiserver--77fc447859--2slhn-eth0", GenerateName:"calico-apiserver-77fc447859-", Namespace:"calico-apiserver", SelfLink:"", UID:"1edbce95-ac87-4b0e-b3d0-642f106d2276", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 43, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77fc447859", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-53", ContainerID:"1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c", Pod:"calico-apiserver-77fc447859-2slhn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.102.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3b2fe3b4e85", MAC:"ee:a1:19:40:e4:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:33.784479 containerd[1909]: 2025-09-12 17:43:33.765 [INFO][4877] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" Namespace="calico-apiserver" Pod="calico-apiserver-77fc447859-2slhn" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--2slhn-eth0" Sep 12 17:43:33.870189 containerd[1909]: time="2025-09-12T17:43:33.870121300Z" level=info msg="connecting to shim 1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" address="unix:///run/containerd/s/c20a71d0c6a7f17e491b0edcf5aa19981df6a0b84db1fb0778d067f2676beb90" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:43:33.955299 systemd[1]: Started cri-containerd-1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c.scope - libcontainer container 1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c. 
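Each RunPodSandbox line pairs a CRI call from the kubelet with the shim handshake containerd logs (connecting to shim, protocol=ttrpc version=3) and the systemd scope that hosts the sandbox. A hedged sketch of the client side of that call against the CRI v1 API, using the metadata printed in the log; the socket path is the stock containerd default, and the real request also carries DNS, port-mapping, and Linux security config:

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)

        // Metadata copied from the RunPodSandbox record for this apiserver pod.
        resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{
                Metadata: &runtimeapi.PodSandboxMetadata{
                    Name:      "calico-apiserver-77fc447859-2slhn",
                    Uid:       "1edbce95-ac87-4b0e-b3d0-642f106d2276",
                    Namespace: "calico-apiserver",
                    Attempt:   0,
                },
            },
        })
        if err != nil {
            panic(err)
        }
        // resp.PodSandboxId is the id the later "returns sandbox id" record echoes.
        fmt.Println(resp.PodSandboxId)
    }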
Sep 12 17:43:33.977008 containerd[1909]: time="2025-09-12T17:43:33.976696482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-696cfc44-m4v6j,Uid:7613a2ab-c7d7-4024-9f96-13c5bcdd72f9,Namespace:calico-system,Attempt:0,} returns sandbox id \"01ef8cf6d84df6763de3598b3810c8e486e14c7e4f7229b1577b64166e8740b6\"" Sep 12 17:43:34.028857 containerd[1909]: time="2025-09-12T17:43:34.028019464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 17:43:34.122178 containerd[1909]: time="2025-09-12T17:43:34.122135068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77fc447859-2slhn,Uid:1edbce95-ac87-4b0e-b3d0-642f106d2276,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c\"" Sep 12 17:43:34.326676 containerd[1909]: time="2025-09-12T17:43:34.326328355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bffdd5bf7-slkqb,Uid:c4bbfb4c-8cf3-4210-98c5-53617abee5f9,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:43:34.327021 containerd[1909]: time="2025-09-12T17:43:34.326896028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-88rqr,Uid:fa257bab-3021-4bf2-9b9d-56f7cf83a781,Namespace:kube-system,Attempt:0,}" Sep 12 17:43:34.440772 systemd-networkd[1813]: cali28721ee813f: Gained IPv6LL Sep 12 17:43:34.556573 systemd-networkd[1813]: cali4a83b8cb087: Link UP Sep 12 17:43:34.557150 systemd-networkd[1813]: cali4a83b8cb087: Gained carrier Sep 12 17:43:34.628792 containerd[1909]: 2025-09-12 17:43:34.413 [INFO][5027] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--53-k8s-calico--apiserver--6bffdd5bf7--slkqb-eth0 calico-apiserver-6bffdd5bf7- calico-apiserver c4bbfb4c-8cf3-4210-98c5-53617abee5f9 861 0 2025-09-12 17:43:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6bffdd5bf7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-53 calico-apiserver-6bffdd5bf7-slkqb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4a83b8cb087 [] [] }} ContainerID="11f093b4e085e05b5a89317bd70c0596c745a23a900d70c7c9ec7274a161d8b0" Namespace="calico-apiserver" Pod="calico-apiserver-6bffdd5bf7-slkqb" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--apiserver--6bffdd5bf7--slkqb-" Sep 12 17:43:34.628792 containerd[1909]: 2025-09-12 17:43:34.413 [INFO][5027] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="11f093b4e085e05b5a89317bd70c0596c745a23a900d70c7c9ec7274a161d8b0" Namespace="calico-apiserver" Pod="calico-apiserver-6bffdd5bf7-slkqb" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--apiserver--6bffdd5bf7--slkqb-eth0" Sep 12 17:43:34.628792 containerd[1909]: 2025-09-12 17:43:34.472 [INFO][5051] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="11f093b4e085e05b5a89317bd70c0596c745a23a900d70c7c9ec7274a161d8b0" HandleID="k8s-pod-network.11f093b4e085e05b5a89317bd70c0596c745a23a900d70c7c9ec7274a161d8b0" Workload="ip--172--31--19--53-k8s-calico--apiserver--6bffdd5bf7--slkqb-eth0" Sep 12 17:43:34.629431 containerd[1909]: 2025-09-12 17:43:34.473 [INFO][5051] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="11f093b4e085e05b5a89317bd70c0596c745a23a900d70c7c9ec7274a161d8b0" 
HandleID="k8s-pod-network.11f093b4e085e05b5a89317bd70c0596c745a23a900d70c7c9ec7274a161d8b0" Workload="ip--172--31--19--53-k8s-calico--apiserver--6bffdd5bf7--slkqb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5920), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-19-53", "pod":"calico-apiserver-6bffdd5bf7-slkqb", "timestamp":"2025-09-12 17:43:34.472850289 +0000 UTC"}, Hostname:"ip-172-31-19-53", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:43:34.629431 containerd[1909]: 2025-09-12 17:43:34.473 [INFO][5051] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:34.629431 containerd[1909]: 2025-09-12 17:43:34.473 [INFO][5051] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:43:34.629431 containerd[1909]: 2025-09-12 17:43:34.473 [INFO][5051] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-53' Sep 12 17:43:34.629431 containerd[1909]: 2025-09-12 17:43:34.484 [INFO][5051] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.11f093b4e085e05b5a89317bd70c0596c745a23a900d70c7c9ec7274a161d8b0" host="ip-172-31-19-53" Sep 12 17:43:34.629431 containerd[1909]: 2025-09-12 17:43:34.493 [INFO][5051] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-53" Sep 12 17:43:34.629431 containerd[1909]: 2025-09-12 17:43:34.499 [INFO][5051] ipam/ipam.go 511: Trying affinity for 192.168.102.128/26 host="ip-172-31-19-53" Sep 12 17:43:34.629431 containerd[1909]: 2025-09-12 17:43:34.502 [INFO][5051] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.128/26 host="ip-172-31-19-53" Sep 12 17:43:34.629431 containerd[1909]: 2025-09-12 17:43:34.505 [INFO][5051] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.128/26 host="ip-172-31-19-53" Sep 12 17:43:34.630296 containerd[1909]: 2025-09-12 17:43:34.505 [INFO][5051] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.102.128/26 handle="k8s-pod-network.11f093b4e085e05b5a89317bd70c0596c745a23a900d70c7c9ec7274a161d8b0" host="ip-172-31-19-53" Sep 12 17:43:34.630296 containerd[1909]: 2025-09-12 17:43:34.507 [INFO][5051] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.11f093b4e085e05b5a89317bd70c0596c745a23a900d70c7c9ec7274a161d8b0 Sep 12 17:43:34.630296 containerd[1909]: 2025-09-12 17:43:34.513 [INFO][5051] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.102.128/26 handle="k8s-pod-network.11f093b4e085e05b5a89317bd70c0596c745a23a900d70c7c9ec7274a161d8b0" host="ip-172-31-19-53" Sep 12 17:43:34.630296 containerd[1909]: 2025-09-12 17:43:34.538 [INFO][5051] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.102.131/26] block=192.168.102.128/26 handle="k8s-pod-network.11f093b4e085e05b5a89317bd70c0596c745a23a900d70c7c9ec7274a161d8b0" host="ip-172-31-19-53" Sep 12 17:43:34.630296 containerd[1909]: 2025-09-12 17:43:34.539 [INFO][5051] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.131/26] handle="k8s-pod-network.11f093b4e085e05b5a89317bd70c0596c745a23a900d70c7c9ec7274a161d8b0" host="ip-172-31-19-53" Sep 12 17:43:34.630296 containerd[1909]: 2025-09-12 17:43:34.539 [INFO][5051] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:43:34.630296 containerd[1909]: 2025-09-12 17:43:34.539 [INFO][5051] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.102.131/26] IPv6=[] ContainerID="11f093b4e085e05b5a89317bd70c0596c745a23a900d70c7c9ec7274a161d8b0" HandleID="k8s-pod-network.11f093b4e085e05b5a89317bd70c0596c745a23a900d70c7c9ec7274a161d8b0" Workload="ip--172--31--19--53-k8s-calico--apiserver--6bffdd5bf7--slkqb-eth0" Sep 12 17:43:34.630617 containerd[1909]: 2025-09-12 17:43:34.543 [INFO][5027] cni-plugin/k8s.go 418: Populated endpoint ContainerID="11f093b4e085e05b5a89317bd70c0596c745a23a900d70c7c9ec7274a161d8b0" Namespace="calico-apiserver" Pod="calico-apiserver-6bffdd5bf7-slkqb" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--apiserver--6bffdd5bf7--slkqb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--53-k8s-calico--apiserver--6bffdd5bf7--slkqb-eth0", GenerateName:"calico-apiserver-6bffdd5bf7-", Namespace:"calico-apiserver", SelfLink:"", UID:"c4bbfb4c-8cf3-4210-98c5-53617abee5f9", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 43, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bffdd5bf7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-53", ContainerID:"", Pod:"calico-apiserver-6bffdd5bf7-slkqb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.102.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4a83b8cb087", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:34.630711 containerd[1909]: 2025-09-12 17:43:34.544 [INFO][5027] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.131/32] ContainerID="11f093b4e085e05b5a89317bd70c0596c745a23a900d70c7c9ec7274a161d8b0" Namespace="calico-apiserver" Pod="calico-apiserver-6bffdd5bf7-slkqb" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--apiserver--6bffdd5bf7--slkqb-eth0" Sep 12 17:43:34.630711 containerd[1909]: 2025-09-12 17:43:34.544 [INFO][5027] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a83b8cb087 ContainerID="11f093b4e085e05b5a89317bd70c0596c745a23a900d70c7c9ec7274a161d8b0" Namespace="calico-apiserver" Pod="calico-apiserver-6bffdd5bf7-slkqb" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--apiserver--6bffdd5bf7--slkqb-eth0" Sep 12 17:43:34.630711 containerd[1909]: 2025-09-12 17:43:34.558 [INFO][5027] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="11f093b4e085e05b5a89317bd70c0596c745a23a900d70c7c9ec7274a161d8b0" Namespace="calico-apiserver" Pod="calico-apiserver-6bffdd5bf7-slkqb" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--apiserver--6bffdd5bf7--slkqb-eth0" Sep 12 17:43:34.630832 containerd[1909]: 2025-09-12 17:43:34.564 [INFO][5027] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="11f093b4e085e05b5a89317bd70c0596c745a23a900d70c7c9ec7274a161d8b0" Namespace="calico-apiserver" Pod="calico-apiserver-6bffdd5bf7-slkqb" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--apiserver--6bffdd5bf7--slkqb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--53-k8s-calico--apiserver--6bffdd5bf7--slkqb-eth0", GenerateName:"calico-apiserver-6bffdd5bf7-", Namespace:"calico-apiserver", SelfLink:"", UID:"c4bbfb4c-8cf3-4210-98c5-53617abee5f9", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 43, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bffdd5bf7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-53", ContainerID:"11f093b4e085e05b5a89317bd70c0596c745a23a900d70c7c9ec7274a161d8b0", Pod:"calico-apiserver-6bffdd5bf7-slkqb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.102.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4a83b8cb087", MAC:"de:6f:75:d5:cb:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:34.635031 containerd[1909]: 2025-09-12 17:43:34.623 [INFO][5027] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="11f093b4e085e05b5a89317bd70c0596c745a23a900d70c7c9ec7274a161d8b0" Namespace="calico-apiserver" Pod="calico-apiserver-6bffdd5bf7-slkqb" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--apiserver--6bffdd5bf7--slkqb-eth0" Sep 12 17:43:34.631719 systemd-networkd[1813]: vxlan.calico: Gained IPv6LL Sep 12 17:43:34.686961 containerd[1909]: time="2025-09-12T17:43:34.686733003Z" level=info msg="connecting to shim 11f093b4e085e05b5a89317bd70c0596c745a23a900d70c7c9ec7274a161d8b0" address="unix:///run/containerd/s/7313652e2abb84ef6df58aea6cf368723da31e77c2aaa34a1f9b68e4fb198295" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:43:34.736180 systemd[1]: Started cri-containerd-11f093b4e085e05b5a89317bd70c0596c745a23a900d70c7c9ec7274a161d8b0.scope - libcontainer container 11f093b4e085e05b5a89317bd70c0596c745a23a900d70c7c9ec7274a161d8b0. 
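The Gained IPv6LL messages from systemd-networkd mean the new links (the cali veths and vxlan.calico) have finished IPv6 link-local configuration. The textbook derivation is EUI-64 from the MAC, sketched below with the whisker endpoint's MAC from the log; note that networkd can also be configured for stable-privacy link-local addresses, in which case the result differs:

    package main

    import (
        "fmt"
        "net"
    )

    // eui64LinkLocal derives the classic EUI-64 IPv6 link-local address:
    // flip the universal/local bit of the first MAC byte and insert ff:fe
    // between the two MAC halves under the fe80::/64 prefix.
    func eui64LinkLocal(mac net.HardwareAddr) net.IP {
        ip := make(net.IP, 16)
        ip[0], ip[1] = 0xfe, 0x80
        ip[8] = mac[0] ^ 0x02
        ip[9], ip[10], ip[11] = mac[1], mac[2], 0xff
        ip[12], ip[13], ip[14], ip[15] = 0xfe, mac[3], mac[4], mac[5]
        return ip
    }

    func main() {
        mac, _ := net.ParseMAC("b2:a3:df:74:c8:70") // whisker endpoint MAC from the log
        fmt.Println(eui64LinkLocal(mac))            // fe80::b0a3:dfff:fe74:c870
    }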
Sep 12 17:43:34.759145 systemd-networkd[1813]: calid62cd2801a0: Link UP Sep 12 17:43:34.764960 systemd-networkd[1813]: calid62cd2801a0: Gained carrier Sep 12 17:43:34.802720 containerd[1909]: 2025-09-12 17:43:34.427 [INFO][5032] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--53-k8s-coredns--668d6bf9bc--88rqr-eth0 coredns-668d6bf9bc- kube-system fa257bab-3021-4bf2-9b9d-56f7cf83a781 848 0 2025-09-12 17:42:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-19-53 coredns-668d6bf9bc-88rqr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid62cd2801a0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="70790e0933cfc4b2b3e8a992e65b8b9d6d9ce646124cbcbc04cd7c7482a7a03b" Namespace="kube-system" Pod="coredns-668d6bf9bc-88rqr" WorkloadEndpoint="ip--172--31--19--53-k8s-coredns--668d6bf9bc--88rqr-" Sep 12 17:43:34.802720 containerd[1909]: 2025-09-12 17:43:34.428 [INFO][5032] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="70790e0933cfc4b2b3e8a992e65b8b9d6d9ce646124cbcbc04cd7c7482a7a03b" Namespace="kube-system" Pod="coredns-668d6bf9bc-88rqr" WorkloadEndpoint="ip--172--31--19--53-k8s-coredns--668d6bf9bc--88rqr-eth0" Sep 12 17:43:34.802720 containerd[1909]: 2025-09-12 17:43:34.484 [INFO][5056] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="70790e0933cfc4b2b3e8a992e65b8b9d6d9ce646124cbcbc04cd7c7482a7a03b" HandleID="k8s-pod-network.70790e0933cfc4b2b3e8a992e65b8b9d6d9ce646124cbcbc04cd7c7482a7a03b" Workload="ip--172--31--19--53-k8s-coredns--668d6bf9bc--88rqr-eth0" Sep 12 17:43:34.803032 containerd[1909]: 2025-09-12 17:43:34.484 [INFO][5056] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="70790e0933cfc4b2b3e8a992e65b8b9d6d9ce646124cbcbc04cd7c7482a7a03b" HandleID="k8s-pod-network.70790e0933cfc4b2b3e8a992e65b8b9d6d9ce646124cbcbc04cd7c7482a7a03b" Workload="ip--172--31--19--53-k8s-coredns--668d6bf9bc--88rqr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cfae0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-19-53", "pod":"coredns-668d6bf9bc-88rqr", "timestamp":"2025-09-12 17:43:34.484170373 +0000 UTC"}, Hostname:"ip-172-31-19-53", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:43:34.803032 containerd[1909]: 2025-09-12 17:43:34.484 [INFO][5056] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:34.803032 containerd[1909]: 2025-09-12 17:43:34.539 [INFO][5056] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:43:34.803032 containerd[1909]: 2025-09-12 17:43:34.540 [INFO][5056] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-53' Sep 12 17:43:34.803032 containerd[1909]: 2025-09-12 17:43:34.588 [INFO][5056] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.70790e0933cfc4b2b3e8a992e65b8b9d6d9ce646124cbcbc04cd7c7482a7a03b" host="ip-172-31-19-53" Sep 12 17:43:34.803032 containerd[1909]: 2025-09-12 17:43:34.597 [INFO][5056] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-53" Sep 12 17:43:34.803032 containerd[1909]: 2025-09-12 17:43:34.606 [INFO][5056] ipam/ipam.go 511: Trying affinity for 192.168.102.128/26 host="ip-172-31-19-53" Sep 12 17:43:34.803032 containerd[1909]: 2025-09-12 17:43:34.619 [INFO][5056] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.128/26 host="ip-172-31-19-53" Sep 12 17:43:34.803032 containerd[1909]: 2025-09-12 17:43:34.623 [INFO][5056] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.128/26 host="ip-172-31-19-53" Sep 12 17:43:34.804131 containerd[1909]: 2025-09-12 17:43:34.626 [INFO][5056] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.102.128/26 handle="k8s-pod-network.70790e0933cfc4b2b3e8a992e65b8b9d6d9ce646124cbcbc04cd7c7482a7a03b" host="ip-172-31-19-53" Sep 12 17:43:34.804131 containerd[1909]: 2025-09-12 17:43:34.635 [INFO][5056] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.70790e0933cfc4b2b3e8a992e65b8b9d6d9ce646124cbcbc04cd7c7482a7a03b Sep 12 17:43:34.804131 containerd[1909]: 2025-09-12 17:43:34.691 [INFO][5056] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.102.128/26 handle="k8s-pod-network.70790e0933cfc4b2b3e8a992e65b8b9d6d9ce646124cbcbc04cd7c7482a7a03b" host="ip-172-31-19-53" Sep 12 17:43:34.804131 containerd[1909]: 2025-09-12 17:43:34.739 [INFO][5056] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.102.132/26] block=192.168.102.128/26 handle="k8s-pod-network.70790e0933cfc4b2b3e8a992e65b8b9d6d9ce646124cbcbc04cd7c7482a7a03b" host="ip-172-31-19-53" Sep 12 17:43:34.804131 containerd[1909]: 2025-09-12 17:43:34.740 [INFO][5056] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.132/26] handle="k8s-pod-network.70790e0933cfc4b2b3e8a992e65b8b9d6d9ce646124cbcbc04cd7c7482a7a03b" host="ip-172-31-19-53" Sep 12 17:43:34.804131 containerd[1909]: 2025-09-12 17:43:34.740 [INFO][5056] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:43:34.804131 containerd[1909]: 2025-09-12 17:43:34.740 [INFO][5056] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.102.132/26] IPv6=[] ContainerID="70790e0933cfc4b2b3e8a992e65b8b9d6d9ce646124cbcbc04cd7c7482a7a03b" HandleID="k8s-pod-network.70790e0933cfc4b2b3e8a992e65b8b9d6d9ce646124cbcbc04cd7c7482a7a03b" Workload="ip--172--31--19--53-k8s-coredns--668d6bf9bc--88rqr-eth0" Sep 12 17:43:34.804524 containerd[1909]: 2025-09-12 17:43:34.745 [INFO][5032] cni-plugin/k8s.go 418: Populated endpoint ContainerID="70790e0933cfc4b2b3e8a992e65b8b9d6d9ce646124cbcbc04cd7c7482a7a03b" Namespace="kube-system" Pod="coredns-668d6bf9bc-88rqr" WorkloadEndpoint="ip--172--31--19--53-k8s-coredns--668d6bf9bc--88rqr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--53-k8s-coredns--668d6bf9bc--88rqr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fa257bab-3021-4bf2-9b9d-56f7cf83a781", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-53", ContainerID:"", Pod:"coredns-668d6bf9bc-88rqr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.102.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid62cd2801a0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:34.804524 containerd[1909]: 2025-09-12 17:43:34.745 [INFO][5032] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.132/32] ContainerID="70790e0933cfc4b2b3e8a992e65b8b9d6d9ce646124cbcbc04cd7c7482a7a03b" Namespace="kube-system" Pod="coredns-668d6bf9bc-88rqr" WorkloadEndpoint="ip--172--31--19--53-k8s-coredns--668d6bf9bc--88rqr-eth0" Sep 12 17:43:34.804524 containerd[1909]: 2025-09-12 17:43:34.745 [INFO][5032] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid62cd2801a0 ContainerID="70790e0933cfc4b2b3e8a992e65b8b9d6d9ce646124cbcbc04cd7c7482a7a03b" Namespace="kube-system" Pod="coredns-668d6bf9bc-88rqr" WorkloadEndpoint="ip--172--31--19--53-k8s-coredns--668d6bf9bc--88rqr-eth0" Sep 12 17:43:34.804524 containerd[1909]: 2025-09-12 17:43:34.776 [INFO][5032] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="70790e0933cfc4b2b3e8a992e65b8b9d6d9ce646124cbcbc04cd7c7482a7a03b" Namespace="kube-system" Pod="coredns-668d6bf9bc-88rqr" 
WorkloadEndpoint="ip--172--31--19--53-k8s-coredns--668d6bf9bc--88rqr-eth0" Sep 12 17:43:34.804524 containerd[1909]: 2025-09-12 17:43:34.778 [INFO][5032] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="70790e0933cfc4b2b3e8a992e65b8b9d6d9ce646124cbcbc04cd7c7482a7a03b" Namespace="kube-system" Pod="coredns-668d6bf9bc-88rqr" WorkloadEndpoint="ip--172--31--19--53-k8s-coredns--668d6bf9bc--88rqr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--53-k8s-coredns--668d6bf9bc--88rqr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fa257bab-3021-4bf2-9b9d-56f7cf83a781", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-53", ContainerID:"70790e0933cfc4b2b3e8a992e65b8b9d6d9ce646124cbcbc04cd7c7482a7a03b", Pod:"coredns-668d6bf9bc-88rqr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.102.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid62cd2801a0", MAC:"86:5d:ff:87:33:46", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:34.804524 containerd[1909]: 2025-09-12 17:43:34.795 [INFO][5032] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="70790e0933cfc4b2b3e8a992e65b8b9d6d9ce646124cbcbc04cd7c7482a7a03b" Namespace="kube-system" Pod="coredns-668d6bf9bc-88rqr" WorkloadEndpoint="ip--172--31--19--53-k8s-coredns--668d6bf9bc--88rqr-eth0" Sep 12 17:43:34.889831 containerd[1909]: time="2025-09-12T17:43:34.889685638Z" level=info msg="connecting to shim 70790e0933cfc4b2b3e8a992e65b8b9d6d9ce646124cbcbc04cd7c7482a7a03b" address="unix:///run/containerd/s/1da414ad9711e55139b5e6d9cb7981cfcb9a79a2ee940d3e32c56b4a279099c0" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:43:34.940518 systemd[1]: Started cri-containerd-70790e0933cfc4b2b3e8a992e65b8b9d6d9ce646124cbcbc04cd7c7482a7a03b.scope - libcontainer container 70790e0933cfc4b2b3e8a992e65b8b9d6d9ce646124cbcbc04cd7c7482a7a03b. 
Sep 12 17:43:34.970983 containerd[1909]: time="2025-09-12T17:43:34.970891706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bffdd5bf7-slkqb,Uid:c4bbfb4c-8cf3-4210-98c5-53617abee5f9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"11f093b4e085e05b5a89317bd70c0596c745a23a900d70c7c9ec7274a161d8b0\"" Sep 12 17:43:35.016369 containerd[1909]: time="2025-09-12T17:43:35.016323188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-88rqr,Uid:fa257bab-3021-4bf2-9b9d-56f7cf83a781,Namespace:kube-system,Attempt:0,} returns sandbox id \"70790e0933cfc4b2b3e8a992e65b8b9d6d9ce646124cbcbc04cd7c7482a7a03b\"" Sep 12 17:43:35.057317 containerd[1909]: time="2025-09-12T17:43:35.057236622Z" level=info msg="CreateContainer within sandbox \"70790e0933cfc4b2b3e8a992e65b8b9d6d9ce646124cbcbc04cd7c7482a7a03b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:43:35.083737 containerd[1909]: time="2025-09-12T17:43:35.083573218Z" level=info msg="Container c08f035fe2608aaf089ff71fa710c8083784f2f341e9db140124997de6227b72: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:43:35.094903 containerd[1909]: time="2025-09-12T17:43:35.094821401Z" level=info msg="CreateContainer within sandbox \"70790e0933cfc4b2b3e8a992e65b8b9d6d9ce646124cbcbc04cd7c7482a7a03b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c08f035fe2608aaf089ff71fa710c8083784f2f341e9db140124997de6227b72\"" Sep 12 17:43:35.096119 containerd[1909]: time="2025-09-12T17:43:35.095793357Z" level=info msg="StartContainer for \"c08f035fe2608aaf089ff71fa710c8083784f2f341e9db140124997de6227b72\"" Sep 12 17:43:35.098331 containerd[1909]: time="2025-09-12T17:43:35.098287335Z" level=info msg="connecting to shim c08f035fe2608aaf089ff71fa710c8083784f2f341e9db140124997de6227b72" address="unix:///run/containerd/s/1da414ad9711e55139b5e6d9cb7981cfcb9a79a2ee940d3e32c56b4a279099c0" protocol=ttrpc version=3 Sep 12 17:43:35.124173 systemd[1]: Started cri-containerd-c08f035fe2608aaf089ff71fa710c8083784f2f341e9db140124997de6227b72.scope - libcontainer container c08f035fe2608aaf089ff71fa710c8083784f2f341e9db140124997de6227b72. 
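The CreateContainer/StartContainer sequence above is the second half of the CRI flow: containerd creates the coredns container inside the sandbox returned earlier, hands back a container id, and the kubelet starts it by id (the StartContainer returns-successfully record follows). A sketch of those two calls, mirroring the RunPodSandbox example earlier; the image reference is a placeholder since the actual coredns image is not named in this excerpt, and the real ContainerConfig (mounts, env, probes, SandboxConfig) is elided:

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            // Sandbox id from the "returns sandbox id" record above.
            PodSandboxId: "70790e0933cfc4b2b3e8a992e65b8b9d6d9ce646124cbcbc04cd7c7482a7a03b",
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "coredns", Attempt: 0},
                // Placeholder image ref; not shown in this log excerpt.
                Image: &runtimeapi.ImageSpec{Image: "registry.example/coredns:latest"},
            },
        })
        if err != nil {
            panic(err)
        }
        // created.ContainerId is the id the StartContainer records then reference.
        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
            ContainerId: created.ContainerId,
        }); err != nil {
            panic(err)
        }
        fmt.Println("started", created.ContainerId)
    }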
Sep 12 17:43:35.171051 containerd[1909]: time="2025-09-12T17:43:35.170574810Z" level=info msg="StartContainer for \"c08f035fe2608aaf089ff71fa710c8083784f2f341e9db140124997de6227b72\" returns successfully" Sep 12 17:43:35.325204 containerd[1909]: time="2025-09-12T17:43:35.324737674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55b4977774-rqh99,Uid:c4afc88f-9627-4122-bbaf-e9f08ff5e1e2,Namespace:calico-system,Attempt:0,}" Sep 12 17:43:35.608898 systemd-networkd[1813]: cali50f07b9a552: Link UP Sep 12 17:43:35.611431 systemd-networkd[1813]: cali50f07b9a552: Gained carrier Sep 12 17:43:35.647641 containerd[1909]: 2025-09-12 17:43:35.390 [INFO][5206] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--53-k8s-calico--kube--controllers--55b4977774--rqh99-eth0 calico-kube-controllers-55b4977774- calico-system c4afc88f-9627-4122-bbaf-e9f08ff5e1e2 863 0 2025-09-12 17:43:09 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:55b4977774 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-19-53 calico-kube-controllers-55b4977774-rqh99 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali50f07b9a552 [] [] }} ContainerID="68fcad113fa3d9497a22b45a61b7be8146174426862d7a3f7222266bdb8e8068" Namespace="calico-system" Pod="calico-kube-controllers-55b4977774-rqh99" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--kube--controllers--55b4977774--rqh99-" Sep 12 17:43:35.647641 containerd[1909]: 2025-09-12 17:43:35.391 [INFO][5206] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="68fcad113fa3d9497a22b45a61b7be8146174426862d7a3f7222266bdb8e8068" Namespace="calico-system" Pod="calico-kube-controllers-55b4977774-rqh99" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--kube--controllers--55b4977774--rqh99-eth0" Sep 12 17:43:35.647641 containerd[1909]: 2025-09-12 17:43:35.498 [INFO][5219] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="68fcad113fa3d9497a22b45a61b7be8146174426862d7a3f7222266bdb8e8068" HandleID="k8s-pod-network.68fcad113fa3d9497a22b45a61b7be8146174426862d7a3f7222266bdb8e8068" Workload="ip--172--31--19--53-k8s-calico--kube--controllers--55b4977774--rqh99-eth0" Sep 12 17:43:35.647641 containerd[1909]: 2025-09-12 17:43:35.498 [INFO][5219] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="68fcad113fa3d9497a22b45a61b7be8146174426862d7a3f7222266bdb8e8068" HandleID="k8s-pod-network.68fcad113fa3d9497a22b45a61b7be8146174426862d7a3f7222266bdb8e8068" Workload="ip--172--31--19--53-k8s-calico--kube--controllers--55b4977774--rqh99-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036efd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-53", "pod":"calico-kube-controllers-55b4977774-rqh99", "timestamp":"2025-09-12 17:43:35.498335104 +0000 UTC"}, Hostname:"ip-172-31-19-53", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:43:35.647641 containerd[1909]: 2025-09-12 17:43:35.498 [INFO][5219] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 12 17:43:35.647641 containerd[1909]: 2025-09-12 17:43:35.498 [INFO][5219] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:43:35.647641 containerd[1909]: 2025-09-12 17:43:35.498 [INFO][5219] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-53' Sep 12 17:43:35.647641 containerd[1909]: 2025-09-12 17:43:35.509 [INFO][5219] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.68fcad113fa3d9497a22b45a61b7be8146174426862d7a3f7222266bdb8e8068" host="ip-172-31-19-53" Sep 12 17:43:35.647641 containerd[1909]: 2025-09-12 17:43:35.520 [INFO][5219] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-53" Sep 12 17:43:35.647641 containerd[1909]: 2025-09-12 17:43:35.539 [INFO][5219] ipam/ipam.go 511: Trying affinity for 192.168.102.128/26 host="ip-172-31-19-53" Sep 12 17:43:35.647641 containerd[1909]: 2025-09-12 17:43:35.544 [INFO][5219] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.128/26 host="ip-172-31-19-53" Sep 12 17:43:35.647641 containerd[1909]: 2025-09-12 17:43:35.554 [INFO][5219] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.128/26 host="ip-172-31-19-53" Sep 12 17:43:35.647641 containerd[1909]: 2025-09-12 17:43:35.554 [INFO][5219] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.102.128/26 handle="k8s-pod-network.68fcad113fa3d9497a22b45a61b7be8146174426862d7a3f7222266bdb8e8068" host="ip-172-31-19-53" Sep 12 17:43:35.647641 containerd[1909]: 2025-09-12 17:43:35.561 [INFO][5219] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.68fcad113fa3d9497a22b45a61b7be8146174426862d7a3f7222266bdb8e8068 Sep 12 17:43:35.647641 containerd[1909]: 2025-09-12 17:43:35.571 [INFO][5219] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.102.128/26 handle="k8s-pod-network.68fcad113fa3d9497a22b45a61b7be8146174426862d7a3f7222266bdb8e8068" host="ip-172-31-19-53" Sep 12 17:43:35.647641 containerd[1909]: 2025-09-12 17:43:35.594 [INFO][5219] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.102.133/26] block=192.168.102.128/26 handle="k8s-pod-network.68fcad113fa3d9497a22b45a61b7be8146174426862d7a3f7222266bdb8e8068" host="ip-172-31-19-53" Sep 12 17:43:35.647641 containerd[1909]: 2025-09-12 17:43:35.594 [INFO][5219] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.133/26] handle="k8s-pod-network.68fcad113fa3d9497a22b45a61b7be8146174426862d7a3f7222266bdb8e8068" host="ip-172-31-19-53" Sep 12 17:43:35.647641 containerd[1909]: 2025-09-12 17:43:35.594 [INFO][5219] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:43:35.647641 containerd[1909]: 2025-09-12 17:43:35.595 [INFO][5219] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.102.133/26] IPv6=[] ContainerID="68fcad113fa3d9497a22b45a61b7be8146174426862d7a3f7222266bdb8e8068" HandleID="k8s-pod-network.68fcad113fa3d9497a22b45a61b7be8146174426862d7a3f7222266bdb8e8068" Workload="ip--172--31--19--53-k8s-calico--kube--controllers--55b4977774--rqh99-eth0"
Sep 12 17:43:35.651366 containerd[1909]: 2025-09-12 17:43:35.601 [INFO][5206] cni-plugin/k8s.go 418: Populated endpoint ContainerID="68fcad113fa3d9497a22b45a61b7be8146174426862d7a3f7222266bdb8e8068" Namespace="calico-system" Pod="calico-kube-controllers-55b4977774-rqh99" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--kube--controllers--55b4977774--rqh99-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--53-k8s-calico--kube--controllers--55b4977774--rqh99-eth0", GenerateName:"calico-kube-controllers-55b4977774-", Namespace:"calico-system", SelfLink:"", UID:"c4afc88f-9627-4122-bbaf-e9f08ff5e1e2", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 43, 9, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55b4977774", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-53", ContainerID:"", Pod:"calico-kube-controllers-55b4977774-rqh99", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.102.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali50f07b9a552", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:43:35.651366 containerd[1909]: 2025-09-12 17:43:35.602 [INFO][5206] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.133/32] ContainerID="68fcad113fa3d9497a22b45a61b7be8146174426862d7a3f7222266bdb8e8068" Namespace="calico-system" Pod="calico-kube-controllers-55b4977774-rqh99" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--kube--controllers--55b4977774--rqh99-eth0"
Sep 12 17:43:35.651366 containerd[1909]: 2025-09-12 17:43:35.602 [INFO][5206] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali50f07b9a552 ContainerID="68fcad113fa3d9497a22b45a61b7be8146174426862d7a3f7222266bdb8e8068" Namespace="calico-system" Pod="calico-kube-controllers-55b4977774-rqh99" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--kube--controllers--55b4977774--rqh99-eth0"
Sep 12 17:43:35.651366 containerd[1909]: 2025-09-12 17:43:35.608 [INFO][5206] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="68fcad113fa3d9497a22b45a61b7be8146174426862d7a3f7222266bdb8e8068" Namespace="calico-system" Pod="calico-kube-controllers-55b4977774-rqh99" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--kube--controllers--55b4977774--rqh99-eth0"
Sep 12 17:43:35.651366 containerd[1909]: 2025-09-12 17:43:35.610 [INFO][5206] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="68fcad113fa3d9497a22b45a61b7be8146174426862d7a3f7222266bdb8e8068" Namespace="calico-system" Pod="calico-kube-controllers-55b4977774-rqh99" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--kube--controllers--55b4977774--rqh99-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--53-k8s-calico--kube--controllers--55b4977774--rqh99-eth0", GenerateName:"calico-kube-controllers-55b4977774-", Namespace:"calico-system", SelfLink:"", UID:"c4afc88f-9627-4122-bbaf-e9f08ff5e1e2", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 43, 9, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55b4977774", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-53", ContainerID:"68fcad113fa3d9497a22b45a61b7be8146174426862d7a3f7222266bdb8e8068", Pod:"calico-kube-controllers-55b4977774-rqh99", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.102.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali50f07b9a552", MAC:"02:f8:c0:2f:b3:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:43:35.651366 containerd[1909]: 2025-09-12 17:43:35.630 [INFO][5206] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="68fcad113fa3d9497a22b45a61b7be8146174426862d7a3f7222266bdb8e8068" Namespace="calico-system" Pod="calico-kube-controllers-55b4977774-rqh99" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--kube--controllers--55b4977774--rqh99-eth0"
Sep 12 17:43:35.656385 systemd-networkd[1813]: cali3b2fe3b4e85: Gained IPv6LL
Sep 12 17:43:35.734944 containerd[1909]: time="2025-09-12T17:43:35.733009253Z" level=info msg="connecting to shim 68fcad113fa3d9497a22b45a61b7be8146174426862d7a3f7222266bdb8e8068" address="unix:///run/containerd/s/9300199ea7b094666a0763ec2586f94208999362a01489eea02eabd11c557aed" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:43:35.741478 kubelet[3325]: I0912 17:43:35.741354 3325 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 17:43:35.870464 systemd[1]: Started cri-containerd-68fcad113fa3d9497a22b45a61b7be8146174426862d7a3f7222266bdb8e8068.scope - libcontainer container 68fcad113fa3d9497a22b45a61b7be8146174426862d7a3f7222266bdb8e8068.
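The "connecting to shim" line shows containerd reaching the new sandbox's shim over a per-container unix socket, speaking ttrpc on top of it. A tiny sketch of just the transport step, using only the standard library; the real client layers containerd's ttrpc package over this connection, and the socket path below is copied from the log, so it only exists on that host.

```go
// Sketch: the transport step behind "connecting to shim ... protocol=ttrpc".
// Only the unix-socket dial is shown; ttrpc framing is out of scope here.
package main

import (
	"fmt"
	"net"
)

func main() {
	const sock = "/run/containerd/s/9300199ea7b094666a0763ec2586f94208999362a01489eea02eabd11c557aed"
	conn, err := net.Dial("unix", sock)
	if err != nil {
		fmt.Println("dial failed (expected anywhere but that host):", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to shim socket")
}
```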
Sep 12 17:43:35.945797 kubelet[3325]: I0912 17:43:35.937980 3325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-88rqr" podStartSLOduration=42.937941981 podStartE2EDuration="42.937941981s" podCreationTimestamp="2025-09-12 17:42:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:43:35.932642947 +0000 UTC m=+47.770835268" watchObservedRunningTime="2025-09-12 17:43:35.937941981 +0000 UTC m=+47.776134299"
Sep 12 17:43:35.976242 systemd-networkd[1813]: cali4a83b8cb087: Gained IPv6LL
Sep 12 17:43:36.061295 containerd[1909]: time="2025-09-12T17:43:36.061202218Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:43:36.064703 containerd[1909]: time="2025-09-12T17:43:36.064659960Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291"
Sep 12 17:43:36.066494 containerd[1909]: time="2025-09-12T17:43:36.066458483Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:43:36.070338 containerd[1909]: time="2025-09-12T17:43:36.070289592Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:43:36.073230 containerd[1909]: time="2025-09-12T17:43:36.073150287Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 2.045072463s"
Sep 12 17:43:36.073230 containerd[1909]: time="2025-09-12T17:43:36.073214028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\""
Sep 12 17:43:36.086501 containerd[1909]: time="2025-09-12T17:43:36.086211254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 12 17:43:36.094948 containerd[1909]: time="2025-09-12T17:43:36.094161533Z" level=info msg="CreateContainer within sandbox \"01ef8cf6d84df6763de3598b3810c8e486e14c7e4f7229b1577b64166e8740b6\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Sep 12 17:43:36.129680 containerd[1909]: time="2025-09-12T17:43:36.126001262Z" level=info msg="Container bf7d62eea4eadeca5414b267162706b331a7dd7db6353b5c811b29fc35a61e57: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:43:36.151573 containerd[1909]: time="2025-09-12T17:43:36.151420838Z" level=info msg="CreateContainer within sandbox \"01ef8cf6d84df6763de3598b3810c8e486e14c7e4f7229b1577b64166e8740b6\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"bf7d62eea4eadeca5414b267162706b331a7dd7db6353b5c811b29fc35a61e57\""
Sep 12 17:43:36.153964 containerd[1909]: time="2025-09-12T17:43:36.153908991Z" level=info msg="StartContainer for \"bf7d62eea4eadeca5414b267162706b331a7dd7db6353b5c811b29fc35a61e57\""
Sep 12 17:43:36.155854 containerd[1909]: time="2025-09-12T17:43:36.155816484Z" level=info msg="connecting to shim bf7d62eea4eadeca5414b267162706b331a7dd7db6353b5c811b29fc35a61e57" address="unix:///run/containerd/s/0a5ec839e23672f5aea9a62e1bc06bb0e95b112c890ccaa91d76f67c9d84099a" protocol=ttrpc version=3
Sep 12 17:43:36.203063 containerd[1909]: time="2025-09-12T17:43:36.201954741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55b4977774-rqh99,Uid:c4afc88f-9627-4122-bbaf-e9f08ff5e1e2,Namespace:calico-system,Attempt:0,} returns sandbox id \"68fcad113fa3d9497a22b45a61b7be8146174426862d7a3f7222266bdb8e8068\""
Sep 12 17:43:36.225798 systemd[1]: Started cri-containerd-bf7d62eea4eadeca5414b267162706b331a7dd7db6353b5c811b29fc35a61e57.scope - libcontainer container bf7d62eea4eadeca5414b267162706b331a7dd7db6353b5c811b29fc35a61e57.
Sep 12 17:43:36.338403 containerd[1909]: time="2025-09-12T17:43:36.337605435Z" level=info msg="TaskExit event in podsandbox handler container_id:\"92f49f703cd3ae55bdf0834b2488d9c6a8aeca8a5939a5da6c64280444efd3d0\" id:\"6a4efba0b245a1eacd9b4117688c6adc6d645fabb69b406eb612a40d93e25168\" pid:5293 exited_at:{seconds:1757699016 nanos:333730088}"
Sep 12 17:43:36.343040 containerd[1909]: time="2025-09-12T17:43:36.342987353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-mp5hl,Uid:b6b62184-04f0-478f-9588-1262565a1301,Namespace:calico-system,Attempt:0,}"
Sep 12 17:43:36.343659 containerd[1909]: time="2025-09-12T17:43:36.343628180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lq2tr,Uid:97abfd70-fe53-4bd7-8261-3ab34278012c,Namespace:kube-system,Attempt:0,}"
Sep 12 17:43:36.344370 containerd[1909]: time="2025-09-12T17:43:36.344336333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77fc447859-l6lrq,Uid:14280669-80b2-47e3-8948-1a09fa3ad5a8,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 17:43:36.454733 containerd[1909]: time="2025-09-12T17:43:36.454122552Z" level=info msg="StartContainer for \"bf7d62eea4eadeca5414b267162706b331a7dd7db6353b5c811b29fc35a61e57\" returns successfully"
Sep 12 17:43:36.551126 systemd-networkd[1813]: calid62cd2801a0: Gained IPv6LL
Sep 12 17:43:36.813157 systemd-networkd[1813]: calibad42879e39: Link UP
Sep 12 17:43:36.814388 systemd-networkd[1813]: calibad42879e39: Gained carrier
Sep 12 17:43:36.864563 containerd[1909]: 2025-09-12 17:43:36.523 [INFO][5337] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--53-k8s-coredns--668d6bf9bc--lq2tr-eth0 coredns-668d6bf9bc- kube-system 97abfd70-fe53-4bd7-8261-3ab34278012c 850 0 2025-09-12 17:42:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-19-53 coredns-668d6bf9bc-lq2tr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibad42879e39 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="67647acedd418e0af284321576aa022c65a8dd81302844e32ffcaf43933f9635" Namespace="kube-system" Pod="coredns-668d6bf9bc-lq2tr" WorkloadEndpoint="ip--172--31--19--53-k8s-coredns--668d6bf9bc--lq2tr-"
Sep 12 17:43:36.864563 containerd[1909]: 2025-09-12 17:43:36.526 [INFO][5337] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="67647acedd418e0af284321576aa022c65a8dd81302844e32ffcaf43933f9635" Namespace="kube-system" Pod="coredns-668d6bf9bc-lq2tr" WorkloadEndpoint="ip--172--31--19--53-k8s-coredns--668d6bf9bc--lq2tr-eth0"
Sep 12 17:43:36.864563 containerd[1909]: 2025-09-12 17:43:36.685 [INFO][5400] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="67647acedd418e0af284321576aa022c65a8dd81302844e32ffcaf43933f9635" HandleID="k8s-pod-network.67647acedd418e0af284321576aa022c65a8dd81302844e32ffcaf43933f9635" Workload="ip--172--31--19--53-k8s-coredns--668d6bf9bc--lq2tr-eth0"
Sep 12 17:43:36.864563 containerd[1909]: 2025-09-12 17:43:36.686 [INFO][5400] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="67647acedd418e0af284321576aa022c65a8dd81302844e32ffcaf43933f9635" HandleID="k8s-pod-network.67647acedd418e0af284321576aa022c65a8dd81302844e32ffcaf43933f9635" Workload="ip--172--31--19--53-k8s-coredns--668d6bf9bc--lq2tr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fab0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-19-53", "pod":"coredns-668d6bf9bc-lq2tr", "timestamp":"2025-09-12 17:43:36.685598285 +0000 UTC"}, Hostname:"ip-172-31-19-53", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 17:43:36.864563 containerd[1909]: 2025-09-12 17:43:36.686 [INFO][5400] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:43:36.864563 containerd[1909]: 2025-09-12 17:43:36.686 [INFO][5400] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:43:36.864563 containerd[1909]: 2025-09-12 17:43:36.686 [INFO][5400] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-53'
Sep 12 17:43:36.864563 containerd[1909]: 2025-09-12 17:43:36.709 [INFO][5400] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.67647acedd418e0af284321576aa022c65a8dd81302844e32ffcaf43933f9635" host="ip-172-31-19-53"
Sep 12 17:43:36.864563 containerd[1909]: 2025-09-12 17:43:36.720 [INFO][5400] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-53"
Sep 12 17:43:36.864563 containerd[1909]: 2025-09-12 17:43:36.733 [INFO][5400] ipam/ipam.go 511: Trying affinity for 192.168.102.128/26 host="ip-172-31-19-53"
Sep 12 17:43:36.864563 containerd[1909]: 2025-09-12 17:43:36.740 [INFO][5400] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.128/26 host="ip-172-31-19-53"
Sep 12 17:43:36.864563 containerd[1909]: 2025-09-12 17:43:36.750 [INFO][5400] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.128/26 host="ip-172-31-19-53"
Sep 12 17:43:36.864563 containerd[1909]: 2025-09-12 17:43:36.751 [INFO][5400] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.102.128/26 handle="k8s-pod-network.67647acedd418e0af284321576aa022c65a8dd81302844e32ffcaf43933f9635" host="ip-172-31-19-53"
Sep 12 17:43:36.864563 containerd[1909]: 2025-09-12 17:43:36.757 [INFO][5400] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.67647acedd418e0af284321576aa022c65a8dd81302844e32ffcaf43933f9635
Sep 12 17:43:36.864563 containerd[1909]: 2025-09-12 17:43:36.773 [INFO][5400] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.102.128/26 handle="k8s-pod-network.67647acedd418e0af284321576aa022c65a8dd81302844e32ffcaf43933f9635" host="ip-172-31-19-53"
Sep 12 17:43:36.864563 containerd[1909]: 2025-09-12 17:43:36.800 [INFO][5400] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.102.134/26] block=192.168.102.128/26 handle="k8s-pod-network.67647acedd418e0af284321576aa022c65a8dd81302844e32ffcaf43933f9635" host="ip-172-31-19-53"
Sep 12 17:43:36.864563 containerd[1909]: 2025-09-12 17:43:36.801 [INFO][5400] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.134/26] handle="k8s-pod-network.67647acedd418e0af284321576aa022c65a8dd81302844e32ffcaf43933f9635" host="ip-172-31-19-53"
Sep 12 17:43:36.864563 containerd[1909]: 2025-09-12 17:43:36.801 [INFO][5400] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:43:36.864563 containerd[1909]: 2025-09-12 17:43:36.801 [INFO][5400] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.102.134/26] IPv6=[] ContainerID="67647acedd418e0af284321576aa022c65a8dd81302844e32ffcaf43933f9635" HandleID="k8s-pod-network.67647acedd418e0af284321576aa022c65a8dd81302844e32ffcaf43933f9635" Workload="ip--172--31--19--53-k8s-coredns--668d6bf9bc--lq2tr-eth0"
Sep 12 17:43:36.865819 containerd[1909]: 2025-09-12 17:43:36.805 [INFO][5337] cni-plugin/k8s.go 418: Populated endpoint ContainerID="67647acedd418e0af284321576aa022c65a8dd81302844e32ffcaf43933f9635" Namespace="kube-system" Pod="coredns-668d6bf9bc-lq2tr" WorkloadEndpoint="ip--172--31--19--53-k8s-coredns--668d6bf9bc--lq2tr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--53-k8s-coredns--668d6bf9bc--lq2tr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"97abfd70-fe53-4bd7-8261-3ab34278012c", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 53, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-53", ContainerID:"", Pod:"coredns-668d6bf9bc-lq2tr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.102.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibad42879e39", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:43:36.865819 containerd[1909]: 2025-09-12 17:43:36.805 [INFO][5337] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.134/32] ContainerID="67647acedd418e0af284321576aa022c65a8dd81302844e32ffcaf43933f9635" Namespace="kube-system" Pod="coredns-668d6bf9bc-lq2tr" WorkloadEndpoint="ip--172--31--19--53-k8s-coredns--668d6bf9bc--lq2tr-eth0"
Sep 12 17:43:36.865819 containerd[1909]: 2025-09-12 17:43:36.805 [INFO][5337] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibad42879e39 ContainerID="67647acedd418e0af284321576aa022c65a8dd81302844e32ffcaf43933f9635" Namespace="kube-system" Pod="coredns-668d6bf9bc-lq2tr" WorkloadEndpoint="ip--172--31--19--53-k8s-coredns--668d6bf9bc--lq2tr-eth0"
Sep 12 17:43:36.865819 containerd[1909]: 2025-09-12 17:43:36.812 [INFO][5337] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="67647acedd418e0af284321576aa022c65a8dd81302844e32ffcaf43933f9635" Namespace="kube-system" Pod="coredns-668d6bf9bc-lq2tr" WorkloadEndpoint="ip--172--31--19--53-k8s-coredns--668d6bf9bc--lq2tr-eth0"
Sep 12 17:43:36.865819 containerd[1909]: 2025-09-12 17:43:36.815 [INFO][5337] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="67647acedd418e0af284321576aa022c65a8dd81302844e32ffcaf43933f9635" Namespace="kube-system" Pod="coredns-668d6bf9bc-lq2tr" WorkloadEndpoint="ip--172--31--19--53-k8s-coredns--668d6bf9bc--lq2tr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--53-k8s-coredns--668d6bf9bc--lq2tr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"97abfd70-fe53-4bd7-8261-3ab34278012c", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 53, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-53", ContainerID:"67647acedd418e0af284321576aa022c65a8dd81302844e32ffcaf43933f9635", Pod:"coredns-668d6bf9bc-lq2tr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.102.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibad42879e39", MAC:"ce:9e:98:b4:d8:58", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:43:36.865819 containerd[1909]: 2025-09-12 17:43:36.850 [INFO][5337] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="67647acedd418e0af284321576aa022c65a8dd81302844e32ffcaf43933f9635" Namespace="kube-system" Pod="coredns-668d6bf9bc-lq2tr" WorkloadEndpoint="ip--172--31--19--53-k8s-coredns--668d6bf9bc--lq2tr-eth0"
Sep 12 17:43:36.936146 containerd[1909]: time="2025-09-12T17:43:36.936082592Z" level=info msg="connecting to shim 67647acedd418e0af284321576aa022c65a8dd81302844e32ffcaf43933f9635" address="unix:///run/containerd/s/a7c9668f7a94fcad2ce2fbea0b9617aaffbb235f178fa45dbbc6a0296cf7f1a0" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:43:36.997099 systemd-networkd[1813]: cali60a75a350a8: Link UP
Sep 12 17:43:36.999561 systemd-networkd[1813]: cali60a75a350a8: Gained carrier
Sep 12 17:43:37.033168 systemd[1]: Started cri-containerd-67647acedd418e0af284321576aa022c65a8dd81302844e32ffcaf43933f9635.scope - libcontainer container 67647acedd418e0af284321576aa022c65a8dd81302844e32ffcaf43933f9635.
Sep 12 17:43:37.051315 containerd[1909]: 2025-09-12 17:43:36.669 [INFO][5355] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--53-k8s-calico--apiserver--77fc447859--l6lrq-eth0 calico-apiserver-77fc447859- calico-apiserver 14280669-80b2-47e3-8948-1a09fa3ad5a8 862 0 2025-09-12 17:43:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77fc447859 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-53 calico-apiserver-77fc447859-l6lrq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali60a75a350a8 [] [] }} ContainerID="8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" Namespace="calico-apiserver" Pod="calico-apiserver-77fc447859-l6lrq" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--l6lrq-"
Sep 12 17:43:37.051315 containerd[1909]: 2025-09-12 17:43:36.669 [INFO][5355] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" Namespace="calico-apiserver" Pod="calico-apiserver-77fc447859-l6lrq" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--l6lrq-eth0"
Sep 12 17:43:37.051315 containerd[1909]: 2025-09-12 17:43:36.763 [INFO][5416] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" HandleID="k8s-pod-network.8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" Workload="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--l6lrq-eth0"
Sep 12 17:43:37.051315 containerd[1909]: 2025-09-12 17:43:36.764 [INFO][5416] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" HandleID="k8s-pod-network.8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" Workload="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--l6lrq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00025d850), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-19-53", "pod":"calico-apiserver-77fc447859-l6lrq", "timestamp":"2025-09-12 17:43:36.763593836 +0000 UTC"}, Hostname:"ip-172-31-19-53", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 17:43:37.051315 containerd[1909]: 2025-09-12 17:43:36.765 [INFO][5416] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:43:37.051315 containerd[1909]: 2025-09-12 17:43:36.803 [INFO][5416] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:43:37.051315 containerd[1909]: 2025-09-12 17:43:36.803 [INFO][5416] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-53'
Sep 12 17:43:37.051315 containerd[1909]: 2025-09-12 17:43:36.860 [INFO][5416] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" host="ip-172-31-19-53"
Sep 12 17:43:37.051315 containerd[1909]: 2025-09-12 17:43:36.884 [INFO][5416] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-53"
Sep 12 17:43:37.051315 containerd[1909]: 2025-09-12 17:43:36.900 [INFO][5416] ipam/ipam.go 511: Trying affinity for 192.168.102.128/26 host="ip-172-31-19-53"
Sep 12 17:43:37.051315 containerd[1909]: 2025-09-12 17:43:36.905 [INFO][5416] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.128/26 host="ip-172-31-19-53"
Sep 12 17:43:37.051315 containerd[1909]: 2025-09-12 17:43:36.912 [INFO][5416] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.128/26 host="ip-172-31-19-53"
Sep 12 17:43:37.051315 containerd[1909]: 2025-09-12 17:43:36.913 [INFO][5416] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.102.128/26 handle="k8s-pod-network.8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" host="ip-172-31-19-53"
Sep 12 17:43:37.051315 containerd[1909]: 2025-09-12 17:43:36.917 [INFO][5416] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb
Sep 12 17:43:37.051315 containerd[1909]: 2025-09-12 17:43:36.928 [INFO][5416] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.102.128/26 handle="k8s-pod-network.8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" host="ip-172-31-19-53"
Sep 12 17:43:37.051315 containerd[1909]: 2025-09-12 17:43:36.955 [INFO][5416] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.102.135/26] block=192.168.102.128/26 handle="k8s-pod-network.8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" host="ip-172-31-19-53"
Sep 12 17:43:37.051315 containerd[1909]: 2025-09-12 17:43:36.956 [INFO][5416] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.135/26] handle="k8s-pod-network.8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" host="ip-172-31-19-53"
Sep 12 17:43:37.051315 containerd[1909]: 2025-09-12 17:43:36.957 [INFO][5416] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
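By this point three pods have received consecutive addresses (.133, .134, .135) from the same affine block: 192.168.102.128/26 spans .128 through .191, so the node has 64 addresses before it must claim another block. A quick containment check with Go's net/netip, as a sketch:

```go
// Sketch: the assigned pod IPs all fall inside the node's affine /26 block.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.102.128/26") // .128-.191, 64 addresses
	for _, s := range []string{
		"192.168.102.133", // calico-kube-controllers-55b4977774-rqh99
		"192.168.102.134", // coredns-668d6bf9bc-lq2tr
		"192.168.102.135", // calico-apiserver-77fc447859-l6lrq
	} {
		fmt.Println(s, block.Contains(netip.MustParseAddr(s))) // all true
	}
}
```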
Sep 12 17:43:37.051315 containerd[1909]: 2025-09-12 17:43:36.959 [INFO][5416] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.102.135/26] IPv6=[] ContainerID="8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" HandleID="k8s-pod-network.8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" Workload="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--l6lrq-eth0"
Sep 12 17:43:37.053098 containerd[1909]: 2025-09-12 17:43:36.975 [INFO][5355] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" Namespace="calico-apiserver" Pod="calico-apiserver-77fc447859-l6lrq" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--l6lrq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--53-k8s-calico--apiserver--77fc447859--l6lrq-eth0", GenerateName:"calico-apiserver-77fc447859-", Namespace:"calico-apiserver", SelfLink:"", UID:"14280669-80b2-47e3-8948-1a09fa3ad5a8", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 43, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77fc447859", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-53", ContainerID:"", Pod:"calico-apiserver-77fc447859-l6lrq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.102.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali60a75a350a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:43:37.053098 containerd[1909]: 2025-09-12 17:43:36.977 [INFO][5355] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.135/32] ContainerID="8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" Namespace="calico-apiserver" Pod="calico-apiserver-77fc447859-l6lrq" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--l6lrq-eth0"
Sep 12 17:43:37.053098 containerd[1909]: 2025-09-12 17:43:36.977 [INFO][5355] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60a75a350a8 ContainerID="8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" Namespace="calico-apiserver" Pod="calico-apiserver-77fc447859-l6lrq" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--l6lrq-eth0"
Sep 12 17:43:37.053098 containerd[1909]: 2025-09-12 17:43:37.002 [INFO][5355] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" Namespace="calico-apiserver" Pod="calico-apiserver-77fc447859-l6lrq" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--l6lrq-eth0"
Sep 12 17:43:37.053098 containerd[1909]: 2025-09-12 17:43:37.006 [INFO][5355] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" Namespace="calico-apiserver" Pod="calico-apiserver-77fc447859-l6lrq" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--l6lrq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--53-k8s-calico--apiserver--77fc447859--l6lrq-eth0", GenerateName:"calico-apiserver-77fc447859-", Namespace:"calico-apiserver", SelfLink:"", UID:"14280669-80b2-47e3-8948-1a09fa3ad5a8", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 43, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77fc447859", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-53", ContainerID:"8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb", Pod:"calico-apiserver-77fc447859-l6lrq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.102.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali60a75a350a8", MAC:"06:bd:6c:18:79:6f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:43:37.053098 containerd[1909]: 2025-09-12 17:43:37.039 [INFO][5355] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" Namespace="calico-apiserver" Pod="calico-apiserver-77fc447859-l6lrq" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--l6lrq-eth0"
Sep 12 17:43:37.109490 containerd[1909]: time="2025-09-12T17:43:37.109122202Z" level=info msg="connecting to shim 8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" address="unix:///run/containerd/s/8b4e5ecc940cfccd377fb164120c74ceed8a058242f1ce3440b707686cc6a589" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:43:37.128261 systemd-networkd[1813]: calieb6e530864e: Link UP
Sep 12 17:43:37.130153 systemd-networkd[1813]: calieb6e530864e: Gained carrier
Sep 12 17:43:37.176335 containerd[1909]: 2025-09-12 17:43:36.682 [INFO][5334] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--53-k8s-goldmane--54d579b49d--mp5hl-eth0 goldmane-54d579b49d- calico-system b6b62184-04f0-478f-9588-1262565a1301 858 0 2025-09-12 17:43:08 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-19-53 goldmane-54d579b49d-mp5hl eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calieb6e530864e [] [] }} ContainerID="2c3080d1f67fdd8e0aef30db29a833a29d4fcc3ade41b4dcbd45eb68d71a861c" Namespace="calico-system" Pod="goldmane-54d579b49d-mp5hl" WorkloadEndpoint="ip--172--31--19--53-k8s-goldmane--54d579b49d--mp5hl-"
Sep 12 17:43:37.176335 containerd[1909]: 2025-09-12 17:43:36.683 [INFO][5334] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2c3080d1f67fdd8e0aef30db29a833a29d4fcc3ade41b4dcbd45eb68d71a861c" Namespace="calico-system" Pod="goldmane-54d579b49d-mp5hl" WorkloadEndpoint="ip--172--31--19--53-k8s-goldmane--54d579b49d--mp5hl-eth0"
Sep 12 17:43:37.176335 containerd[1909]: 2025-09-12 17:43:36.865 [INFO][5422] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2c3080d1f67fdd8e0aef30db29a833a29d4fcc3ade41b4dcbd45eb68d71a861c" HandleID="k8s-pod-network.2c3080d1f67fdd8e0aef30db29a833a29d4fcc3ade41b4dcbd45eb68d71a861c" Workload="ip--172--31--19--53-k8s-goldmane--54d579b49d--mp5hl-eth0"
Sep 12 17:43:37.176335 containerd[1909]: 2025-09-12 17:43:36.865 [INFO][5422] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2c3080d1f67fdd8e0aef30db29a833a29d4fcc3ade41b4dcbd45eb68d71a861c" HandleID="k8s-pod-network.2c3080d1f67fdd8e0aef30db29a833a29d4fcc3ade41b4dcbd45eb68d71a861c" Workload="ip--172--31--19--53-k8s-goldmane--54d579b49d--mp5hl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004eca0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-53", "pod":"goldmane-54d579b49d-mp5hl", "timestamp":"2025-09-12 17:43:36.865038945 +0000 UTC"}, Hostname:"ip-172-31-19-53", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 17:43:37.176335 containerd[1909]: 2025-09-12 17:43:36.865 [INFO][5422] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:43:37.176335 containerd[1909]: 2025-09-12 17:43:36.957 [INFO][5422] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:43:37.176335 containerd[1909]: 2025-09-12 17:43:36.959 [INFO][5422] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-53'
Sep 12 17:43:37.176335 containerd[1909]: 2025-09-12 17:43:36.995 [INFO][5422] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2c3080d1f67fdd8e0aef30db29a833a29d4fcc3ade41b4dcbd45eb68d71a861c" host="ip-172-31-19-53"
Sep 12 17:43:37.176335 containerd[1909]: 2025-09-12 17:43:37.016 [INFO][5422] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-53"
Sep 12 17:43:37.176335 containerd[1909]: 2025-09-12 17:43:37.047 [INFO][5422] ipam/ipam.go 511: Trying affinity for 192.168.102.128/26 host="ip-172-31-19-53"
Sep 12 17:43:37.176335 containerd[1909]: 2025-09-12 17:43:37.053 [INFO][5422] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.128/26 host="ip-172-31-19-53"
Sep 12 17:43:37.176335 containerd[1909]: 2025-09-12 17:43:37.058 [INFO][5422] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.128/26 host="ip-172-31-19-53"
Sep 12 17:43:37.176335 containerd[1909]: 2025-09-12 17:43:37.058 [INFO][5422] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.102.128/26 handle="k8s-pod-network.2c3080d1f67fdd8e0aef30db29a833a29d4fcc3ade41b4dcbd45eb68d71a861c" host="ip-172-31-19-53"
Sep 12 17:43:37.176335 containerd[1909]: 2025-09-12 17:43:37.063 [INFO][5422] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2c3080d1f67fdd8e0aef30db29a833a29d4fcc3ade41b4dcbd45eb68d71a861c
Sep 12 17:43:37.176335 containerd[1909]: 2025-09-12 17:43:37.084 [INFO][5422] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.102.128/26 handle="k8s-pod-network.2c3080d1f67fdd8e0aef30db29a833a29d4fcc3ade41b4dcbd45eb68d71a861c" host="ip-172-31-19-53"
Sep 12 17:43:37.176335 containerd[1909]: 2025-09-12 17:43:37.101 [INFO][5422] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.102.136/26] block=192.168.102.128/26 handle="k8s-pod-network.2c3080d1f67fdd8e0aef30db29a833a29d4fcc3ade41b4dcbd45eb68d71a861c" host="ip-172-31-19-53"
Sep 12 17:43:37.176335 containerd[1909]: 2025-09-12 17:43:37.102 [INFO][5422] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.136/26] handle="k8s-pod-network.2c3080d1f67fdd8e0aef30db29a833a29d4fcc3ade41b4dcbd45eb68d71a861c" host="ip-172-31-19-53"
Sep 12 17:43:37.176335 containerd[1909]: 2025-09-12 17:43:37.102 [INFO][5422] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:43:37.176335 containerd[1909]: 2025-09-12 17:43:37.102 [INFO][5422] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.102.136/26] IPv6=[] ContainerID="2c3080d1f67fdd8e0aef30db29a833a29d4fcc3ade41b4dcbd45eb68d71a861c" HandleID="k8s-pod-network.2c3080d1f67fdd8e0aef30db29a833a29d4fcc3ade41b4dcbd45eb68d71a861c" Workload="ip--172--31--19--53-k8s-goldmane--54d579b49d--mp5hl-eth0"
Sep 12 17:43:37.178400 containerd[1909]: 2025-09-12 17:43:37.115 [INFO][5334] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2c3080d1f67fdd8e0aef30db29a833a29d4fcc3ade41b4dcbd45eb68d71a861c" Namespace="calico-system" Pod="goldmane-54d579b49d-mp5hl" WorkloadEndpoint="ip--172--31--19--53-k8s-goldmane--54d579b49d--mp5hl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--53-k8s-goldmane--54d579b49d--mp5hl-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"b6b62184-04f0-478f-9588-1262565a1301", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 43, 8, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-53", ContainerID:"", Pod:"goldmane-54d579b49d-mp5hl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.102.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calieb6e530864e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:43:37.178400 containerd[1909]: 2025-09-12 17:43:37.115 [INFO][5334] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.136/32] ContainerID="2c3080d1f67fdd8e0aef30db29a833a29d4fcc3ade41b4dcbd45eb68d71a861c" Namespace="calico-system" Pod="goldmane-54d579b49d-mp5hl" WorkloadEndpoint="ip--172--31--19--53-k8s-goldmane--54d579b49d--mp5hl-eth0"
Sep 12 17:43:37.178400 containerd[1909]: 2025-09-12 17:43:37.117 [INFO][5334] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieb6e530864e ContainerID="2c3080d1f67fdd8e0aef30db29a833a29d4fcc3ade41b4dcbd45eb68d71a861c" Namespace="calico-system" Pod="goldmane-54d579b49d-mp5hl" WorkloadEndpoint="ip--172--31--19--53-k8s-goldmane--54d579b49d--mp5hl-eth0"
Sep 12 17:43:37.178400 containerd[1909]: 2025-09-12 17:43:37.134 [INFO][5334] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2c3080d1f67fdd8e0aef30db29a833a29d4fcc3ade41b4dcbd45eb68d71a861c" Namespace="calico-system" Pod="goldmane-54d579b49d-mp5hl" WorkloadEndpoint="ip--172--31--19--53-k8s-goldmane--54d579b49d--mp5hl-eth0"
Sep 12 17:43:37.178400 containerd[1909]: 2025-09-12 17:43:37.137 [INFO][5334] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2c3080d1f67fdd8e0aef30db29a833a29d4fcc3ade41b4dcbd45eb68d71a861c" Namespace="calico-system" Pod="goldmane-54d579b49d-mp5hl" WorkloadEndpoint="ip--172--31--19--53-k8s-goldmane--54d579b49d--mp5hl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--53-k8s-goldmane--54d579b49d--mp5hl-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"b6b62184-04f0-478f-9588-1262565a1301", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 43, 8, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-53", ContainerID:"2c3080d1f67fdd8e0aef30db29a833a29d4fcc3ade41b4dcbd45eb68d71a861c", Pod:"goldmane-54d579b49d-mp5hl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.102.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calieb6e530864e", MAC:"1a:67:c8:f2:19:6b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:43:37.178400 containerd[1909]: 2025-09-12 17:43:37.164 [INFO][5334] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2c3080d1f67fdd8e0aef30db29a833a29d4fcc3ade41b4dcbd45eb68d71a861c" Namespace="calico-system" Pod="goldmane-54d579b49d-mp5hl" WorkloadEndpoint="ip--172--31--19--53-k8s-goldmane--54d579b49d--mp5hl-eth0"
Sep 12 17:43:37.198748 systemd[1]: Started cri-containerd-8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb.scope - libcontainer container 8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb.
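Each "Populated endpoint" / "Added Mac, interface name, and active container ID" pair above dumps the same v3.WorkloadEndpoint before and after the veth is wired up: k8s.go 418 logs it with an empty ContainerID and MAC, and k8s.go 446 logs it again with both filled in. Below is a trimmed, hypothetical mirror of only the fields that matter in these dumps (the real type is Calico's v3.WorkloadEndpoint in libcalico-go), populated with the calico-apiserver values from the log:

```go
// Reduced sketch of the WorkloadEndpoint payload dumped in the log.
// Hypothetical trimmed struct; not Calico's actual type definition.
package main

import "fmt"

type workloadEndpoint struct {
	Node          string
	Pod           string
	ContainerID   string   // empty at k8s.go 418, filled in by k8s.go 446
	IPNetworks    []string // always a single /32 per pod here
	InterfaceName string   // host-side veth, "cali" + hash
	MAC           string   // empty until the veth exists
	Profiles      []string // namespace and service-account profiles
}

func main() {
	ep := workloadEndpoint{
		Node:          "ip-172-31-19-53",
		Pod:           "calico-apiserver-77fc447859-l6lrq",
		ContainerID:   "8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb",
		IPNetworks:    []string{"192.168.102.135/32"},
		InterfaceName: "cali60a75a350a8",
		MAC:           "06:bd:6c:18:79:6f",
		Profiles:      []string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"},
	}
	fmt.Printf("%+v\n", ep)
}
```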
Sep 12 17:43:37.215318 containerd[1909]: time="2025-09-12T17:43:37.215274145Z" level=info msg="TaskExit event in podsandbox handler container_id:\"92f49f703cd3ae55bdf0834b2488d9c6a8aeca8a5939a5da6c64280444efd3d0\" id:\"edd0938423c789470957247c6fd12a68e2947e97def73b0627738238e9b1de3a\" pid:5390 exited_at:{seconds:1757699017 nanos:213966772}"
Sep 12 17:43:37.217611 containerd[1909]: time="2025-09-12T17:43:37.217564570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lq2tr,Uid:97abfd70-fe53-4bd7-8261-3ab34278012c,Namespace:kube-system,Attempt:0,} returns sandbox id \"67647acedd418e0af284321576aa022c65a8dd81302844e32ffcaf43933f9635\""
Sep 12 17:43:37.224612 containerd[1909]: time="2025-09-12T17:43:37.224559983Z" level=info msg="CreateContainer within sandbox \"67647acedd418e0af284321576aa022c65a8dd81302844e32ffcaf43933f9635\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 12 17:43:37.238950 containerd[1909]: time="2025-09-12T17:43:37.238666237Z" level=info msg="Container 2b828472cf94fa62ec9074d5d2c5626ba9de7eb09f8f113ca70d82cafadd0084: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:43:37.241790 containerd[1909]: time="2025-09-12T17:43:37.241745247Z" level=info msg="connecting to shim 2c3080d1f67fdd8e0aef30db29a833a29d4fcc3ade41b4dcbd45eb68d71a861c" address="unix:///run/containerd/s/bf48507f76c4f57f9cdbb534142ebdef7d1d1df65f8a6a6509dfa156c86b7d11" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:43:37.251952 containerd[1909]: time="2025-09-12T17:43:37.251365150Z" level=info msg="CreateContainer within sandbox \"67647acedd418e0af284321576aa022c65a8dd81302844e32ffcaf43933f9635\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2b828472cf94fa62ec9074d5d2c5626ba9de7eb09f8f113ca70d82cafadd0084\""
Sep 12 17:43:37.254402 containerd[1909]: time="2025-09-12T17:43:37.254304832Z" level=info msg="StartContainer for \"2b828472cf94fa62ec9074d5d2c5626ba9de7eb09f8f113ca70d82cafadd0084\""
Sep 12 17:43:37.268257 containerd[1909]: time="2025-09-12T17:43:37.268207361Z" level=info msg="connecting to shim 2b828472cf94fa62ec9074d5d2c5626ba9de7eb09f8f113ca70d82cafadd0084" address="unix:///run/containerd/s/a7c9668f7a94fcad2ce2fbea0b9617aaffbb235f178fa45dbbc6a0296cf7f1a0" protocol=ttrpc version=3
Sep 12 17:43:37.322695 systemd[1]: Started cri-containerd-2b828472cf94fa62ec9074d5d2c5626ba9de7eb09f8f113ca70d82cafadd0084.scope - libcontainer container 2b828472cf94fa62ec9074d5d2c5626ba9de7eb09f8f113ca70d82cafadd0084.
Sep 12 17:43:37.327286 containerd[1909]: time="2025-09-12T17:43:37.327245801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghf2w,Uid:a5151dfa-6356-4409-97e2-1942a154fe88,Namespace:calico-system,Attempt:0,}"
Sep 12 17:43:37.331339 systemd[1]: Started cri-containerd-2c3080d1f67fdd8e0aef30db29a833a29d4fcc3ade41b4dcbd45eb68d71a861c.scope - libcontainer container 2c3080d1f67fdd8e0aef30db29a833a29d4fcc3ade41b4dcbd45eb68d71a861c.
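The coredns lines above show the CRI lifecycle that every pod in this log goes through: RunPodSandbox returns a sandbox id, CreateContainer within that sandbox returns a container id, StartContainer runs it, and systemd wraps the result in a cri-containerd-<id>.scope unit. A sketch of that call order against a hypothetical stand-in for the CRI RuntimeService interface (the real gRPC API takes PodSandboxConfig/ContainerConfig objects, not bare names):

```go
// Sketch of the CRI call order visible in the log:
// RunPodSandbox -> CreateContainer (within sandbox) -> StartContainer.
// runtimeService is a hypothetical stand-in, not the real CRI interface.
package main

import "fmt"

type runtimeService interface {
	RunPodSandbox(pod string) (string, error)
	CreateContainer(sandboxID, name string) (string, error)
	StartContainer(containerID string) error
}

type fakeRS struct{}

func (fakeRS) RunPodSandbox(pod string) (string, error) {
	return "sandbox-" + pod, nil // "RunPodSandbox ... returns sandbox id"
}
func (fakeRS) CreateContainer(sb, name string) (string, error) {
	return "ctr-" + name, nil // "CreateContainer within sandbox ... returns container id"
}
func (fakeRS) StartContainer(id string) error {
	fmt.Println("StartContainer for", id, "returns successfully")
	return nil
}

func startPod(rs runtimeService, pod, ctr string) error {
	sb, err := rs.RunPodSandbox(pod)
	if err != nil {
		return err
	}
	id, err := rs.CreateContainer(sb, ctr)
	if err != nil {
		return err
	}
	return rs.StartContainer(id)
}

func main() {
	_ = startPod(fakeRS{}, "coredns-668d6bf9bc-lq2tr", "coredns")
}
```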
Sep 12 17:43:37.490189 containerd[1909]: time="2025-09-12T17:43:37.489364792Z" level=info msg="StartContainer for \"2b828472cf94fa62ec9074d5d2c5626ba9de7eb09f8f113ca70d82cafadd0084\" returns successfully"
Sep 12 17:43:37.499054 containerd[1909]: time="2025-09-12T17:43:37.498989483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77fc447859-l6lrq,Uid:14280669-80b2-47e3-8948-1a09fa3ad5a8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb\""
Sep 12 17:43:37.511087 systemd-networkd[1813]: cali50f07b9a552: Gained IPv6LL
Sep 12 17:43:37.672384 systemd-networkd[1813]: cali639963c7542: Link UP
Sep 12 17:43:37.674517 systemd-networkd[1813]: cali639963c7542: Gained carrier
Sep 12 17:43:37.733291 containerd[1909]: 2025-09-12 17:43:37.490 [INFO][5589] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--53-k8s-csi--node--driver--ghf2w-eth0 csi-node-driver- calico-system a5151dfa-6356-4409-97e2-1942a154fe88 740 0 2025-09-12 17:43:09 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-19-53 csi-node-driver-ghf2w eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali639963c7542 [] [] }} ContainerID="1a25a66232992b9bd46f8fdc824523dc4e68fa6edbd4e607085053c24c38d7de" Namespace="calico-system" Pod="csi-node-driver-ghf2w" WorkloadEndpoint="ip--172--31--19--53-k8s-csi--node--driver--ghf2w-"
Sep 12 17:43:37.733291 containerd[1909]: 2025-09-12 17:43:37.490 [INFO][5589] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1a25a66232992b9bd46f8fdc824523dc4e68fa6edbd4e607085053c24c38d7de" Namespace="calico-system" Pod="csi-node-driver-ghf2w" WorkloadEndpoint="ip--172--31--19--53-k8s-csi--node--driver--ghf2w-eth0"
Sep 12 17:43:37.733291 containerd[1909]: 2025-09-12 17:43:37.582 [INFO][5634] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a25a66232992b9bd46f8fdc824523dc4e68fa6edbd4e607085053c24c38d7de" HandleID="k8s-pod-network.1a25a66232992b9bd46f8fdc824523dc4e68fa6edbd4e607085053c24c38d7de" Workload="ip--172--31--19--53-k8s-csi--node--driver--ghf2w-eth0"
Sep 12 17:43:37.733291 containerd[1909]: 2025-09-12 17:43:37.582 [INFO][5634] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1a25a66232992b9bd46f8fdc824523dc4e68fa6edbd4e607085053c24c38d7de" HandleID="k8s-pod-network.1a25a66232992b9bd46f8fdc824523dc4e68fa6edbd4e607085053c24c38d7de" Workload="ip--172--31--19--53-k8s-csi--node--driver--ghf2w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f990), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-53", "pod":"csi-node-driver-ghf2w", "timestamp":"2025-09-12 17:43:37.582138451 +0000 UTC"}, Hostname:"ip-172-31-19-53", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 17:43:37.733291 containerd[1909]: 2025-09-12 17:43:37.582 [INFO][5634] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:43:37.733291 containerd[1909]: 2025-09-12 17:43:37.583 [INFO][5634] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:43:37.733291 containerd[1909]: 2025-09-12 17:43:37.583 [INFO][5634] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-53'
Sep 12 17:43:37.733291 containerd[1909]: 2025-09-12 17:43:37.606 [INFO][5634] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1a25a66232992b9bd46f8fdc824523dc4e68fa6edbd4e607085053c24c38d7de" host="ip-172-31-19-53"
Sep 12 17:43:37.733291 containerd[1909]: 2025-09-12 17:43:37.616 [INFO][5634] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-53"
Sep 12 17:43:37.733291 containerd[1909]: 2025-09-12 17:43:37.623 [INFO][5634] ipam/ipam.go 511: Trying affinity for 192.168.102.128/26 host="ip-172-31-19-53"
Sep 12 17:43:37.733291 containerd[1909]: 2025-09-12 17:43:37.627 [INFO][5634] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.128/26 host="ip-172-31-19-53"
Sep 12 17:43:37.733291 containerd[1909]: 2025-09-12 17:43:37.632 [INFO][5634] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.128/26 host="ip-172-31-19-53"
Sep 12 17:43:37.733291 containerd[1909]: 2025-09-12 17:43:37.632 [INFO][5634] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.102.128/26 handle="k8s-pod-network.1a25a66232992b9bd46f8fdc824523dc4e68fa6edbd4e607085053c24c38d7de" host="ip-172-31-19-53"
Sep 12 17:43:37.733291 containerd[1909]: 2025-09-12 17:43:37.634 [INFO][5634] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1a25a66232992b9bd46f8fdc824523dc4e68fa6edbd4e607085053c24c38d7de
Sep 12 17:43:37.733291 containerd[1909]: 2025-09-12 17:43:37.642 [INFO][5634] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.102.128/26 handle="k8s-pod-network.1a25a66232992b9bd46f8fdc824523dc4e68fa6edbd4e607085053c24c38d7de" host="ip-172-31-19-53"
Sep 12 17:43:37.733291 containerd[1909]: 2025-09-12 17:43:37.657 [INFO][5634] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.102.137/26] block=192.168.102.128/26 handle="k8s-pod-network.1a25a66232992b9bd46f8fdc824523dc4e68fa6edbd4e607085053c24c38d7de" host="ip-172-31-19-53"
Sep 12 17:43:37.733291 containerd[1909]: 2025-09-12 17:43:37.658 [INFO][5634] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.137/26] handle="k8s-pod-network.1a25a66232992b9bd46f8fdc824523dc4e68fa6edbd4e607085053c24c38d7de" host="ip-172-31-19-53"
Sep 12 17:43:37.733291 containerd[1909]: 2025-09-12 17:43:37.658 [INFO][5634] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:43:37.733291 containerd[1909]: 2025-09-12 17:43:37.658 [INFO][5634] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.102.137/26] IPv6=[] ContainerID="1a25a66232992b9bd46f8fdc824523dc4e68fa6edbd4e607085053c24c38d7de" HandleID="k8s-pod-network.1a25a66232992b9bd46f8fdc824523dc4e68fa6edbd4e607085053c24c38d7de" Workload="ip--172--31--19--53-k8s-csi--node--driver--ghf2w-eth0" Sep 12 17:43:37.735792 containerd[1909]: 2025-09-12 17:43:37.661 [INFO][5589] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1a25a66232992b9bd46f8fdc824523dc4e68fa6edbd4e607085053c24c38d7de" Namespace="calico-system" Pod="csi-node-driver-ghf2w" WorkloadEndpoint="ip--172--31--19--53-k8s-csi--node--driver--ghf2w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--53-k8s-csi--node--driver--ghf2w-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a5151dfa-6356-4409-97e2-1942a154fe88", ResourceVersion:"740", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 43, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-53", ContainerID:"", Pod:"csi-node-driver-ghf2w", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.102.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali639963c7542", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:37.735792 containerd[1909]: 2025-09-12 17:43:37.661 [INFO][5589] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.137/32] ContainerID="1a25a66232992b9bd46f8fdc824523dc4e68fa6edbd4e607085053c24c38d7de" Namespace="calico-system" Pod="csi-node-driver-ghf2w" WorkloadEndpoint="ip--172--31--19--53-k8s-csi--node--driver--ghf2w-eth0" Sep 12 17:43:37.735792 containerd[1909]: 2025-09-12 17:43:37.662 [INFO][5589] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali639963c7542 ContainerID="1a25a66232992b9bd46f8fdc824523dc4e68fa6edbd4e607085053c24c38d7de" Namespace="calico-system" Pod="csi-node-driver-ghf2w" WorkloadEndpoint="ip--172--31--19--53-k8s-csi--node--driver--ghf2w-eth0" Sep 12 17:43:37.735792 containerd[1909]: 2025-09-12 17:43:37.692 [INFO][5589] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1a25a66232992b9bd46f8fdc824523dc4e68fa6edbd4e607085053c24c38d7de" Namespace="calico-system" Pod="csi-node-driver-ghf2w" WorkloadEndpoint="ip--172--31--19--53-k8s-csi--node--driver--ghf2w-eth0" Sep 12 17:43:37.735792 containerd[1909]: 2025-09-12 17:43:37.697 [INFO][5589] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1a25a66232992b9bd46f8fdc824523dc4e68fa6edbd4e607085053c24c38d7de" 
Namespace="calico-system" Pod="csi-node-driver-ghf2w" WorkloadEndpoint="ip--172--31--19--53-k8s-csi--node--driver--ghf2w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--53-k8s-csi--node--driver--ghf2w-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a5151dfa-6356-4409-97e2-1942a154fe88", ResourceVersion:"740", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 43, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-53", ContainerID:"1a25a66232992b9bd46f8fdc824523dc4e68fa6edbd4e607085053c24c38d7de", Pod:"csi-node-driver-ghf2w", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.102.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali639963c7542", MAC:"7a:53:f8:98:4a:64", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:37.735792 containerd[1909]: 2025-09-12 17:43:37.723 [INFO][5589] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1a25a66232992b9bd46f8fdc824523dc4e68fa6edbd4e607085053c24c38d7de" Namespace="calico-system" Pod="csi-node-driver-ghf2w" WorkloadEndpoint="ip--172--31--19--53-k8s-csi--node--driver--ghf2w-eth0" Sep 12 17:43:37.754015 containerd[1909]: time="2025-09-12T17:43:37.752734343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-mp5hl,Uid:b6b62184-04f0-478f-9588-1262565a1301,Namespace:calico-system,Attempt:0,} returns sandbox id \"2c3080d1f67fdd8e0aef30db29a833a29d4fcc3ade41b4dcbd45eb68d71a861c\"" Sep 12 17:43:37.815154 containerd[1909]: time="2025-09-12T17:43:37.815104218Z" level=info msg="connecting to shim 1a25a66232992b9bd46f8fdc824523dc4e68fa6edbd4e607085053c24c38d7de" address="unix:///run/containerd/s/5fe7be6f7a4e24ec59db3f9dbd049867452294dcb6bf9ff4487c9b2c1de36004" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:43:37.860038 kubelet[3325]: I0912 17:43:37.859891 3325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-lq2tr" podStartSLOduration=44.859861903 podStartE2EDuration="44.859861903s" podCreationTimestamp="2025-09-12 17:42:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:43:37.833304889 +0000 UTC m=+49.671497210" watchObservedRunningTime="2025-09-12 17:43:37.859861903 +0000 UTC m=+49.698054220" Sep 12 17:43:37.879150 systemd[1]: Started cri-containerd-1a25a66232992b9bd46f8fdc824523dc4e68fa6edbd4e607085053c24c38d7de.scope - libcontainer container 1a25a66232992b9bd46f8fdc824523dc4e68fa6edbd4e607085053c24c38d7de. 
Sep 12 17:43:38.083930 containerd[1909]: time="2025-09-12T17:43:38.083839220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghf2w,Uid:a5151dfa-6356-4409-97e2-1942a154fe88,Namespace:calico-system,Attempt:0,} returns sandbox id \"1a25a66232992b9bd46f8fdc824523dc4e68fa6edbd4e607085053c24c38d7de\"" Sep 12 17:43:38.630514 systemd[1]: Started sshd@9-172.31.19.53:22-139.178.68.195:49582.service - OpenSSH per-connection server daemon (139.178.68.195:49582). Sep 12 17:43:38.855788 systemd-networkd[1813]: calibad42879e39: Gained IPv6LL Sep 12 17:43:38.856193 systemd-networkd[1813]: cali60a75a350a8: Gained IPv6LL Sep 12 17:43:38.882265 sshd[5717]: Accepted publickey for core from 139.178.68.195 port 49582 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:43:38.888861 sshd-session[5717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:38.901834 systemd-logind[1861]: New session 10 of user core. Sep 12 17:43:38.907123 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 17:43:38.983527 systemd-networkd[1813]: calieb6e530864e: Gained IPv6LL Sep 12 17:43:39.111287 systemd-networkd[1813]: cali639963c7542: Gained IPv6LL Sep 12 17:43:40.050206 sshd[5721]: Connection closed by 139.178.68.195 port 49582 Sep 12 17:43:40.050874 sshd-session[5717]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:40.069533 systemd[1]: sshd@9-172.31.19.53:22-139.178.68.195:49582.service: Deactivated successfully. Sep 12 17:43:40.075381 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:43:40.079188 systemd-logind[1861]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:43:40.082800 systemd-logind[1861]: Removed session 10. Sep 12 17:43:40.557464 containerd[1909]: time="2025-09-12T17:43:40.557366633Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:40.558494 containerd[1909]: time="2025-09-12T17:43:40.558385247Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 12 17:43:40.560146 containerd[1909]: time="2025-09-12T17:43:40.560111587Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:40.563704 containerd[1909]: time="2025-09-12T17:43:40.563629392Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:40.564347 containerd[1909]: time="2025-09-12T17:43:40.564096237Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 4.477415629s" Sep 12 17:43:40.564347 containerd[1909]: time="2025-09-12T17:43:40.564131224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:43:40.565961 containerd[1909]: time="2025-09-12T17:43:40.565823607Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:43:40.568542 containerd[1909]: time="2025-09-12T17:43:40.568490566Z" level=info msg="CreateContainer within sandbox \"1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:43:40.578959 containerd[1909]: time="2025-09-12T17:43:40.578783688Z" level=info msg="Container 09982faceabffff2f52aaba5ff6e898679197bece9b729b5690031a958ee5967: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:43:40.600450 containerd[1909]: time="2025-09-12T17:43:40.600389774Z" level=info msg="CreateContainer within sandbox \"1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"09982faceabffff2f52aaba5ff6e898679197bece9b729b5690031a958ee5967\"" Sep 12 17:43:40.601719 containerd[1909]: time="2025-09-12T17:43:40.601672715Z" level=info msg="StartContainer for \"09982faceabffff2f52aaba5ff6e898679197bece9b729b5690031a958ee5967\"" Sep 12 17:43:40.604311 containerd[1909]: time="2025-09-12T17:43:40.604067326Z" level=info msg="connecting to shim 09982faceabffff2f52aaba5ff6e898679197bece9b729b5690031a958ee5967" address="unix:///run/containerd/s/c20a71d0c6a7f17e491b0edcf5aa19981df6a0b84db1fb0778d067f2676beb90" protocol=ttrpc version=3 Sep 12 17:43:40.641285 systemd[1]: Started cri-containerd-09982faceabffff2f52aaba5ff6e898679197bece9b729b5690031a958ee5967.scope - libcontainer container 09982faceabffff2f52aaba5ff6e898679197bece9b729b5690031a958ee5967. Sep 12 17:43:40.762788 containerd[1909]: time="2025-09-12T17:43:40.762714000Z" level=info msg="StartContainer for \"09982faceabffff2f52aaba5ff6e898679197bece9b729b5690031a958ee5967\" returns successfully" Sep 12 17:43:41.006764 containerd[1909]: time="2025-09-12T17:43:41.006697796Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:41.008787 containerd[1909]: time="2025-09-12T17:43:41.008184495Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 17:43:41.010259 containerd[1909]: time="2025-09-12T17:43:41.010222091Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 444.251485ms" Sep 12 17:43:41.010365 containerd[1909]: time="2025-09-12T17:43:41.010260792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:43:41.013735 containerd[1909]: time="2025-09-12T17:43:41.013704313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 17:43:41.017988 containerd[1909]: time="2025-09-12T17:43:41.017846348Z" level=info msg="CreateContainer within sandbox \"11f093b4e085e05b5a89317bd70c0596c745a23a900d70c7c9ec7274a161d8b0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:43:41.030231 containerd[1909]: time="2025-09-12T17:43:41.029131871Z" level=info msg="Container dd10f80845103a689e3f2e6ed27e5eb7529fee503a3228c51983a5529d5e0c18: CDI devices from CRI Config.CDIDevices: []" Sep 12 
17:43:41.046898 containerd[1909]: time="2025-09-12T17:43:41.046851392Z" level=info msg="CreateContainer within sandbox \"11f093b4e085e05b5a89317bd70c0596c745a23a900d70c7c9ec7274a161d8b0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"dd10f80845103a689e3f2e6ed27e5eb7529fee503a3228c51983a5529d5e0c18\"" Sep 12 17:43:41.047638 containerd[1909]: time="2025-09-12T17:43:41.047603332Z" level=info msg="StartContainer for \"dd10f80845103a689e3f2e6ed27e5eb7529fee503a3228c51983a5529d5e0c18\"" Sep 12 17:43:41.050462 containerd[1909]: time="2025-09-12T17:43:41.049084670Z" level=info msg="connecting to shim dd10f80845103a689e3f2e6ed27e5eb7529fee503a3228c51983a5529d5e0c18" address="unix:///run/containerd/s/7313652e2abb84ef6df58aea6cf368723da31e77c2aaa34a1f9b68e4fb198295" protocol=ttrpc version=3 Sep 12 17:43:41.079412 systemd[1]: Started cri-containerd-dd10f80845103a689e3f2e6ed27e5eb7529fee503a3228c51983a5529d5e0c18.scope - libcontainer container dd10f80845103a689e3f2e6ed27e5eb7529fee503a3228c51983a5529d5e0c18. Sep 12 17:43:41.165131 containerd[1909]: time="2025-09-12T17:43:41.164809920Z" level=info msg="StartContainer for \"dd10f80845103a689e3f2e6ed27e5eb7529fee503a3228c51983a5529d5e0c18\" returns successfully"
Sep 12 17:43:41.211006 ntpd[1851]: Listen normally on 8 vxlan.calico 192.168.102.128:123 Sep 12 17:43:41.211102 ntpd[1851]: Listen normally on 9 cali28721ee813f [fe80::ecee:eeff:feee:eeee%4]:123 Sep 12 17:43:41.211159 ntpd[1851]: Listen normally on 10 vxlan.calico [fe80::64e5:52ff:fedf:49c5%5]:123 Sep 12 17:43:41.211198 ntpd[1851]: Listen normally on 11 cali3b2fe3b4e85 [fe80::ecee:eeff:feee:eeee%8]:123 Sep 12 17:43:41.211240 ntpd[1851]: Listen normally on 12 cali4a83b8cb087 [fe80::ecee:eeff:feee:eeee%9]:123 Sep 12 17:43:41.211277 ntpd[1851]: Listen normally on 13 calid62cd2801a0 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 12 17:43:41.211317 ntpd[1851]: Listen normally on 14 cali50f07b9a552 [fe80::ecee:eeff:feee:eeee%11]:123 Sep 12 17:43:41.211351 ntpd[1851]: Listen normally on 15 calibad42879e39 [fe80::ecee:eeff:feee:eeee%12]:123 Sep 12 17:43:41.211387 ntpd[1851]: Listen normally on 16 cali60a75a350a8 [fe80::ecee:eeff:feee:eeee%13]:123 Sep 12 17:43:41.211421 ntpd[1851]: Listen normally on 17 calieb6e530864e [fe80::ecee:eeff:feee:eeee%14]:123 Sep 12 17:43:41.211456 ntpd[1851]: Listen normally on 18 cali639963c7542 [fe80::ecee:eeff:feee:eeee%15]:123
Sep 12 17:43:41.852338 kubelet[3325]: I0912 17:43:41.852271 3325 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:43:41.873940 kubelet[3325]: I0912 17:43:41.873569 3325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-77fc447859-2slhn" podStartSLOduration=31.431826856 podStartE2EDuration="37.873545506s" podCreationTimestamp="2025-09-12 17:43:04 +0000 UTC" firstStartedPulling="2025-09-12 17:43:34.123798179 +0000 UTC m=+45.961990495" lastFinishedPulling="2025-09-12 17:43:40.565516846 +0000 UTC m=+52.403709145" observedRunningTime="2025-09-12 17:43:40.849303245 +0000 UTC m=+52.687495566" watchObservedRunningTime="2025-09-12 17:43:41.873545506 +0000 UTC m=+53.711737823" Sep 12 17:43:41.874786 kubelet[3325]: I0912 17:43:41.874722 3325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6bffdd5bf7-slkqb" podStartSLOduration=30.835093592 podStartE2EDuration="36.874703367s" podCreationTimestamp="2025-09-12 17:43:05 +0000 UTC" firstStartedPulling="2025-09-12 17:43:34.973198207 +0000 UTC m=+46.811390514" lastFinishedPulling="2025-09-12 17:43:41.012807975 +0000 UTC m=+52.851000289" observedRunningTime="2025-09-12 17:43:41.87348989 +0000 UTC m=+53.711682215" watchObservedRunningTime="2025-09-12 17:43:41.874703367 +0000 UTC m=+53.712895688" Sep 12 17:43:43.790204 systemd[1]: Created slice kubepods-besteffort-podbcd461d1_8a86_4dd8_94bf_7816b11eef98.slice - libcontainer container kubepods-besteffort-podbcd461d1_8a86_4dd8_94bf_7816b11eef98.slice. Sep 12 17:43:43.900486 kubelet[3325]: I0912 17:43:43.900413 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lwqv\" (UniqueName: \"kubernetes.io/projected/bcd461d1-8a86-4dd8-94bf-7816b11eef98-kube-api-access-4lwqv\") pod \"calico-apiserver-6bffdd5bf7-8hfmw\" (UID: \"bcd461d1-8a86-4dd8-94bf-7816b11eef98\") " pod="calico-apiserver/calico-apiserver-6bffdd5bf7-8hfmw" Sep 12 17:43:43.901115 kubelet[3325]: I0912 17:43:43.900536 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bcd461d1-8a86-4dd8-94bf-7816b11eef98-calico-apiserver-certs\") pod \"calico-apiserver-6bffdd5bf7-8hfmw\" (UID: \"bcd461d1-8a86-4dd8-94bf-7816b11eef98\") " pod="calico-apiserver/calico-apiserver-6bffdd5bf7-8hfmw" Sep 12 17:43:44.103330 containerd[1909]: time="2025-09-12T17:43:44.103198134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bffdd5bf7-8hfmw,Uid:bcd461d1-8a86-4dd8-94bf-7816b11eef98,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:43:44.407787 systemd-networkd[1813]: calic3cb932c484: Link UP Sep 12 17:43:44.409777 systemd-networkd[1813]: calic3cb932c484: Gained carrier Sep 12 17:43:44.416880 (udev-worker)[5852]: Network interface NamePolicy= disabled on kernel command line.
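Interface names like cali639963c7542 and calic3cb932c484 are not random: Calico derives the host-side veth name deterministically by hashing the workload endpoint identity and keeping enough hex digits to fit the kernel's 15-byte IFNAMSIZ limit. A sketch of that scheme follows; the prefix-plus-11-hex-chars shape and the exact hash input used here are assumptions for illustration.

package main

import (
	"crypto/sha1"
	"fmt"
)

// vethName hashes an endpoint identifier and keeps prefix+11 hex characters,
// so "cali" + 11 = 15 bytes, the kernel's interface-name limit. The input
// string is an assumed stand-in for the real workload endpoint ID.
func vethName(prefix, endpointID string) string {
	sum := sha1.Sum([]byte(endpointID))
	return (prefix + fmt.Sprintf("%x", sum))[:len(prefix)+11]
}

func main() {
	fmt.Println(vethName("cali", "calico-system/csi-node-driver-ghf2w"))
}

Determinism matters here: if the CNI plugin re-runs for the same pod, it computes the same interface name instead of leaking a second veth.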
Sep 12 17:43:44.456821 containerd[1909]: 2025-09-12 17:43:44.257 [INFO][5833] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--53-k8s-calico--apiserver--6bffdd5bf7--8hfmw-eth0 calico-apiserver-6bffdd5bf7- calico-apiserver bcd461d1-8a86-4dd8-94bf-7816b11eef98 1102 0 2025-09-12 17:43:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6bffdd5bf7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-53 calico-apiserver-6bffdd5bf7-8hfmw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic3cb932c484 [] [] }} ContainerID="6df2d155e1a114bcad96018452b04595279691a8e269558ff5ca02aae0051619" Namespace="calico-apiserver" Pod="calico-apiserver-6bffdd5bf7-8hfmw" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--apiserver--6bffdd5bf7--8hfmw-" Sep 12 17:43:44.456821 containerd[1909]: 2025-09-12 17:43:44.259 [INFO][5833] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6df2d155e1a114bcad96018452b04595279691a8e269558ff5ca02aae0051619" Namespace="calico-apiserver" Pod="calico-apiserver-6bffdd5bf7-8hfmw" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--apiserver--6bffdd5bf7--8hfmw-eth0" Sep 12 17:43:44.456821 containerd[1909]: 2025-09-12 17:43:44.328 [INFO][5844] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6df2d155e1a114bcad96018452b04595279691a8e269558ff5ca02aae0051619" HandleID="k8s-pod-network.6df2d155e1a114bcad96018452b04595279691a8e269558ff5ca02aae0051619" Workload="ip--172--31--19--53-k8s-calico--apiserver--6bffdd5bf7--8hfmw-eth0" Sep 12 17:43:44.456821 containerd[1909]: 2025-09-12 17:43:44.328 [INFO][5844] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6df2d155e1a114bcad96018452b04595279691a8e269558ff5ca02aae0051619" HandleID="k8s-pod-network.6df2d155e1a114bcad96018452b04595279691a8e269558ff5ca02aae0051619" Workload="ip--172--31--19--53-k8s-calico--apiserver--6bffdd5bf7--8hfmw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000335a30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-19-53", "pod":"calico-apiserver-6bffdd5bf7-8hfmw", "timestamp":"2025-09-12 17:43:44.328533122 +0000 UTC"}, Hostname:"ip-172-31-19-53", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:43:44.456821 containerd[1909]: 2025-09-12 17:43:44.328 [INFO][5844] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:44.456821 containerd[1909]: 2025-09-12 17:43:44.328 [INFO][5844] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:43:44.456821 containerd[1909]: 2025-09-12 17:43:44.328 [INFO][5844] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-53' Sep 12 17:43:44.456821 containerd[1909]: 2025-09-12 17:43:44.338 [INFO][5844] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6df2d155e1a114bcad96018452b04595279691a8e269558ff5ca02aae0051619" host="ip-172-31-19-53" Sep 12 17:43:44.456821 containerd[1909]: 2025-09-12 17:43:44.358 [INFO][5844] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-53" Sep 12 17:43:44.456821 containerd[1909]: 2025-09-12 17:43:44.366 [INFO][5844] ipam/ipam.go 511: Trying affinity for 192.168.102.128/26 host="ip-172-31-19-53" Sep 12 17:43:44.456821 containerd[1909]: 2025-09-12 17:43:44.370 [INFO][5844] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.128/26 host="ip-172-31-19-53" Sep 12 17:43:44.456821 containerd[1909]: 2025-09-12 17:43:44.374 [INFO][5844] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.128/26 host="ip-172-31-19-53" Sep 12 17:43:44.456821 containerd[1909]: 2025-09-12 17:43:44.374 [INFO][5844] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.102.128/26 handle="k8s-pod-network.6df2d155e1a114bcad96018452b04595279691a8e269558ff5ca02aae0051619" host="ip-172-31-19-53" Sep 12 17:43:44.456821 containerd[1909]: 2025-09-12 17:43:44.376 [INFO][5844] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6df2d155e1a114bcad96018452b04595279691a8e269558ff5ca02aae0051619 Sep 12 17:43:44.456821 containerd[1909]: 2025-09-12 17:43:44.383 [INFO][5844] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.102.128/26 handle="k8s-pod-network.6df2d155e1a114bcad96018452b04595279691a8e269558ff5ca02aae0051619" host="ip-172-31-19-53" Sep 12 17:43:44.456821 containerd[1909]: 2025-09-12 17:43:44.396 [INFO][5844] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.102.138/26] block=192.168.102.128/26 handle="k8s-pod-network.6df2d155e1a114bcad96018452b04595279691a8e269558ff5ca02aae0051619" host="ip-172-31-19-53" Sep 12 17:43:44.456821 containerd[1909]: 2025-09-12 17:43:44.396 [INFO][5844] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.138/26] handle="k8s-pod-network.6df2d155e1a114bcad96018452b04595279691a8e269558ff5ca02aae0051619" host="ip-172-31-19-53" Sep 12 17:43:44.456821 containerd[1909]: 2025-09-12 17:43:44.397 [INFO][5844] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:43:44.456821 containerd[1909]: 2025-09-12 17:43:44.397 [INFO][5844] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.102.138/26] IPv6=[] ContainerID="6df2d155e1a114bcad96018452b04595279691a8e269558ff5ca02aae0051619" HandleID="k8s-pod-network.6df2d155e1a114bcad96018452b04595279691a8e269558ff5ca02aae0051619" Workload="ip--172--31--19--53-k8s-calico--apiserver--6bffdd5bf7--8hfmw-eth0" Sep 12 17:43:44.457773 containerd[1909]: 2025-09-12 17:43:44.401 [INFO][5833] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6df2d155e1a114bcad96018452b04595279691a8e269558ff5ca02aae0051619" Namespace="calico-apiserver" Pod="calico-apiserver-6bffdd5bf7-8hfmw" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--apiserver--6bffdd5bf7--8hfmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--53-k8s-calico--apiserver--6bffdd5bf7--8hfmw-eth0", GenerateName:"calico-apiserver-6bffdd5bf7-", Namespace:"calico-apiserver", SelfLink:"", UID:"bcd461d1-8a86-4dd8-94bf-7816b11eef98", ResourceVersion:"1102", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 43, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bffdd5bf7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-53", ContainerID:"", Pod:"calico-apiserver-6bffdd5bf7-8hfmw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.102.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic3cb932c484", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:44.457773 containerd[1909]: 2025-09-12 17:43:44.402 [INFO][5833] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.138/32] ContainerID="6df2d155e1a114bcad96018452b04595279691a8e269558ff5ca02aae0051619" Namespace="calico-apiserver" Pod="calico-apiserver-6bffdd5bf7-8hfmw" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--apiserver--6bffdd5bf7--8hfmw-eth0" Sep 12 17:43:44.457773 containerd[1909]: 2025-09-12 17:43:44.402 [INFO][5833] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic3cb932c484 ContainerID="6df2d155e1a114bcad96018452b04595279691a8e269558ff5ca02aae0051619" Namespace="calico-apiserver" Pod="calico-apiserver-6bffdd5bf7-8hfmw" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--apiserver--6bffdd5bf7--8hfmw-eth0" Sep 12 17:43:44.457773 containerd[1909]: 2025-09-12 17:43:44.410 [INFO][5833] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6df2d155e1a114bcad96018452b04595279691a8e269558ff5ca02aae0051619" Namespace="calico-apiserver" Pod="calico-apiserver-6bffdd5bf7-8hfmw" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--apiserver--6bffdd5bf7--8hfmw-eth0" Sep 12 17:43:44.457773 containerd[1909]: 2025-09-12 17:43:44.415 [INFO][5833] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="6df2d155e1a114bcad96018452b04595279691a8e269558ff5ca02aae0051619" Namespace="calico-apiserver" Pod="calico-apiserver-6bffdd5bf7-8hfmw" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--apiserver--6bffdd5bf7--8hfmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--53-k8s-calico--apiserver--6bffdd5bf7--8hfmw-eth0", GenerateName:"calico-apiserver-6bffdd5bf7-", Namespace:"calico-apiserver", SelfLink:"", UID:"bcd461d1-8a86-4dd8-94bf-7816b11eef98", ResourceVersion:"1102", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 43, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bffdd5bf7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-53", ContainerID:"6df2d155e1a114bcad96018452b04595279691a8e269558ff5ca02aae0051619", Pod:"calico-apiserver-6bffdd5bf7-8hfmw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.102.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic3cb932c484", MAC:"32:37:6f:45:0d:78", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:44.457773 containerd[1909]: 2025-09-12 17:43:44.442 [INFO][5833] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6df2d155e1a114bcad96018452b04595279691a8e269558ff5ca02aae0051619" Namespace="calico-apiserver" Pod="calico-apiserver-6bffdd5bf7-8hfmw" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--apiserver--6bffdd5bf7--8hfmw-eth0" Sep 12 17:43:44.608097 containerd[1909]: time="2025-09-12T17:43:44.608041711Z" level=info msg="connecting to shim 6df2d155e1a114bcad96018452b04595279691a8e269558ff5ca02aae0051619" address="unix:///run/containerd/s/1c8f9b4c7a04af1ac8ece66298462ca52c2d0dea679483567f7796a629b385e6" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:43:44.672202 systemd[1]: Started cri-containerd-6df2d155e1a114bcad96018452b04595279691a8e269558ff5ca02aae0051619.scope - libcontainer container 6df2d155e1a114bcad96018452b04595279691a8e269558ff5ca02aae0051619. 
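Each "connecting to shim" record above names a ttrpc endpoint of the form unix:///run/containerd/s/<hash>. Stripping the scheme and dialing the socket is all the transport setup amounts to; a bare-bones sketch (real clients layer the ttrpc protocol on top of the raw connection):

package main

import (
	"fmt"
	"net"
	"strings"
)

// dialShim turns containerd's "unix:///path" address form into a unix-socket
// connection; net.Dial wants just the filesystem path.
func dialShim(address string) (net.Conn, error) {
	return net.Dial("unix", strings.TrimPrefix(address, "unix://"))
}

func main() {
	conn, err := dialShim("unix:///run/containerd/s/1c8f9b4c7a04af1ac8ece66298462ca52c2d0dea679483567f7796a629b385e6")
	if err != nil {
		fmt.Println("dial failed (expected anywhere but on this node):", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected; a real client would speak ttrpc over this socket")
}

Note that both containers of the 6bffdd5bf7-8hfmw pod reuse the same shim address: one shim serves the whole sandbox.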
Sep 12 17:43:44.752614 containerd[1909]: time="2025-09-12T17:43:44.752551398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bffdd5bf7-8hfmw,Uid:bcd461d1-8a86-4dd8-94bf-7816b11eef98,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6df2d155e1a114bcad96018452b04595279691a8e269558ff5ca02aae0051619\"" Sep 12 17:43:44.760394 containerd[1909]: time="2025-09-12T17:43:44.760346804Z" level=info msg="CreateContainer within sandbox \"6df2d155e1a114bcad96018452b04595279691a8e269558ff5ca02aae0051619\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:43:44.797018 containerd[1909]: time="2025-09-12T17:43:44.795951866Z" level=info msg="Container 42adb3a55ea23459c70ff77f3c73882315c3da0542bc73afc55c350f786af473: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:43:44.810460 containerd[1909]: time="2025-09-12T17:43:44.810405464Z" level=info msg="CreateContainer within sandbox \"6df2d155e1a114bcad96018452b04595279691a8e269558ff5ca02aae0051619\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"42adb3a55ea23459c70ff77f3c73882315c3da0542bc73afc55c350f786af473\"" Sep 12 17:43:44.816222 containerd[1909]: time="2025-09-12T17:43:44.816175303Z" level=info msg="StartContainer for \"42adb3a55ea23459c70ff77f3c73882315c3da0542bc73afc55c350f786af473\"" Sep 12 17:43:44.818373 containerd[1909]: time="2025-09-12T17:43:44.818331474Z" level=info msg="connecting to shim 42adb3a55ea23459c70ff77f3c73882315c3da0542bc73afc55c350f786af473" address="unix:///run/containerd/s/1c8f9b4c7a04af1ac8ece66298462ca52c2d0dea679483567f7796a629b385e6" protocol=ttrpc version=3 Sep 12 17:43:44.845314 systemd[1]: Started cri-containerd-42adb3a55ea23459c70ff77f3c73882315c3da0542bc73afc55c350f786af473.scope - libcontainer container 42adb3a55ea23459c70ff77f3c73882315c3da0542bc73afc55c350f786af473. Sep 12 17:43:44.930821 containerd[1909]: time="2025-09-12T17:43:44.930336400Z" level=info msg="StartContainer for \"42adb3a55ea23459c70ff77f3c73882315c3da0542bc73afc55c350f786af473\" returns successfully" Sep 12 17:43:45.091159 systemd[1]: Started sshd@10-172.31.19.53:22-139.178.68.195:41792.service - OpenSSH per-connection server daemon (139.178.68.195:41792). Sep 12 17:43:45.389342 sshd[5943]: Accepted publickey for core from 139.178.68.195 port 41792 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:43:45.395170 sshd-session[5943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:45.406159 systemd-logind[1861]: New session 11 of user core. Sep 12 17:43:45.416160 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 17:43:46.407370 systemd-networkd[1813]: calic3cb932c484: Gained IPv6LL Sep 12 17:43:46.613022 sshd[5947]: Connection closed by 139.178.68.195 port 41792 Sep 12 17:43:46.614066 sshd-session[5943]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:46.638007 systemd[1]: sshd@10-172.31.19.53:22-139.178.68.195:41792.service: Deactivated successfully. Sep 12 17:43:46.642689 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:43:46.646595 systemd-logind[1861]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:43:46.650966 systemd-logind[1861]: Removed session 11. 
Sep 12 17:43:46.893240 kubelet[3325]: I0912 17:43:46.886100 3325 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:43:48.254178 containerd[1909]: time="2025-09-12T17:43:48.254126738Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:48.281599 containerd[1909]: time="2025-09-12T17:43:48.281484648Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 12 17:43:48.462194 containerd[1909]: time="2025-09-12T17:43:48.461957954Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:48.469164 containerd[1909]: time="2025-09-12T17:43:48.469117536Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:48.476750 containerd[1909]: time="2025-09-12T17:43:48.476565495Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 7.462815017s" Sep 12 17:43:48.476750 containerd[1909]: time="2025-09-12T17:43:48.476635094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 12 17:43:49.051976 containerd[1909]: time="2025-09-12T17:43:49.051593324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 17:43:49.195558 containerd[1909]: time="2025-09-12T17:43:49.195156436Z" level=info msg="CreateContainer within sandbox \"68fcad113fa3d9497a22b45a61b7be8146174426862d7a3f7222266bdb8e8068\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 17:43:49.211041 ntpd[1851]: Listen normally on 19 calic3cb932c484 [fe80::ecee:eeff:feee:eeee%16]:123 Sep 12 17:43:49.248658 containerd[1909]: time="2025-09-12T17:43:49.248616104Z" level=info msg="Container eb3eb6c17a00998b617eb22408f7002d1f50e787222e96b10be971a91efb212d: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:43:49.255936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount479960606.mount: Deactivated successfully.
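A pull record like the one above carries enough to estimate registry throughput: size "52770417" bytes fetched in 7.462815017s is roughly 7 MB/s. The arithmetic, reusable for any of the pull lines in this log:

package main

import (
	"fmt"
	"time"
)

// throughput converts a "size X in Ys" pull line into decimal megabytes per second.
func throughput(bytes int64, d time.Duration) float64 {
	return float64(bytes) / d.Seconds() / 1e6
}

func main() {
	d, err := time.ParseDuration("7.462815017s")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%.1f MB/s\n", throughput(52770417, d)) // ~7.1 MB/s
}

The follow-up "in 444.251485ms" and "in 491.676014ms" pulls elsewhere in the log are near-instant for the same image because only the manifest is re-fetched; the layers are already local.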
Sep 12 17:43:49.321867 containerd[1909]: time="2025-09-12T17:43:49.321701062Z" level=info msg="CreateContainer within sandbox \"68fcad113fa3d9497a22b45a61b7be8146174426862d7a3f7222266bdb8e8068\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"eb3eb6c17a00998b617eb22408f7002d1f50e787222e96b10be971a91efb212d\"" Sep 12 17:43:49.384153 containerd[1909]: time="2025-09-12T17:43:49.383909003Z" level=info msg="StartContainer for \"eb3eb6c17a00998b617eb22408f7002d1f50e787222e96b10be971a91efb212d\"" Sep 12 17:43:49.390769 containerd[1909]: time="2025-09-12T17:43:49.390722183Z" level=info msg="connecting to shim eb3eb6c17a00998b617eb22408f7002d1f50e787222e96b10be971a91efb212d" address="unix:///run/containerd/s/9300199ea7b094666a0763ec2586f94208999362a01489eea02eabd11c557aed" protocol=ttrpc version=3 Sep 12 17:43:49.618616 systemd[1]: Started cri-containerd-eb3eb6c17a00998b617eb22408f7002d1f50e787222e96b10be971a91efb212d.scope - libcontainer container eb3eb6c17a00998b617eb22408f7002d1f50e787222e96b10be971a91efb212d. Sep 12 17:43:49.833294 containerd[1909]: time="2025-09-12T17:43:49.833152660Z" level=info msg="StartContainer for \"eb3eb6c17a00998b617eb22408f7002d1f50e787222e96b10be971a91efb212d\" returns successfully" Sep 12 17:43:50.523068 kubelet[3325]: I0912 17:43:50.343498 3325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6bffdd5bf7-8hfmw" podStartSLOduration=7.323074934 podStartE2EDuration="7.323074934s" podCreationTimestamp="2025-09-12 17:43:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:43:45.92172291 +0000 UTC m=+57.759915234" watchObservedRunningTime="2025-09-12 17:43:50.323074934 +0000 UTC m=+62.161267254" Sep 12 17:43:50.541823 kubelet[3325]: I0912 17:43:50.541745 3325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-55b4977774-rqh99" podStartSLOduration=28.802327941 podStartE2EDuration="41.541721552s" podCreationTimestamp="2025-09-12 17:43:09 +0000 UTC" firstStartedPulling="2025-09-12 17:43:36.205819064 +0000 UTC m=+48.044011378" lastFinishedPulling="2025-09-12 17:43:48.945212674 +0000 UTC m=+60.783404989" observedRunningTime="2025-09-12 17:43:50.537914407 +0000 UTC m=+62.376106718" watchObservedRunningTime="2025-09-12 17:43:50.541721552 +0000 UTC m=+62.379913874" Sep 12 17:43:50.654822 containerd[1909]: time="2025-09-12T17:43:50.654766942Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eb3eb6c17a00998b617eb22408f7002d1f50e787222e96b10be971a91efb212d\" id:\"cfcceba04462d8c3602e2002418a8342ade93080fb866cf8dcf9bb0b81bf5543\" pid:6026 exit_status:1 exited_at:{seconds:1757699030 nanos:615593910}" Sep 12 17:43:51.187271 containerd[1909]: time="2025-09-12T17:43:51.187227296Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eb3eb6c17a00998b617eb22408f7002d1f50e787222e96b10be971a91efb212d\" id:\"c4743eafa960ad4d8ec7a8a45a52f18180598c007aa4222ab7438f9365b531a5\" pid:6051 exited_at:{seconds:1757699031 nanos:186836367}" Sep 12 17:43:51.658830 systemd[1]: Started sshd@11-172.31.19.53:22-139.178.68.195:35860.service - OpenSSH per-connection server daemon (139.178.68.195:35860). 
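The exited_at field in the TaskExit events above is a protobuf timestamp, i.e. seconds and nanoseconds since the Unix epoch. Converting the pair back to wall-clock time reproduces the journal's own timestamp for the exit:

package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at:{seconds:1757699030 nanos:615593910} from the TaskExit above.
	fmt.Println(time.Unix(1757699030, 615593910).UTC())
	// 2025-09-12 17:43:50.61559391 +0000 UTC — matching the Sep 12 17:43:50 journal line
}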
Sep 12 17:43:51.929243 sshd[6061]: Accepted publickey for core from 139.178.68.195 port 35860 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:43:51.931793 sshd-session[6061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:51.939509 systemd-logind[1861]: New session 12 of user core. Sep 12 17:43:51.948664 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 17:43:53.203152 sshd[6064]: Connection closed by 139.178.68.195 port 35860 Sep 12 17:43:53.206013 sshd-session[6061]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:53.214726 systemd[1]: sshd@11-172.31.19.53:22-139.178.68.195:35860.service: Deactivated successfully. Sep 12 17:43:53.219907 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:43:53.223442 systemd-logind[1861]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:43:53.242183 systemd[1]: Started sshd@12-172.31.19.53:22-139.178.68.195:35864.service - OpenSSH per-connection server daemon (139.178.68.195:35864). Sep 12 17:43:53.245660 systemd-logind[1861]: Removed session 12. Sep 12 17:43:53.466315 sshd[6083]: Accepted publickey for core from 139.178.68.195 port 35864 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:43:53.469163 sshd-session[6083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:53.481511 systemd-logind[1861]: New session 13 of user core. Sep 12 17:43:53.487636 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 17:43:54.026871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2431617982.mount: Deactivated successfully. Sep 12 17:43:54.135588 containerd[1909]: time="2025-09-12T17:43:54.135534533Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:54.154095 containerd[1909]: time="2025-09-12T17:43:54.153974143Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 12 17:43:54.175461 containerd[1909]: time="2025-09-12T17:43:54.175416465Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:54.183470 containerd[1909]: time="2025-09-12T17:43:54.183399538Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:54.196947 containerd[1909]: time="2025-09-12T17:43:54.196851930Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 5.132532392s" Sep 12 17:43:54.196947 containerd[1909]: time="2025-09-12T17:43:54.196948170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 12 17:43:54.213213 sshd[6086]: Connection closed by 139.178.68.195 port 35864 Sep 12 17:43:54.217215 sshd-session[6083]: pam_unix(sshd:session): 
session closed for user core Sep 12 17:43:54.226234 systemd[1]: sshd@12-172.31.19.53:22-139.178.68.195:35864.service: Deactivated successfully. Sep 12 17:43:54.231454 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 17:43:54.236373 systemd-logind[1861]: Session 13 logged out. Waiting for processes to exit. Sep 12 17:43:54.264401 systemd[1]: Started sshd@13-172.31.19.53:22-139.178.68.195:35874.service - OpenSSH per-connection server daemon (139.178.68.195:35874). Sep 12 17:43:54.270451 systemd-logind[1861]: Removed session 13. Sep 12 17:43:54.315233 containerd[1909]: time="2025-09-12T17:43:54.314948876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:43:54.437302 containerd[1909]: time="2025-09-12T17:43:54.437178343Z" level=info msg="CreateContainer within sandbox \"01ef8cf6d84df6763de3598b3810c8e486e14c7e4f7229b1577b64166e8740b6\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 17:43:54.518184 containerd[1909]: time="2025-09-12T17:43:54.518142674Z" level=info msg="Container 9b5a02b42d14a4df70cb05722f2cbce83804fea849a04c0b27701bacf467466b: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:43:54.529462 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount161632582.mount: Deactivated successfully. Sep 12 17:43:54.633999 containerd[1909]: time="2025-09-12T17:43:54.633576779Z" level=info msg="CreateContainer within sandbox \"01ef8cf6d84df6763de3598b3810c8e486e14c7e4f7229b1577b64166e8740b6\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"9b5a02b42d14a4df70cb05722f2cbce83804fea849a04c0b27701bacf467466b\"" Sep 12 17:43:54.640107 sshd[6101]: Accepted publickey for core from 139.178.68.195 port 35874 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:43:54.647185 sshd-session[6101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:54.666979 systemd-logind[1861]: New session 14 of user core. Sep 12 17:43:54.673566 systemd[1]: Started session-14.scope - Session 14 of User core. 
Sep 12 17:43:54.692143 containerd[1909]: time="2025-09-12T17:43:54.690760359Z" level=info msg="StartContainer for \"9b5a02b42d14a4df70cb05722f2cbce83804fea849a04c0b27701bacf467466b\"" Sep 12 17:43:54.701068 containerd[1909]: time="2025-09-12T17:43:54.700806679Z" level=info msg="connecting to shim 9b5a02b42d14a4df70cb05722f2cbce83804fea849a04c0b27701bacf467466b" address="unix:///run/containerd/s/0a5ec839e23672f5aea9a62e1bc06bb0e95b112c890ccaa91d76f67c9d84099a" protocol=ttrpc version=3 Sep 12 17:43:54.793293 containerd[1909]: time="2025-09-12T17:43:54.793232619Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:54.798483 containerd[1909]: time="2025-09-12T17:43:54.798402297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 17:43:54.806726 containerd[1909]: time="2025-09-12T17:43:54.806675517Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 491.676014ms" Sep 12 17:43:54.806726 containerd[1909]: time="2025-09-12T17:43:54.806728913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:43:54.814356 containerd[1909]: time="2025-09-12T17:43:54.812689989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 17:43:54.829020 containerd[1909]: time="2025-09-12T17:43:54.828299238Z" level=info msg="CreateContainer within sandbox \"8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:43:54.858100 containerd[1909]: time="2025-09-12T17:43:54.858050951Z" level=info msg="Container 0604b034133dd5b7fa87f327b04ba99697fd975164fc1c62b7063c475303d6f1: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:43:54.915370 systemd[1]: Started cri-containerd-9b5a02b42d14a4df70cb05722f2cbce83804fea849a04c0b27701bacf467466b.scope - libcontainer container 9b5a02b42d14a4df70cb05722f2cbce83804fea849a04c0b27701bacf467466b. Sep 12 17:43:54.942276 containerd[1909]: time="2025-09-12T17:43:54.942209143Z" level=info msg="CreateContainer within sandbox \"8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0604b034133dd5b7fa87f327b04ba99697fd975164fc1c62b7063c475303d6f1\"" Sep 12 17:43:55.032761 containerd[1909]: time="2025-09-12T17:43:55.032718343Z" level=info msg="StartContainer for \"0604b034133dd5b7fa87f327b04ba99697fd975164fc1c62b7063c475303d6f1\"" Sep 12 17:43:55.046831 containerd[1909]: time="2025-09-12T17:43:55.046282618Z" level=info msg="connecting to shim 0604b034133dd5b7fa87f327b04ba99697fd975164fc1c62b7063c475303d6f1" address="unix:///run/containerd/s/8b4e5ecc940cfccd377fb164120c74ceed8a058242f1ce3440b707686cc6a589" protocol=ttrpc version=3 Sep 12 17:43:55.188391 systemd[1]: Started cri-containerd-0604b034133dd5b7fa87f327b04ba99697fd975164fc1c62b7063c475303d6f1.scope - libcontainer container 0604b034133dd5b7fa87f327b04ba99697fd975164fc1c62b7063c475303d6f1. 
Sep 12 17:43:55.200037 containerd[1909]: time="2025-09-12T17:43:55.197632409Z" level=info msg="StartContainer for \"9b5a02b42d14a4df70cb05722f2cbce83804fea849a04c0b27701bacf467466b\" returns successfully" Sep 12 17:43:55.417793 sshd[6112]: Connection closed by 139.178.68.195 port 35874 Sep 12 17:43:55.417550 sshd-session[6101]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:55.436418 systemd[1]: sshd@13-172.31.19.53:22-139.178.68.195:35874.service: Deactivated successfully. Sep 12 17:43:55.441997 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 17:43:55.450039 systemd-logind[1861]: Session 14 logged out. Waiting for processes to exit. Sep 12 17:43:55.451866 systemd-logind[1861]: Removed session 14. Sep 12 17:43:55.474613 containerd[1909]: time="2025-09-12T17:43:55.474561396Z" level=info msg="StartContainer for \"0604b034133dd5b7fa87f327b04ba99697fd975164fc1c62b7063c475303d6f1\" returns successfully" Sep 12 17:43:55.792649 kubelet[3325]: I0912 17:43:55.792495 3325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-77fc447859-l6lrq" podStartSLOduration=34.471712797 podStartE2EDuration="51.77488106s" podCreationTimestamp="2025-09-12 17:43:04 +0000 UTC" firstStartedPulling="2025-09-12 17:43:37.508246579 +0000 UTC m=+49.346438892" lastFinishedPulling="2025-09-12 17:43:54.811414845 +0000 UTC m=+66.649607155" observedRunningTime="2025-09-12 17:43:55.705980036 +0000 UTC m=+67.544172358" watchObservedRunningTime="2025-09-12 17:43:55.77488106 +0000 UTC m=+67.613073382" Sep 12 17:43:55.807201 containerd[1909]: time="2025-09-12T17:43:55.807014725Z" level=info msg="StopContainer for \"0604b034133dd5b7fa87f327b04ba99697fd975164fc1c62b7063c475303d6f1\" with timeout 30 (s)" Sep 12 17:43:55.820802 kubelet[3325]: I0912 17:43:55.819964 3325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-696cfc44-m4v6j" podStartSLOduration=4.556833546 podStartE2EDuration="24.819893436s" podCreationTimestamp="2025-09-12 17:43:31 +0000 UTC" firstStartedPulling="2025-09-12 17:43:34.026119703 +0000 UTC m=+45.864312003" lastFinishedPulling="2025-09-12 17:43:54.289179578 +0000 UTC m=+66.127371893" observedRunningTime="2025-09-12 17:43:55.809278149 +0000 UTC m=+67.647470469" watchObservedRunningTime="2025-09-12 17:43:55.819893436 +0000 UTC m=+67.658085756" Sep 12 17:43:55.825080 containerd[1909]: time="2025-09-12T17:43:55.825029332Z" level=info msg="Stop container \"0604b034133dd5b7fa87f327b04ba99697fd975164fc1c62b7063c475303d6f1\" with signal terminated" Sep 12 17:43:55.860026 systemd[1]: cri-containerd-0604b034133dd5b7fa87f327b04ba99697fd975164fc1c62b7063c475303d6f1.scope: Deactivated successfully. Sep 12 17:43:55.860425 systemd[1]: cri-containerd-0604b034133dd5b7fa87f327b04ba99697fd975164fc1c62b7063c475303d6f1.scope: Consumed 102ms CPU time, 17.2M memory peak, 8.5M read from disk. 
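The accounting that systemd prints when a scope deactivates ("Consumed 102ms CPU time, 17.2M memory peak, 8.5M read from disk") is read from cgroup v2 counters. While a scope is still alive the same numbers are available under its cgroup directory; a sketch, with the cgroup path assumed for illustration since the real location depends on the slice hierarchy (the kubepods-besteffort-*.slice names visible later in this log):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Hypothetical path for illustration; look the scope up under its slice in practice.
	scope := "/sys/fs/cgroup/system.slice/cri-containerd-0604b034133dd5b7fa87f327b04ba99697fd975164fc1c62b7063c475303d6f1.scope"
	for _, name := range []string{"cpu.stat", "memory.peak", "io.stat"} {
		b, err := os.ReadFile(filepath.Join(scope, name))
		if err != nil {
			fmt.Println(name, "unavailable:", err) // normal once the scope has been cleaned up
			continue
		}
		fmt.Printf("%s:\n%s", name, b)
	}
}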
Sep 12 17:43:55.886881 containerd[1909]: time="2025-09-12T17:43:55.886098319Z" level=info msg="received exit event container_id:\"0604b034133dd5b7fa87f327b04ba99697fd975164fc1c62b7063c475303d6f1\" id:\"0604b034133dd5b7fa87f327b04ba99697fd975164fc1c62b7063c475303d6f1\" pid:6164 exit_status:2 exited_at:{seconds:1757699035 nanos:882349080}" Sep 12 17:43:55.949126 containerd[1909]: time="2025-09-12T17:43:55.949068460Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0604b034133dd5b7fa87f327b04ba99697fd975164fc1c62b7063c475303d6f1\" id:\"0604b034133dd5b7fa87f327b04ba99697fd975164fc1c62b7063c475303d6f1\" pid:6164 exit_status:2 exited_at:{seconds:1757699035 nanos:882349080}" Sep 12 17:43:55.978441 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0604b034133dd5b7fa87f327b04ba99697fd975164fc1c62b7063c475303d6f1-rootfs.mount: Deactivated successfully. Sep 12 17:43:56.066982 containerd[1909]: time="2025-09-12T17:43:56.066638907Z" level=info msg="StopContainer for \"0604b034133dd5b7fa87f327b04ba99697fd975164fc1c62b7063c475303d6f1\" returns successfully" Sep 12 17:43:56.077795 containerd[1909]: time="2025-09-12T17:43:56.077738344Z" level=info msg="StopPodSandbox for \"8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb\"" Sep 12 17:43:56.092750 containerd[1909]: time="2025-09-12T17:43:56.092695470Z" level=info msg="Container to stop \"0604b034133dd5b7fa87f327b04ba99697fd975164fc1c62b7063c475303d6f1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:43:56.122247 systemd[1]: cri-containerd-8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb.scope: Deactivated successfully. Sep 12 17:43:56.133070 containerd[1909]: time="2025-09-12T17:43:56.132731856Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb\" id:\"8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb\" pid:5536 exit_status:137 exited_at:{seconds:1757699036 nanos:132328360}" Sep 12 17:43:56.180289 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb-rootfs.mount: Deactivated successfully. Sep 12 17:43:56.198609 containerd[1909]: time="2025-09-12T17:43:56.198169407Z" level=info msg="shim disconnected" id=8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb namespace=k8s.io Sep 12 17:43:56.198609 containerd[1909]: time="2025-09-12T17:43:56.198215666Z" level=warning msg="cleaning up after shim disconnected" id=8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb namespace=k8s.io Sep 12 17:43:56.233543 containerd[1909]: time="2025-09-12T17:43:56.198229125Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:43:56.350791 containerd[1909]: time="2025-09-12T17:43:56.350669105Z" level=info msg="received exit event sandbox_id:\"8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb\" exit_status:137 exited_at:{seconds:1757699036 nanos:132328360}" Sep 12 17:43:56.359823 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb-shm.mount: Deactivated successfully. 
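The two exit statuses above follow the usual wait-status conventions: the calico-apiserver container reports exit_status:2 after its SIGTERM, while the sandbox teardown's exit_status:137 is 128+9, i.e. the pause process was SIGKILLed when the sandbox stopped. A small decoder for that convention:

package main

import (
	"fmt"
	"syscall"
)

// describeExit applies the 128+signal convention used for container exit codes.
func describeExit(status int) string {
	if status > 128 {
		sig := syscall.Signal(status - 128)
		return fmt.Sprintf("terminated by signal %d (%s)", status-128, sig)
	}
	return fmt.Sprintf("exited with code %d", status)
}

func main() {
	fmt.Println(describeExit(2))   // plain exit code from the container's own shutdown
	fmt.Println(describeExit(137)) // 128+9: SIGKILL, as for the sandbox pause process
}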
Sep 12 17:43:56.719392 kubelet[3325]: I0912 17:43:56.719313 3325 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" Sep 12 17:43:56.954561 systemd-networkd[1813]: cali60a75a350a8: Link DOWN Sep 12 17:43:56.954571 systemd-networkd[1813]: cali60a75a350a8: Lost carrier Sep 12 17:43:57.504383 containerd[1909]: 2025-09-12 17:43:56.923 [INFO][6260] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" Sep 12 17:43:57.504383 containerd[1909]: 2025-09-12 17:43:56.926 [INFO][6260] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" iface="eth0" netns="/var/run/netns/cni-483825ef-0d34-eddd-59a5-53e0f6f60414" Sep 12 17:43:57.504383 containerd[1909]: 2025-09-12 17:43:56.926 [INFO][6260] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" iface="eth0" netns="/var/run/netns/cni-483825ef-0d34-eddd-59a5-53e0f6f60414" Sep 12 17:43:57.504383 containerd[1909]: 2025-09-12 17:43:56.954 [INFO][6260] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" after=27.961715ms iface="eth0" netns="/var/run/netns/cni-483825ef-0d34-eddd-59a5-53e0f6f60414" Sep 12 17:43:57.504383 containerd[1909]: 2025-09-12 17:43:56.954 [INFO][6260] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" Sep 12 17:43:57.504383 containerd[1909]: 2025-09-12 17:43:56.954 [INFO][6260] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" Sep 12 17:43:57.504383 containerd[1909]: 2025-09-12 17:43:57.362 [INFO][6271] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" HandleID="k8s-pod-network.8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" Workload="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--l6lrq-eth0" Sep 12 17:43:57.504383 containerd[1909]: 2025-09-12 17:43:57.368 [INFO][6271] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:57.504383 containerd[1909]: 2025-09-12 17:43:57.368 [INFO][6271] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:43:57.504383 containerd[1909]: 2025-09-12 17:43:57.484 [INFO][6271] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" HandleID="k8s-pod-network.8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" Workload="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--l6lrq-eth0" Sep 12 17:43:57.504383 containerd[1909]: 2025-09-12 17:43:57.484 [INFO][6271] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" HandleID="k8s-pod-network.8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" Workload="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--l6lrq-eth0" Sep 12 17:43:57.504383 containerd[1909]: 2025-09-12 17:43:57.487 [INFO][6271] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
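Every Calico line that containerd relays has the same fixed shape: timestamp, [LEVEL][pid], source file and line number, then the message. When tracing a teardown like this one across thousands of journal lines, a single regular expression recovers the fields:

package main

import (
	"fmt"
	"regexp"
)

// One Calico CNI record as relayed by containerd:
// "<date> <time> [LEVEL][pid] path/file.go line: message"
var calicoLine = regexp.MustCompile(`^(\S+ \S+) \[(\w+)\]\[(\d+)\] (\S+) (\d+): (.*)$`)

func main() {
	line := "2025-09-12 17:43:57.487 [INFO][6271] ipam/ipam_plugin.go 374: Released host-wide IPAM lock."
	if m := calicoLine.FindStringSubmatch(line); m != nil {
		fmt.Printf("ts=%s level=%s pid=%s src=%s:%s msg=%q\n", m[1], m[2], m[3], m[4], m[5], m[6])
	}
}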
Sep 12 17:43:57.504383 containerd[1909]: 2025-09-12 17:43:57.492 [INFO][6260] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb"
Sep 12 17:43:57.509675 containerd[1909]: time="2025-09-12T17:43:57.509463118Z" level=info msg="TearDown network for sandbox \"8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb\" successfully"
Sep 12 17:43:57.509675 containerd[1909]: time="2025-09-12T17:43:57.509500547Z" level=info msg="StopPodSandbox for \"8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb\" returns successfully"
Sep 12 17:43:57.516306 systemd[1]: run-netns-cni\x2d483825ef\x2d0d34\x2deddd\x2d59a5\x2d53e0f6f60414.mount: Deactivated successfully.
Sep 12 17:43:57.840554 kubelet[3325]: I0912 17:43:57.839894 3325 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7b7h\" (UniqueName: \"kubernetes.io/projected/14280669-80b2-47e3-8948-1a09fa3ad5a8-kube-api-access-d7b7h\") pod \"14280669-80b2-47e3-8948-1a09fa3ad5a8\" (UID: \"14280669-80b2-47e3-8948-1a09fa3ad5a8\") "
Sep 12 17:43:57.840554 kubelet[3325]: I0912 17:43:57.840044 3325 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/14280669-80b2-47e3-8948-1a09fa3ad5a8-calico-apiserver-certs\") pod \"14280669-80b2-47e3-8948-1a09fa3ad5a8\" (UID: \"14280669-80b2-47e3-8948-1a09fa3ad5a8\") "
Sep 12 17:43:57.908610 systemd[1]: var-lib-kubelet-pods-14280669\x2d80b2\x2d47e3\x2d8948\x2d1a09fa3ad5a8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd7b7h.mount: Deactivated successfully.
Sep 12 17:43:57.914548 systemd[1]: var-lib-kubelet-pods-14280669\x2d80b2\x2d47e3\x2d8948\x2d1a09fa3ad5a8-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
Sep 12 17:43:57.917312 kubelet[3325]: I0912 17:43:57.912678 3325 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14280669-80b2-47e3-8948-1a09fa3ad5a8-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "14280669-80b2-47e3-8948-1a09fa3ad5a8" (UID: "14280669-80b2-47e3-8948-1a09fa3ad5a8"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 12 17:43:57.917413 kubelet[3325]: I0912 17:43:57.912717 3325 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14280669-80b2-47e3-8948-1a09fa3ad5a8-kube-api-access-d7b7h" (OuterVolumeSpecName: "kube-api-access-d7b7h") pod "14280669-80b2-47e3-8948-1a09fa3ad5a8" (UID: "14280669-80b2-47e3-8948-1a09fa3ad5a8"). InnerVolumeSpecName "kube-api-access-d7b7h". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 12 17:43:57.949254 kubelet[3325]: I0912 17:43:57.948650 3325 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/14280669-80b2-47e3-8948-1a09fa3ad5a8-calico-apiserver-certs\") on node \"ip-172-31-19-53\" DevicePath \"\""
Sep 12 17:43:57.949254 kubelet[3325]: I0912 17:43:57.948700 3325 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7b7h\" (UniqueName: \"kubernetes.io/projected/14280669-80b2-47e3-8948-1a09fa3ad5a8-kube-api-access-d7b7h\") on node \"ip-172-31-19-53\" DevicePath \"\""
Sep 12 17:43:58.046974 systemd[1]: Removed slice kubepods-besteffort-pod14280669_80b2_47e3_8948_1a09fa3ad5a8.slice - libcontainer container kubepods-besteffort-pod14280669_80b2_47e3_8948_1a09fa3ad5a8.slice.
Sep 12 17:43:58.047157 systemd[1]: kubepods-besteffort-pod14280669_80b2_47e3_8948_1a09fa3ad5a8.slice: Consumed 144ms CPU time, 17.5M memory peak, 8.5M read from disk.
Sep 12 17:43:58.339978 kubelet[3325]: I0912 17:43:58.333217 3325 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14280669-80b2-47e3-8948-1a09fa3ad5a8" path="/var/lib/kubelet/pods/14280669-80b2-47e3-8948-1a09fa3ad5a8/volumes"
Sep 12 17:43:59.211082 ntpd[1851]: Deleting interface #16 cali60a75a350a8, fe80::ecee:eeff:feee:eeee%13#123, interface stats: received=0, sent=0, dropped=0, active_time=18 secs
Sep 12 17:43:59.214494 ntpd[1851]: 12 Sep 17:43:59 ntpd[1851]: Deleting interface #16 cali60a75a350a8, fe80::ecee:eeff:feee:eeee%13#123, interface stats: received=0, sent=0, dropped=0, active_time=18 secs
Sep 12 17:43:59.587867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1791531604.mount: Deactivated successfully.
Sep 12 17:44:00.462215 systemd[1]: Started sshd@14-172.31.19.53:22-139.178.68.195:39894.service - OpenSSH per-connection server daemon (139.178.68.195:39894).
Sep 12 17:44:00.776937 sshd[6299]: Accepted publickey for core from 139.178.68.195 port 39894 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4
Sep 12 17:44:00.780502 sshd-session[6299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:44:00.791133 systemd-logind[1861]: New session 15 of user core.
Sep 12 17:44:00.796329 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 12 17:44:01.324904 containerd[1909]: time="2025-09-12T17:44:01.323415097Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526"
Sep 12 17:44:01.533698 containerd[1909]: time="2025-09-12T17:44:01.533593718Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:44:01.713888 containerd[1909]: time="2025-09-12T17:44:01.713642151Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:44:02.088423 containerd[1909]: time="2025-09-12T17:44:02.087153026Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:44:02.088423 containerd[1909]: time="2025-09-12T17:44:02.087797657Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 7.271994265s"
Sep 12 17:44:02.088423 containerd[1909]: time="2025-09-12T17:44:02.087836605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\""
Sep 12 17:44:02.333027 containerd[1909]: time="2025-09-12T17:44:02.332947505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\""
Sep 12 17:44:02.703589 containerd[1909]: time="2025-09-12T17:44:02.703231623Z" level=info msg="CreateContainer within sandbox \"2c3080d1f67fdd8e0aef30db29a833a29d4fcc3ade41b4dcbd45eb68d71a861c\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 12 17:44:02.860577 containerd[1909]: time="2025-09-12T17:44:02.850528061Z" level=info msg="Container 88e79e114a58fade52109eda1a7a69d9e2a5b95ea43f8736e2a13c79b3bb392e: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:44:03.289797 containerd[1909]: time="2025-09-12T17:44:03.288815101Z" level=info msg="CreateContainer within sandbox \"2c3080d1f67fdd8e0aef30db29a833a29d4fcc3ade41b4dcbd45eb68d71a861c\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"88e79e114a58fade52109eda1a7a69d9e2a5b95ea43f8736e2a13c79b3bb392e\""
Sep 12 17:44:03.316904 containerd[1909]: time="2025-09-12T17:44:03.316861338Z" level=info msg="StartContainer for \"88e79e114a58fade52109eda1a7a69d9e2a5b95ea43f8736e2a13c79b3bb392e\""
Sep 12 17:44:03.341716 containerd[1909]: time="2025-09-12T17:44:03.341550127Z" level=info msg="connecting to shim 88e79e114a58fade52109eda1a7a69d9e2a5b95ea43f8736e2a13c79b3bb392e" address="unix:///run/containerd/s/bf48507f76c4f57f9cdbb534142ebdef7d1d1df65f8a6a6509dfa156c86b7d11" protocol=ttrpc version=3
Sep 12 17:44:03.980618 systemd[1]: Started cri-containerd-88e79e114a58fade52109eda1a7a69d9e2a5b95ea43f8736e2a13c79b3bb392e.scope - libcontainer container 88e79e114a58fade52109eda1a7a69d9e2a5b95ea43f8736e2a13c79b3bb392e.
Sep 12 17:44:04.230314 containerd[1909]: time="2025-09-12T17:44:04.230274590Z" level=info msg="StartContainer for \"88e79e114a58fade52109eda1a7a69d9e2a5b95ea43f8736e2a13c79b3bb392e\" returns successfully"
Sep 12 17:44:04.294252 sshd[6302]: Connection closed by 139.178.68.195 port 39894
Sep 12 17:44:04.294735 sshd-session[6299]: pam_unix(sshd:session): session closed for user core
Sep 12 17:44:04.333178 systemd[1]: sshd@14-172.31.19.53:22-139.178.68.195:39894.service: Deactivated successfully.
Sep 12 17:44:04.358325 systemd-logind[1861]: Session 15 logged out. Waiting for processes to exit.
Sep 12 17:44:04.366826 systemd[1]: session-15.scope: Deactivated successfully.
Sep 12 17:44:04.380751 systemd-logind[1861]: Removed session 15.
Sep 12 17:44:05.271203 containerd[1909]: time="2025-09-12T17:44:05.271064944Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:44:05.273107 containerd[1909]: time="2025-09-12T17:44:05.272826941Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527"
Sep 12 17:44:05.278874 containerd[1909]: time="2025-09-12T17:44:05.277843195Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:44:05.282730 containerd[1909]: time="2025-09-12T17:44:05.281686612Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:44:05.282991 containerd[1909]: time="2025-09-12T17:44:05.282910297Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.949919526s"
Sep 12 17:44:05.283081 containerd[1909]: time="2025-09-12T17:44:05.282987956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\""
Sep 12 17:44:05.297674 containerd[1909]: time="2025-09-12T17:44:05.297632927Z" level=info msg="CreateContainer within sandbox \"1a25a66232992b9bd46f8fdc824523dc4e68fa6edbd4e607085053c24c38d7de\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 12 17:44:05.362270 containerd[1909]: time="2025-09-12T17:44:05.362223041Z" level=info msg="Container 10879a5f1ab35e178cec368523ab188d9b0dc79589a737f36016d4aa9f4aca6f: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:44:05.372625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount211350487.mount: Deactivated successfully.
Sep 12 17:44:05.380738 containerd[1909]: time="2025-09-12T17:44:05.380681435Z" level=info msg="CreateContainer within sandbox \"1a25a66232992b9bd46f8fdc824523dc4e68fa6edbd4e607085053c24c38d7de\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"10879a5f1ab35e178cec368523ab188d9b0dc79589a737f36016d4aa9f4aca6f\""
Sep 12 17:44:05.382731 containerd[1909]: time="2025-09-12T17:44:05.381862647Z" level=info msg="StartContainer for \"10879a5f1ab35e178cec368523ab188d9b0dc79589a737f36016d4aa9f4aca6f\""
Sep 12 17:44:05.383713 containerd[1909]: time="2025-09-12T17:44:05.383673638Z" level=info msg="connecting to shim 10879a5f1ab35e178cec368523ab188d9b0dc79589a737f36016d4aa9f4aca6f" address="unix:///run/containerd/s/5fe7be6f7a4e24ec59db3f9dbd049867452294dcb6bf9ff4487c9b2c1de36004" protocol=ttrpc version=3
Sep 12 17:44:05.417796 systemd[1]: Started cri-containerd-10879a5f1ab35e178cec368523ab188d9b0dc79589a737f36016d4aa9f4aca6f.scope - libcontainer container 10879a5f1ab35e178cec368523ab188d9b0dc79589a737f36016d4aa9f4aca6f.
Sep 12 17:44:05.488180 containerd[1909]: time="2025-09-12T17:44:05.488127028Z" level=info msg="StartContainer for \"10879a5f1ab35e178cec368523ab188d9b0dc79589a737f36016d4aa9f4aca6f\" returns successfully"
Sep 12 17:44:05.491544 containerd[1909]: time="2025-09-12T17:44:05.491476833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 12 17:44:05.717309 kubelet[3325]: I0912 17:44:05.704467 3325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-mp5hl" podStartSLOduration=33.317347758 podStartE2EDuration="57.69974263s" podCreationTimestamp="2025-09-12 17:43:08 +0000 UTC" firstStartedPulling="2025-09-12 17:43:37.758578275 +0000 UTC m=+49.596770583" lastFinishedPulling="2025-09-12 17:44:02.140973125 +0000 UTC m=+73.979165455" observedRunningTime="2025-09-12 17:44:05.688663153 +0000 UTC m=+77.526855468" watchObservedRunningTime="2025-09-12 17:44:05.69974263 +0000 UTC m=+77.537934950"
Sep 12 17:44:07.344771 containerd[1909]: time="2025-09-12T17:44:07.344642508Z" level=info msg="TaskExit event in podsandbox handler container_id:\"88e79e114a58fade52109eda1a7a69d9e2a5b95ea43f8736e2a13c79b3bb392e\" id:\"140d9d626f56ea4d79d4d81f63a83b42ceb55c16b6125f8bf5c4c330a621638b\" pid:6420 exit_status:1 exited_at:{seconds:1757699047 nanos:311867270}"
Sep 12 17:44:07.368824 containerd[1909]: time="2025-09-12T17:44:07.368428366Z" level=info msg="TaskExit event in podsandbox handler container_id:\"92f49f703cd3ae55bdf0834b2488d9c6a8aeca8a5939a5da6c64280444efd3d0\" id:\"7f0649ee6b36e2548de64ed6190b1c28b706913252552089fd7699f17c0ca32b\" pid:6402 exited_at:{seconds:1757699047 nanos:368115558}"
Sep 12 17:44:07.762780 containerd[1909]: time="2025-09-12T17:44:07.761493527Z" level=info msg="TaskExit event in podsandbox handler container_id:\"88e79e114a58fade52109eda1a7a69d9e2a5b95ea43f8736e2a13c79b3bb392e\" id:\"7bed3c1f29dcb97142d9bb7c9d10efdcdd64ce76c08cb2da5315c7ac99000b97\" pid:6455 exit_status:1 exited_at:{seconds:1757699047 nanos:759302185}"
Sep 12 17:44:08.066890 containerd[1909]: time="2025-09-12T17:44:08.066762497Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:44:08.069106 containerd[1909]: time="2025-09-12T17:44:08.069017088Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 12 17:44:08.070904 containerd[1909]: time="2025-09-12T17:44:08.070761607Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:44:08.076678 containerd[1909]: time="2025-09-12T17:44:08.076556606Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:44:08.078310 containerd[1909]: time="2025-09-12T17:44:08.077631635Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.586084077s"
Sep 12 17:44:08.078310 containerd[1909]: time="2025-09-12T17:44:08.077676475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 12 17:44:08.082842 containerd[1909]: time="2025-09-12T17:44:08.082675507Z" level=info msg="CreateContainer within sandbox \"1a25a66232992b9bd46f8fdc824523dc4e68fa6edbd4e607085053c24c38d7de\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 12 17:44:08.112973 containerd[1909]: time="2025-09-12T17:44:08.105744821Z" level=info msg="Container 4fcb3ce394559744232ea77b23f439364da75044c06042162952f972eafd7ecc: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:44:08.127295 containerd[1909]: time="2025-09-12T17:44:08.127250483Z" level=info msg="CreateContainer within sandbox \"1a25a66232992b9bd46f8fdc824523dc4e68fa6edbd4e607085053c24c38d7de\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4fcb3ce394559744232ea77b23f439364da75044c06042162952f972eafd7ecc\""
Sep 12 17:44:08.129261 containerd[1909]: time="2025-09-12T17:44:08.129209205Z" level=info msg="StartContainer for \"4fcb3ce394559744232ea77b23f439364da75044c06042162952f972eafd7ecc\""
Sep 12 17:44:08.131963 containerd[1909]: time="2025-09-12T17:44:08.131845334Z" level=info msg="connecting to shim 4fcb3ce394559744232ea77b23f439364da75044c06042162952f972eafd7ecc" address="unix:///run/containerd/s/5fe7be6f7a4e24ec59db3f9dbd049867452294dcb6bf9ff4487c9b2c1de36004" protocol=ttrpc version=3
Sep 12 17:44:08.168175 systemd[1]: Started cri-containerd-4fcb3ce394559744232ea77b23f439364da75044c06042162952f972eafd7ecc.scope - libcontainer container 4fcb3ce394559744232ea77b23f439364da75044c06042162952f972eafd7ecc.
Sep 12 17:44:08.294890 containerd[1909]: time="2025-09-12T17:44:08.294720697Z" level=info msg="StartContainer for \"4fcb3ce394559744232ea77b23f439364da75044c06042162952f972eafd7ecc\" returns successfully"
Sep 12 17:44:08.787389 kubelet[3325]: I0912 17:44:08.787325 3325 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 12 17:44:08.787389 kubelet[3325]: I0912 17:44:08.787386 3325 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 12 17:44:09.338241 systemd[1]: Started sshd@15-172.31.19.53:22-139.178.68.195:39904.service - OpenSSH per-connection server daemon (139.178.68.195:39904).
Sep 12 17:44:09.673039 sshd[6499]: Accepted publickey for core from 139.178.68.195 port 39904 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4
Sep 12 17:44:09.675251 sshd-session[6499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:44:09.682476 systemd-logind[1861]: New session 16 of user core.
Sep 12 17:44:09.690237 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 12 17:44:11.365005 kubelet[3325]: I0912 17:44:11.363724 3325 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 17:44:12.224888 sshd[6502]: Connection closed by 139.178.68.195 port 39904
Sep 12 17:44:12.230224 sshd-session[6499]: pam_unix(sshd:session): session closed for user core
Sep 12 17:44:12.255061 systemd[1]: sshd@15-172.31.19.53:22-139.178.68.195:39904.service: Deactivated successfully.
Sep 12 17:44:12.255526 systemd-logind[1861]: Session 16 logged out. Waiting for processes to exit.
Sep 12 17:44:12.263413 systemd[1]: session-16.scope: Deactivated successfully.
Sep 12 17:44:12.285844 systemd[1]: Started sshd@16-172.31.19.53:22-139.178.68.195:40794.service - OpenSSH per-connection server daemon (139.178.68.195:40794).
Sep 12 17:44:12.289886 systemd-logind[1861]: Removed session 16.
Sep 12 17:44:12.628819 sshd[6519]: Accepted publickey for core from 139.178.68.195 port 40794 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4
Sep 12 17:44:12.632862 sshd-session[6519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:44:12.635245 kubelet[3325]: I0912 17:44:12.615490 3325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-ghf2w" podStartSLOduration=33.538648138 podStartE2EDuration="1m3.529038779s" podCreationTimestamp="2025-09-12 17:43:09 +0000 UTC" firstStartedPulling="2025-09-12 17:43:38.089113241 +0000 UTC m=+49.927305554" lastFinishedPulling="2025-09-12 17:44:08.079503883 +0000 UTC m=+79.917696195" observedRunningTime="2025-09-12 17:44:08.506899126 +0000 UTC m=+80.345091447" watchObservedRunningTime="2025-09-12 17:44:12.529038779 +0000 UTC m=+84.367231098"
Sep 12 17:44:12.656308 systemd-logind[1861]: New session 17 of user core.
Sep 12 17:44:12.660100 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 12 17:44:13.464709 sshd[6522]: Connection closed by 139.178.68.195 port 40794
Sep 12 17:44:13.466858 sshd-session[6519]: pam_unix(sshd:session): session closed for user core
Sep 12 17:44:13.471244 systemd-logind[1861]: Session 17 logged out. Waiting for processes to exit.
Sep 12 17:44:13.472524 systemd[1]: sshd@16-172.31.19.53:22-139.178.68.195:40794.service: Deactivated successfully.
Sep 12 17:44:13.475861 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 17:44:13.479244 systemd-logind[1861]: Removed session 17.
Sep 12 17:44:13.498380 systemd[1]: Started sshd@17-172.31.19.53:22-139.178.68.195:40806.service - OpenSSH per-connection server daemon (139.178.68.195:40806).
Sep 12 17:44:13.702197 sshd[6535]: Accepted publickey for core from 139.178.68.195 port 40806 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4
Sep 12 17:44:13.703848 sshd-session[6535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:44:13.709963 systemd-logind[1861]: New session 18 of user core.
Sep 12 17:44:13.718574 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 12 17:44:14.608547 sshd[6538]: Connection closed by 139.178.68.195 port 40806
Sep 12 17:44:14.622584 sshd-session[6535]: pam_unix(sshd:session): session closed for user core
Sep 12 17:44:14.638410 systemd[1]: sshd@17-172.31.19.53:22-139.178.68.195:40806.service: Deactivated successfully.
Sep 12 17:44:14.641057 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 17:44:14.642528 systemd-logind[1861]: Session 18 logged out. Waiting for processes to exit.
Sep 12 17:44:14.647606 systemd[1]: Started sshd@18-172.31.19.53:22-139.178.68.195:40814.service - OpenSSH per-connection server daemon (139.178.68.195:40814).
Sep 12 17:44:14.650137 systemd-logind[1861]: Removed session 18.
Sep 12 17:44:14.904469 sshd[6566]: Accepted publickey for core from 139.178.68.195 port 40814 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4
Sep 12 17:44:14.905427 sshd-session[6566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:44:14.912587 systemd-logind[1861]: New session 19 of user core.
Sep 12 17:44:14.920156 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 17:44:16.313313 sshd[6569]: Connection closed by 139.178.68.195 port 40814
Sep 12 17:44:16.316471 sshd-session[6566]: pam_unix(sshd:session): session closed for user core
Sep 12 17:44:16.332994 systemd[1]: sshd@18-172.31.19.53:22-139.178.68.195:40814.service: Deactivated successfully.
Sep 12 17:44:16.333407 systemd-logind[1861]: Session 19 logged out. Waiting for processes to exit.
Sep 12 17:44:16.339290 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 17:44:16.359043 systemd[1]: Started sshd@19-172.31.19.53:22-139.178.68.195:40818.service - OpenSSH per-connection server daemon (139.178.68.195:40818).
Sep 12 17:44:16.362455 systemd-logind[1861]: Removed session 19.
Sep 12 17:44:16.571665 sshd[6579]: Accepted publickey for core from 139.178.68.195 port 40818 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4
Sep 12 17:44:16.574114 sshd-session[6579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:44:16.596569 systemd-logind[1861]: New session 20 of user core.
Sep 12 17:44:16.609168 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 17:44:16.914289 sshd[6582]: Connection closed by 139.178.68.195 port 40818
Sep 12 17:44:16.916251 sshd-session[6579]: pam_unix(sshd:session): session closed for user core
Sep 12 17:44:16.924052 systemd-logind[1861]: Session 20 logged out. Waiting for processes to exit.
Sep 12 17:44:16.925347 systemd[1]: sshd@19-172.31.19.53:22-139.178.68.195:40818.service: Deactivated successfully.
Sep 12 17:44:16.929897 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 17:44:16.933308 systemd-logind[1861]: Removed session 20.
Sep 12 17:44:21.609795 containerd[1909]: time="2025-09-12T17:44:21.609739061Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eb3eb6c17a00998b617eb22408f7002d1f50e787222e96b10be971a91efb212d\" id:\"6afb37d09a6e6f99210e58738d53e1d10433a6a91b16512ad511979ba750cca8\" pid:6610 exited_at:{seconds:1757699061 nanos:535138819}"
Sep 12 17:44:21.950643 systemd[1]: Started sshd@20-172.31.19.53:22-139.178.68.195:56046.service - OpenSSH per-connection server daemon (139.178.68.195:56046).
Sep 12 17:44:22.211679 sshd[6620]: Accepted publickey for core from 139.178.68.195 port 56046 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4
Sep 12 17:44:22.214174 sshd-session[6620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:44:22.222999 systemd-logind[1861]: New session 21 of user core.
Sep 12 17:44:22.232169 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 17:44:23.241208 sshd[6631]: Connection closed by 139.178.68.195 port 56046
Sep 12 17:44:23.242192 sshd-session[6620]: pam_unix(sshd:session): session closed for user core
Sep 12 17:44:23.253331 systemd[1]: sshd@20-172.31.19.53:22-139.178.68.195:56046.service: Deactivated successfully.
Sep 12 17:44:23.264252 systemd[1]: session-21.scope: Deactivated successfully.
Sep 12 17:44:23.273362 systemd-logind[1861]: Session 21 logged out. Waiting for processes to exit.
Sep 12 17:44:23.279166 systemd-logind[1861]: Removed session 21.
Sep 12 17:44:28.272378 systemd[1]: Started sshd@21-172.31.19.53:22-139.178.68.195:56048.service - OpenSSH per-connection server daemon (139.178.68.195:56048).
Sep 12 17:44:28.483792 sshd[6644]: Accepted publickey for core from 139.178.68.195 port 56048 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4
Sep 12 17:44:28.485261 sshd-session[6644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:44:28.493608 systemd-logind[1861]: New session 22 of user core.
Sep 12 17:44:28.499161 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 12 17:44:28.793564 sshd[6647]: Connection closed by 139.178.68.195 port 56048
Sep 12 17:44:28.797507 sshd-session[6644]: pam_unix(sshd:session): session closed for user core
Sep 12 17:44:28.807840 systemd[1]: sshd@21-172.31.19.53:22-139.178.68.195:56048.service: Deactivated successfully.
Sep 12 17:44:28.810827 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 17:44:28.812991 systemd-logind[1861]: Session 22 logged out. Waiting for processes to exit.
Sep 12 17:44:28.814523 systemd-logind[1861]: Removed session 22.
Sep 12 17:44:33.829673 systemd[1]: Started sshd@22-172.31.19.53:22-139.178.68.195:38266.service - OpenSSH per-connection server daemon (139.178.68.195:38266).
Sep 12 17:44:34.179026 sshd[6659]: Accepted publickey for core from 139.178.68.195 port 38266 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4
Sep 12 17:44:34.180078 sshd-session[6659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:44:34.187154 systemd-logind[1861]: New session 23 of user core.
Sep 12 17:44:34.197599 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 12 17:44:35.492193 sshd[6662]: Connection closed by 139.178.68.195 port 38266
Sep 12 17:44:35.493179 sshd-session[6659]: pam_unix(sshd:session): session closed for user core
Sep 12 17:44:35.502030 systemd-logind[1861]: Session 23 logged out. Waiting for processes to exit.
Sep 12 17:44:35.503332 systemd[1]: sshd@22-172.31.19.53:22-139.178.68.195:38266.service: Deactivated successfully.
Sep 12 17:44:35.507074 systemd[1]: session-23.scope: Deactivated successfully.
Sep 12 17:44:35.514970 systemd-logind[1861]: Removed session 23.
Sep 12 17:44:37.484442 containerd[1909]: time="2025-09-12T17:44:37.476099728Z" level=info msg="TaskExit event in podsandbox handler container_id:\"92f49f703cd3ae55bdf0834b2488d9c6a8aeca8a5939a5da6c64280444efd3d0\" id:\"037c485a519bb7eeffa9bbecb3c0edb278607d595b4def1c6bbc270eb0e3aa37\" pid:6685 exited_at:{seconds:1757699077 nanos:440124244}"
Sep 12 17:44:38.375084 containerd[1909]: time="2025-09-12T17:44:38.375034178Z" level=info msg="TaskExit event in podsandbox handler container_id:\"88e79e114a58fade52109eda1a7a69d9e2a5b95ea43f8736e2a13c79b3bb392e\" id:\"9fbfeee65d31171fd4e680ec2755da355de2e2e58b3876e4c867048184bb8a66\" pid:6712 exited_at:{seconds:1757699078 nanos:374390747}"
Sep 12 17:44:39.102710 kubelet[3325]: I0912 17:44:39.102663 3325 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 17:44:39.533722 containerd[1909]: time="2025-09-12T17:44:39.533443560Z" level=info msg="StopContainer for \"09982faceabffff2f52aaba5ff6e898679197bece9b729b5690031a958ee5967\" with timeout 30 (s)"
Sep 12 17:44:39.650515 containerd[1909]: time="2025-09-12T17:44:39.650397109Z" level=info msg="Stop container \"09982faceabffff2f52aaba5ff6e898679197bece9b729b5690031a958ee5967\" with signal terminated"
Sep 12 17:44:39.800138 systemd[1]: cri-containerd-09982faceabffff2f52aaba5ff6e898679197bece9b729b5690031a958ee5967.scope: Deactivated successfully.
Sep 12 17:44:39.802118 systemd[1]: cri-containerd-09982faceabffff2f52aaba5ff6e898679197bece9b729b5690031a958ee5967.scope: Consumed 1.680s CPU time, 58.8M memory peak, 12.5M read from disk.
Sep 12 17:44:39.840223 containerd[1909]: time="2025-09-12T17:44:39.840180848Z" level=info msg="TaskExit event in podsandbox handler container_id:\"09982faceabffff2f52aaba5ff6e898679197bece9b729b5690031a958ee5967\" id:\"09982faceabffff2f52aaba5ff6e898679197bece9b729b5690031a958ee5967\" pid:5755 exit_status:1 exited_at:{seconds:1757699079 nanos:837726659}"
Sep 12 17:44:39.842411 containerd[1909]: time="2025-09-12T17:44:39.842370228Z" level=info msg="received exit event container_id:\"09982faceabffff2f52aaba5ff6e898679197bece9b729b5690031a958ee5967\" id:\"09982faceabffff2f52aaba5ff6e898679197bece9b729b5690031a958ee5967\" pid:5755 exit_status:1 exited_at:{seconds:1757699079 nanos:837726659}"
Sep 12 17:44:40.041259 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09982faceabffff2f52aaba5ff6e898679197bece9b729b5690031a958ee5967-rootfs.mount: Deactivated successfully.
Sep 12 17:44:40.213156 containerd[1909]: time="2025-09-12T17:44:40.212035711Z" level=info msg="StopContainer for \"09982faceabffff2f52aaba5ff6e898679197bece9b729b5690031a958ee5967\" returns successfully"
Sep 12 17:44:40.287639 containerd[1909]: time="2025-09-12T17:44:40.287558424Z" level=info msg="StopPodSandbox for \"1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c\""
Sep 12 17:44:40.296068 containerd[1909]: time="2025-09-12T17:44:40.295993492Z" level=info msg="Container to stop \"09982faceabffff2f52aaba5ff6e898679197bece9b729b5690031a958ee5967\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 17:44:40.368217 systemd[1]: cri-containerd-1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c.scope: Deactivated successfully.
Sep 12 17:44:40.431547 containerd[1909]: time="2025-09-12T17:44:40.431251264Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c\" id:\"1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c\" pid:4999 exit_status:137 exited_at:{seconds:1757699080 nanos:370038723}"
Sep 12 17:44:40.470299 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c-rootfs.mount: Deactivated successfully.
Sep 12 17:44:40.481441 containerd[1909]: time="2025-09-12T17:44:40.481397193Z" level=info msg="shim disconnected" id=1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c namespace=k8s.io
Sep 12 17:44:40.481441 containerd[1909]: time="2025-09-12T17:44:40.481441475Z" level=warning msg="cleaning up after shim disconnected" id=1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c namespace=k8s.io
Sep 12 17:44:40.526237 containerd[1909]: time="2025-09-12T17:44:40.481452714Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:44:40.531265 containerd[1909]: time="2025-09-12T17:44:40.481667552Z" level=error msg="Failed to handle event container_id:\"1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c\" id:\"1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c\" pid:4999 exit_status:137 exited_at:{seconds:1757699080 nanos:370038723} for 1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" error="failed to handle container TaskExit event: failed to stop sandbox: ttrpc: closed"
Sep 12 17:44:40.531444 systemd[1]: Started sshd@23-172.31.19.53:22-139.178.68.195:54674.service - OpenSSH per-connection server daemon (139.178.68.195:54674).
Sep 12 17:44:40.885037 containerd[1909]: time="2025-09-12T17:44:40.878884078Z" level=info msg="received exit event sandbox_id:\"1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c\" exit_status:137 exited_at:{seconds:1757699080 nanos:370038723}"
Sep 12 17:44:40.890821 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c-shm.mount: Deactivated successfully.
Sep 12 17:44:40.971063 sshd[6769]: Accepted publickey for core from 139.178.68.195 port 54674 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4
Sep 12 17:44:40.978673 sshd-session[6769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:44:40.988653 systemd-logind[1861]: New session 24 of user core.
Sep 12 17:44:40.995305 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 12 17:44:41.684177 systemd-networkd[1813]: cali3b2fe3b4e85: Link DOWN
Sep 12 17:44:41.684856 systemd-networkd[1813]: cali3b2fe3b4e85: Lost carrier
Sep 12 17:44:41.817498 containerd[1909]: time="2025-09-12T17:44:41.817442150Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c\" id:\"1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c\" pid:4999 exit_status:137 exited_at:{seconds:1757699080 nanos:370038723}"
Sep 12 17:44:41.885190 kubelet[3325]: I0912 17:44:41.876751 3325 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c"
Sep 12 17:44:42.284173 containerd[1909]: 2025-09-12 17:44:41.656 [INFO][6797] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c"
Sep 12 17:44:42.284173 containerd[1909]: 2025-09-12 17:44:41.659 [INFO][6797] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" iface="eth0" netns="/var/run/netns/cni-cf079c54-565c-207b-15b0-8f04d10c5c71"
Sep 12 17:44:42.284173 containerd[1909]: 2025-09-12 17:44:41.661 [INFO][6797] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" iface="eth0" netns="/var/run/netns/cni-cf079c54-565c-207b-15b0-8f04d10c5c71"
Sep 12 17:44:42.284173 containerd[1909]: 2025-09-12 17:44:41.688 [INFO][6797] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" after=28.894344ms iface="eth0" netns="/var/run/netns/cni-cf079c54-565c-207b-15b0-8f04d10c5c71"
Sep 12 17:44:42.284173 containerd[1909]: 2025-09-12 17:44:41.688 [INFO][6797] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c"
Sep 12 17:44:42.284173 containerd[1909]: 2025-09-12 17:44:41.689 [INFO][6797] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c"
Sep 12 17:44:42.284173 containerd[1909]: 2025-09-12 17:44:42.163 [INFO][6811] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" HandleID="k8s-pod-network.1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" Workload="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--2slhn-eth0"
Sep 12 17:44:42.284173 containerd[1909]: 2025-09-12 17:44:42.171 [INFO][6811] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:44:42.284173 containerd[1909]: 2025-09-12 17:44:42.180 [INFO][6811] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:44:42.284173 containerd[1909]: 2025-09-12 17:44:42.268 [INFO][6811] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" HandleID="k8s-pod-network.1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" Workload="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--2slhn-eth0"
Sep 12 17:44:42.284173 containerd[1909]: 2025-09-12 17:44:42.268 [INFO][6811] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" HandleID="k8s-pod-network.1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" Workload="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--2slhn-eth0"
Sep 12 17:44:42.284173 containerd[1909]: 2025-09-12 17:44:42.271 [INFO][6811] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:44:42.284173 containerd[1909]: 2025-09-12 17:44:42.277 [INFO][6797] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c"
Sep 12 17:44:42.312842 systemd[1]: run-netns-cni\x2dcf079c54\x2d565c\x2d207b\x2d15b0\x2d8f04d10c5c71.mount: Deactivated successfully.
Sep 12 17:44:42.317532 containerd[1909]: time="2025-09-12T17:44:42.314639738Z" level=info msg="TearDown network for sandbox \"1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c\" successfully"
Sep 12 17:44:42.319270 containerd[1909]: time="2025-09-12T17:44:42.319217549Z" level=info msg="StopPodSandbox for \"1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c\" returns successfully"
Sep 12 17:44:42.680503 sshd[6796]: Connection closed by 139.178.68.195 port 54674
Sep 12 17:44:42.680085 sshd-session[6769]: pam_unix(sshd:session): session closed for user core
Sep 12 17:44:42.688740 systemd[1]: sshd@23-172.31.19.53:22-139.178.68.195:54674.service: Deactivated successfully.
Sep 12 17:44:42.693845 systemd[1]: session-24.scope: Deactivated successfully.
Sep 12 17:44:42.699965 kubelet[3325]: I0912 17:44:42.699796 3325 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1edbce95-ac87-4b0e-b3d0-642f106d2276-calico-apiserver-certs\") pod \"1edbce95-ac87-4b0e-b3d0-642f106d2276\" (UID: \"1edbce95-ac87-4b0e-b3d0-642f106d2276\") "
Sep 12 17:44:42.701974 systemd-logind[1861]: Session 24 logged out. Waiting for processes to exit.
Sep 12 17:44:42.705440 systemd-logind[1861]: Removed session 24.
Sep 12 17:44:42.715205 kubelet[3325]: I0912 17:44:42.699909 3325 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wc6h9\" (UniqueName: \"kubernetes.io/projected/1edbce95-ac87-4b0e-b3d0-642f106d2276-kube-api-access-wc6h9\") pod \"1edbce95-ac87-4b0e-b3d0-642f106d2276\" (UID: \"1edbce95-ac87-4b0e-b3d0-642f106d2276\") "
Sep 12 17:44:42.820364 systemd[1]: var-lib-kubelet-pods-1edbce95\x2dac87\x2d4b0e\x2db3d0\x2d642f106d2276-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwc6h9.mount: Deactivated successfully.
Sep 12 17:44:42.829743 systemd[1]: var-lib-kubelet-pods-1edbce95\x2dac87\x2d4b0e\x2db3d0\x2d642f106d2276-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
Sep 12 17:44:42.838657 kubelet[3325]: I0912 17:44:42.832417 3325 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1edbce95-ac87-4b0e-b3d0-642f106d2276-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "1edbce95-ac87-4b0e-b3d0-642f106d2276" (UID: "1edbce95-ac87-4b0e-b3d0-642f106d2276"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 12 17:44:42.838657 kubelet[3325]: I0912 17:44:42.829502 3325 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1edbce95-ac87-4b0e-b3d0-642f106d2276-kube-api-access-wc6h9" (OuterVolumeSpecName: "kube-api-access-wc6h9") pod "1edbce95-ac87-4b0e-b3d0-642f106d2276" (UID: "1edbce95-ac87-4b0e-b3d0-642f106d2276"). InnerVolumeSpecName "kube-api-access-wc6h9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 12 17:44:42.908127 systemd[1]: Removed slice kubepods-besteffort-pod1edbce95_ac87_4b0e_b3d0_642f106d2276.slice - libcontainer container kubepods-besteffort-pod1edbce95_ac87_4b0e_b3d0_642f106d2276.slice.
Sep 12 17:44:42.908675 systemd[1]: kubepods-besteffort-pod1edbce95_ac87_4b0e_b3d0_642f106d2276.slice: Consumed 1.722s CPU time, 59.1M memory peak, 12.5M read from disk.
Sep 12 17:44:42.926498 kubelet[3325]: I0912 17:44:42.926429 3325 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wc6h9\" (UniqueName: \"kubernetes.io/projected/1edbce95-ac87-4b0e-b3d0-642f106d2276-kube-api-access-wc6h9\") on node \"ip-172-31-19-53\" DevicePath \"\""
Sep 12 17:44:42.927871 kubelet[3325]: I0912 17:44:42.926515 3325 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1edbce95-ac87-4b0e-b3d0-642f106d2276-calico-apiserver-certs\") on node \"ip-172-31-19-53\" DevicePath \"\""
Sep 12 17:44:44.211129 ntpd[1851]: Deleting interface #11 cali3b2fe3b4e85, fe80::ecee:eeff:feee:eeee%8#123, interface stats: received=0, sent=0, dropped=0, active_time=63 secs
Sep 12 17:44:44.213256 ntpd[1851]: 12 Sep 17:44:44 ntpd[1851]: Deleting interface #11 cali3b2fe3b4e85, fe80::ecee:eeff:feee:eeee%8#123, interface stats: received=0, sent=0, dropped=0, active_time=63 secs
Sep 12 17:44:44.336559 kubelet[3325]: I0912 17:44:44.330002 3325 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1edbce95-ac87-4b0e-b3d0-642f106d2276" path="/var/lib/kubelet/pods/1edbce95-ac87-4b0e-b3d0-642f106d2276/volumes"
Sep 12 17:44:47.717454 systemd[1]: Started sshd@24-172.31.19.53:22-139.178.68.195:54680.service - OpenSSH per-connection server daemon (139.178.68.195:54680).
Sep 12 17:44:48.159636 sshd[6839]: Accepted publickey for core from 139.178.68.195 port 54680 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4
Sep 12 17:44:48.163979 sshd-session[6839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:44:48.184998 systemd-logind[1861]: New session 25 of user core.
Sep 12 17:44:48.188194 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 12 17:44:49.155881 kubelet[3325]: I0912 17:44:49.155824 3325 scope.go:117] "RemoveContainer" containerID="09982faceabffff2f52aaba5ff6e898679197bece9b729b5690031a958ee5967"
Sep 12 17:44:49.420982 containerd[1909]: time="2025-09-12T17:44:49.419580955Z" level=info msg="RemoveContainer for \"09982faceabffff2f52aaba5ff6e898679197bece9b729b5690031a958ee5967\""
Sep 12 17:44:49.519398 containerd[1909]: time="2025-09-12T17:44:49.519337690Z" level=info msg="RemoveContainer for \"09982faceabffff2f52aaba5ff6e898679197bece9b729b5690031a958ee5967\" returns successfully"
Sep 12 17:44:49.519827 kubelet[3325]: I0912 17:44:49.519797 3325 scope.go:117] "RemoveContainer" containerID="0604b034133dd5b7fa87f327b04ba99697fd975164fc1c62b7063c475303d6f1"
Sep 12 17:44:49.524069 containerd[1909]: time="2025-09-12T17:44:49.524034063Z" level=info msg="RemoveContainer for \"0604b034133dd5b7fa87f327b04ba99697fd975164fc1c62b7063c475303d6f1\""
Sep 12 17:44:49.535085 containerd[1909]: time="2025-09-12T17:44:49.535033655Z" level=info msg="RemoveContainer for \"0604b034133dd5b7fa87f327b04ba99697fd975164fc1c62b7063c475303d6f1\" returns successfully"
Sep 12 17:44:49.544752 containerd[1909]: time="2025-09-12T17:44:49.543524635Z" level=info msg="StopPodSandbox for \"1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c\""
Sep 12 17:44:49.956434 sshd[6842]: Connection closed by 139.178.68.195 port 54680
Sep 12 17:44:49.956075 sshd-session[6839]: pam_unix(sshd:session): session closed for user core
Sep 12 17:44:49.973405 systemd[1]: sshd@24-172.31.19.53:22-139.178.68.195:54680.service: Deactivated successfully.
Sep 12 17:44:49.977298 systemd[1]: session-25.scope: Deactivated successfully.
Sep 12 17:44:49.984020 systemd-logind[1861]: Session 25 logged out. Waiting for processes to exit.
Sep 12 17:44:49.991777 systemd-logind[1861]: Removed session 25.
Sep 12 17:44:50.443411 containerd[1909]: 2025-09-12 17:44:49.937 [WARNING][6859] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--2slhn-eth0"
Sep 12 17:44:50.443411 containerd[1909]: 2025-09-12 17:44:49.940 [INFO][6859] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c"
Sep 12 17:44:50.443411 containerd[1909]: 2025-09-12 17:44:49.940 [INFO][6859] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" iface="eth0" netns=""
Sep 12 17:44:50.443411 containerd[1909]: 2025-09-12 17:44:49.940 [INFO][6859] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c"
Sep 12 17:44:50.443411 containerd[1909]: 2025-09-12 17:44:49.940 [INFO][6859] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c"
Sep 12 17:44:50.443411 containerd[1909]: 2025-09-12 17:44:50.396 [INFO][6867] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" HandleID="k8s-pod-network.1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" Workload="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--2slhn-eth0"
Sep 12 17:44:50.443411 containerd[1909]: 2025-09-12 17:44:50.402 [INFO][6867] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:44:50.443411 containerd[1909]: 2025-09-12 17:44:50.403 [INFO][6867] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:44:50.443411 containerd[1909]: 2025-09-12 17:44:50.429 [WARNING][6867] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" HandleID="k8s-pod-network.1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" Workload="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--2slhn-eth0"
Sep 12 17:44:50.443411 containerd[1909]: 2025-09-12 17:44:50.429 [INFO][6867] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" HandleID="k8s-pod-network.1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" Workload="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--2slhn-eth0"
Sep 12 17:44:50.443411 containerd[1909]: 2025-09-12 17:44:50.433 [INFO][6867] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:44:50.443411 containerd[1909]: 2025-09-12 17:44:50.438 [INFO][6859] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c"
Sep 12 17:44:50.449191 containerd[1909]: time="2025-09-12T17:44:50.444016825Z" level=info msg="TearDown network for sandbox \"1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c\" successfully"
Sep 12 17:44:50.449191 containerd[1909]: time="2025-09-12T17:44:50.444051233Z" level=info msg="StopPodSandbox for \"1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c\" returns successfully"
Sep 12 17:44:50.449191 containerd[1909]: time="2025-09-12T17:44:50.445453423Z" level=info msg="RemovePodSandbox for \"1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c\""
Sep 12 17:44:50.449191 containerd[1909]: time="2025-09-12T17:44:50.445493237Z" level=info msg="Forcibly stopping sandbox \"1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c\""
Sep 12 17:44:50.588448 containerd[1909]: 2025-09-12 17:44:50.515 [WARNING][6884] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--2slhn-eth0"
Sep 12 17:44:50.588448 containerd[1909]: 2025-09-12 17:44:50.515 [INFO][6884] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c"
Sep 12 17:44:50.588448 containerd[1909]: 2025-09-12 17:44:50.515 [INFO][6884] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" iface="eth0" netns=""
Sep 12 17:44:50.588448 containerd[1909]: 2025-09-12 17:44:50.515 [INFO][6884] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c"
Sep 12 17:44:50.588448 containerd[1909]: 2025-09-12 17:44:50.515 [INFO][6884] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c"
Sep 12 17:44:50.588448 containerd[1909]: 2025-09-12 17:44:50.552 [INFO][6892] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" HandleID="k8s-pod-network.1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" Workload="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--2slhn-eth0"
Sep 12 17:44:50.588448 containerd[1909]: 2025-09-12 17:44:50.552 [INFO][6892] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:44:50.588448 containerd[1909]: 2025-09-12 17:44:50.553 [INFO][6892] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:44:50.588448 containerd[1909]: 2025-09-12 17:44:50.575 [WARNING][6892] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" HandleID="k8s-pod-network.1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" Workload="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--2slhn-eth0"
Sep 12 17:44:50.588448 containerd[1909]: 2025-09-12 17:44:50.575 [INFO][6892] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" HandleID="k8s-pod-network.1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c" Workload="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--2slhn-eth0"
Sep 12 17:44:50.588448 containerd[1909]: 2025-09-12 17:44:50.577 [INFO][6892] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:44:50.588448 containerd[1909]: 2025-09-12 17:44:50.583 [INFO][6884] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c"
Sep 12 17:44:50.589663 containerd[1909]: time="2025-09-12T17:44:50.588999541Z" level=info msg="TearDown network for sandbox \"1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c\" successfully"
Sep 12 17:44:50.615446 containerd[1909]: time="2025-09-12T17:44:50.615387313Z" level=info msg="Ensure that sandbox 1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c in task-service has been cleanup successfully"
Sep 12 17:44:50.626183 containerd[1909]: time="2025-09-12T17:44:50.626135660Z" level=info msg="RemovePodSandbox \"1bfc388f43d69409df29cc6405e874cc91955ee4edb74db3ec267f49d7b0325c\" returns successfully"
Sep 12 17:44:50.627117 containerd[1909]: time="2025-09-12T17:44:50.627084122Z" level=info msg="StopPodSandbox for \"8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb\""
Sep 12 17:44:50.807471 containerd[1909]: 2025-09-12 17:44:50.738 [WARNING][6906] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--l6lrq-eth0"
Sep 12 17:44:50.807471 containerd[1909]: 2025-09-12 17:44:50.739 [INFO][6906] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb"
Sep 12 17:44:50.807471 containerd[1909]: 2025-09-12 17:44:50.739 [INFO][6906] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" iface="eth0" netns=""
Sep 12 17:44:50.807471 containerd[1909]: 2025-09-12 17:44:50.739 [INFO][6906] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb"
Sep 12 17:44:50.807471 containerd[1909]: 2025-09-12 17:44:50.739 [INFO][6906] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb"
Sep 12 17:44:50.807471 containerd[1909]: 2025-09-12 17:44:50.790 [INFO][6913] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" HandleID="k8s-pod-network.8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" Workload="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--l6lrq-eth0"
Sep 12 17:44:50.807471 containerd[1909]: 2025-09-12 17:44:50.790 [INFO][6913] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:44:50.807471 containerd[1909]: 2025-09-12 17:44:50.790 [INFO][6913] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:44:50.807471 containerd[1909]: 2025-09-12 17:44:50.798 [WARNING][6913] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" HandleID="k8s-pod-network.8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" Workload="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--l6lrq-eth0"
Sep 12 17:44:50.807471 containerd[1909]: 2025-09-12 17:44:50.798 [INFO][6913] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" HandleID="k8s-pod-network.8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" Workload="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--l6lrq-eth0"
Sep 12 17:44:50.807471 containerd[1909]: 2025-09-12 17:44:50.800 [INFO][6913] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:44:50.807471 containerd[1909]: 2025-09-12 17:44:50.804 [INFO][6906] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb"
Sep 12 17:44:50.809939 containerd[1909]: time="2025-09-12T17:44:50.807519141Z" level=info msg="TearDown network for sandbox \"8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb\" successfully"
Sep 12 17:44:50.809939 containerd[1909]: time="2025-09-12T17:44:50.807547321Z" level=info msg="StopPodSandbox for \"8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb\" returns successfully"
Sep 12 17:44:50.809939 containerd[1909]: time="2025-09-12T17:44:50.808087459Z" level=info msg="RemovePodSandbox for \"8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb\""
Sep 12 17:44:50.809939 containerd[1909]: time="2025-09-12T17:44:50.808138506Z" level=info msg="Forcibly stopping sandbox \"8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb\""
Sep 12 17:44:50.980390 containerd[1909]: 2025-09-12 17:44:50.895 [WARNING][6927] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" WorkloadEndpoint="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--l6lrq-eth0"
Sep 12 17:44:50.980390 containerd[1909]: 2025-09-12 17:44:50.895 [INFO][6927] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb"
Sep 12 17:44:50.980390 containerd[1909]: 2025-09-12 17:44:50.895 [INFO][6927] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" iface="eth0" netns=""
Sep 12 17:44:50.980390 containerd[1909]: 2025-09-12 17:44:50.895 [INFO][6927] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb"
Sep 12 17:44:50.980390 containerd[1909]: 2025-09-12 17:44:50.895 [INFO][6927] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb"
Sep 12 17:44:50.980390 containerd[1909]: 2025-09-12 17:44:50.947 [INFO][6934] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" HandleID="k8s-pod-network.8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" Workload="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--l6lrq-eth0"
Sep 12 17:44:50.980390 containerd[1909]: 2025-09-12 17:44:50.950 [INFO][6934] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:44:50.980390 containerd[1909]: 2025-09-12 17:44:50.950 [INFO][6934] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:44:50.980390 containerd[1909]: 2025-09-12 17:44:50.961 [WARNING][6934] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" HandleID="k8s-pod-network.8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" Workload="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--l6lrq-eth0"
Sep 12 17:44:50.980390 containerd[1909]: 2025-09-12 17:44:50.962 [INFO][6934] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" HandleID="k8s-pod-network.8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb" Workload="ip--172--31--19--53-k8s-calico--apiserver--77fc447859--l6lrq-eth0"
Sep 12 17:44:50.980390 containerd[1909]: 2025-09-12 17:44:50.965 [INFO][6934] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:44:50.980390 containerd[1909]: 2025-09-12 17:44:50.972 [INFO][6927] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb"
Sep 12 17:44:50.980390 containerd[1909]: time="2025-09-12T17:44:50.980089541Z" level=info msg="TearDown network for sandbox \"8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb\" successfully"
Sep 12 17:44:51.036618 containerd[1909]: time="2025-09-12T17:44:51.036292049Z" level=info msg="Ensure that sandbox 8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb in task-service has been cleanup successfully"
Sep 12 17:44:51.046581 containerd[1909]: time="2025-09-12T17:44:51.046533925Z" level=info msg="RemovePodSandbox \"8441dcfc06448be9e6ee2cd3b11e8751c0e260e8c550185667af0293e39fecbb\" returns successfully"
Sep 12 17:44:51.904138 containerd[1909]: time="2025-09-12T17:44:51.904083873Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eb3eb6c17a00998b617eb22408f7002d1f50e787222e96b10be971a91efb212d\" id:\"0d1036240cc8498189ec2269fd32d7c21d08af95cf28305b2e0f6f527b434a78\" pid:6953 exited_at:{seconds:1757699091 nanos:888172317}"