May 13 23:59:17.895242 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 13 22:08:35 -00 2025 May 13 23:59:17.895277 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130 May 13 23:59:17.895294 kernel: BIOS-provided physical RAM map: May 13 23:59:17.895305 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable May 13 23:59:17.895315 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable May 13 23:59:17.895325 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved May 13 23:59:17.895337 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data May 13 23:59:17.895348 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS May 13 23:59:17.895359 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable May 13 23:59:17.895369 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved May 13 23:59:17.895383 kernel: NX (Execute Disable) protection: active May 13 23:59:17.895394 kernel: APIC: Static calls initialized May 13 23:59:17.895404 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable May 13 23:59:17.895415 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable May 13 23:59:17.895428 kernel: extended physical RAM map: May 13 23:59:17.895440 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable May 13 23:59:17.895455 kernel: reserve setup_data: [mem 
0x0000000000100000-0x00000000768c0017] usable May 13 23:59:17.895468 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable May 13 23:59:17.895480 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable May 13 23:59:17.895492 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved May 13 23:59:17.896689 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data May 13 23:59:17.896712 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS May 13 23:59:17.896728 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable May 13 23:59:17.896742 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved May 13 23:59:17.896757 kernel: efi: EFI v2.7 by EDK II May 13 23:59:17.896771 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518 May 13 23:59:17.896792 kernel: secureboot: Secure boot disabled May 13 23:59:17.896807 kernel: SMBIOS 2.7 present. 
May 13 23:59:17.896821 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 May 13 23:59:17.896835 kernel: Hypervisor detected: KVM May 13 23:59:17.896850 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 13 23:59:17.896864 kernel: kvm-clock: using sched offset of 4146798607 cycles May 13 23:59:17.896879 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 13 23:59:17.896895 kernel: tsc: Detected 2499.998 MHz processor May 13 23:59:17.896910 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 13 23:59:17.896925 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 13 23:59:17.896939 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 May 13 23:59:17.896958 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs May 13 23:59:17.896972 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 13 23:59:17.896986 kernel: Using GB pages for direct mapping May 13 23:59:17.897005 kernel: ACPI: Early table checksum verification disabled May 13 23:59:17.897019 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) May 13 23:59:17.897034 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) May 13 23:59:17.897048 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) May 13 23:59:17.897066 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) May 13 23:59:17.897080 kernel: ACPI: FACS 0x00000000789D0000 000040 May 13 23:59:17.897093 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) May 13 23:59:17.897108 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) May 13 23:59:17.897122 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) May 13 23:59:17.897135 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 
00000001 AMZN 00000001) May 13 23:59:17.897151 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) May 13 23:59:17.897170 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) May 13 23:59:17.897186 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) May 13 23:59:17.897202 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) May 13 23:59:17.897218 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] May 13 23:59:17.897234 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] May 13 23:59:17.897249 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] May 13 23:59:17.897264 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] May 13 23:59:17.897278 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] May 13 23:59:17.897292 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] May 13 23:59:17.897308 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] May 13 23:59:17.897321 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] May 13 23:59:17.897334 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] May 13 23:59:17.897347 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] May 13 23:59:17.897361 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] May 13 23:59:17.897374 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 May 13 23:59:17.897387 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 May 13 23:59:17.897402 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] May 13 23:59:17.897415 kernel: NUMA: Initialized distance table, cnt=1 May 13 23:59:17.897432 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff] May 13 23:59:17.897446 kernel: Zone ranges: May 13 23:59:17.897459 kernel: DMA [mem 
0x0000000000001000-0x0000000000ffffff] May 13 23:59:17.897473 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] May 13 23:59:17.897486 kernel: Normal empty May 13 23:59:17.897520 kernel: Movable zone start for each node May 13 23:59:17.897534 kernel: Early memory node ranges May 13 23:59:17.897548 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] May 13 23:59:17.897562 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] May 13 23:59:17.897579 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] May 13 23:59:17.897592 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] May 13 23:59:17.897606 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 13 23:59:17.897620 kernel: On node 0, zone DMA: 96 pages in unavailable ranges May 13 23:59:17.897634 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges May 13 23:59:17.897648 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges May 13 23:59:17.897661 kernel: ACPI: PM-Timer IO Port: 0xb008 May 13 23:59:17.897675 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 13 23:59:17.897689 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 May 13 23:59:17.897703 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 13 23:59:17.897720 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 13 23:59:17.897734 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 13 23:59:17.897747 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 13 23:59:17.897761 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 13 23:59:17.897775 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 13 23:59:17.897790 kernel: TSC deadline timer available May 13 23:59:17.897803 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 13 23:59:17.897817 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 13 23:59:17.897831 kernel: [mem 
0x7ca00000-0xffffffff] available for PCI devices May 13 23:59:17.897848 kernel: Booting paravirtualized kernel on KVM May 13 23:59:17.897862 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 13 23:59:17.897877 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 May 13 23:59:17.897891 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 May 13 23:59:17.897905 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 May 13 23:59:17.897919 kernel: pcpu-alloc: [0] 0 1 May 13 23:59:17.897933 kernel: kvm-guest: PV spinlocks enabled May 13 23:59:17.897947 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 13 23:59:17.897964 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130 May 13 23:59:17.897982 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 13 23:59:17.897996 kernel: random: crng init done May 13 23:59:17.898009 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 13 23:59:17.898024 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 13 23:59:17.898039 kernel: Fallback order for Node 0: 0 May 13 23:59:17.898053 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 501318 May 13 23:59:17.898067 kernel: Policy zone: DMA32 May 13 23:59:17.898081 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 13 23:59:17.898100 kernel: Memory: 1870488K/2037804K available (14336K kernel code, 2296K rwdata, 25068K rodata, 43604K init, 1468K bss, 167060K reserved, 0K cma-reserved) May 13 23:59:17.898123 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 13 23:59:17.898137 kernel: Kernel/User page tables isolation: enabled May 13 23:59:17.898152 kernel: ftrace: allocating 37993 entries in 149 pages May 13 23:59:17.898178 kernel: ftrace: allocated 149 pages with 4 groups May 13 23:59:17.898195 kernel: Dynamic Preempt: voluntary May 13 23:59:17.898210 kernel: rcu: Preemptible hierarchical RCU implementation. May 13 23:59:17.898227 kernel: rcu: RCU event tracing is enabled. May 13 23:59:17.898242 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 13 23:59:17.898258 kernel: Trampoline variant of Tasks RCU enabled. May 13 23:59:17.898274 kernel: Rude variant of Tasks RCU enabled. May 13 23:59:17.898292 kernel: Tracing variant of Tasks RCU enabled. May 13 23:59:17.898308 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 13 23:59:17.898323 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 13 23:59:17.898338 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 13 23:59:17.898353 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
May 13 23:59:17.898371 kernel: Console: colour dummy device 80x25 May 13 23:59:17.898386 kernel: printk: console [tty0] enabled May 13 23:59:17.898401 kernel: printk: console [ttyS0] enabled May 13 23:59:17.898417 kernel: ACPI: Core revision 20230628 May 13 23:59:17.898431 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns May 13 23:59:17.898447 kernel: APIC: Switch to symmetric I/O mode setup May 13 23:59:17.898462 kernel: x2apic enabled May 13 23:59:17.898475 kernel: APIC: Switched APIC routing to: physical x2apic May 13 23:59:17.898492 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns May 13 23:59:17.898526 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) May 13 23:59:17.898540 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 May 13 23:59:17.898553 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 May 13 23:59:17.898568 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 13 23:59:17.898583 kernel: Spectre V2 : Mitigation: Retpolines May 13 23:59:17.898598 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 13 23:59:17.898613 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
May 13 23:59:17.898627 kernel: RETBleed: Vulnerable May 13 23:59:17.898641 kernel: Speculative Store Bypass: Vulnerable May 13 23:59:17.898656 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode May 13 23:59:17.898674 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode May 13 23:59:17.898687 kernel: GDS: Unknown: Dependent on hypervisor status May 13 23:59:17.898701 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 13 23:59:17.898714 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 13 23:59:17.898729 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 13 23:59:17.898744 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' May 13 23:59:17.898759 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' May 13 23:59:17.898774 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' May 13 23:59:17.898790 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' May 13 23:59:17.898805 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' May 13 23:59:17.898820 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' May 13 23:59:17.898839 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 13 23:59:17.898854 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 May 13 23:59:17.898869 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 May 13 23:59:17.898883 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 May 13 23:59:17.898898 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 May 13 23:59:17.898914 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 May 13 23:59:17.898928 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 May 13 23:59:17.898943 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
May 13 23:59:17.898959 kernel: Freeing SMP alternatives memory: 32K May 13 23:59:17.898974 kernel: pid_max: default: 32768 minimum: 301 May 13 23:59:17.898987 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 13 23:59:17.899001 kernel: landlock: Up and running. May 13 23:59:17.899021 kernel: SELinux: Initializing. May 13 23:59:17.899037 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 13 23:59:17.899051 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 13 23:59:17.899066 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) May 13 23:59:17.899081 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 13 23:59:17.899098 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 13 23:59:17.899113 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 13 23:59:17.899130 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. May 13 23:59:17.899146 kernel: signal: max sigframe size: 3632 May 13 23:59:17.899166 kernel: rcu: Hierarchical SRCU implementation. May 13 23:59:17.899183 kernel: rcu: Max phase no-delay instances is 400. May 13 23:59:17.899200 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 13 23:59:17.899216 kernel: smp: Bringing up secondary CPUs ... May 13 23:59:17.899232 kernel: smpboot: x86: Booting SMP configuration: May 13 23:59:17.899249 kernel: .... node #0, CPUs: #1 May 13 23:59:17.899265 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. May 13 23:59:17.899281 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. May 13 23:59:17.899296 kernel: smp: Brought up 1 node, 2 CPUs May 13 23:59:17.899314 kernel: smpboot: Max logical packages: 1 May 13 23:59:17.899329 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) May 13 23:59:17.899344 kernel: devtmpfs: initialized May 13 23:59:17.899360 kernel: x86/mm: Memory block size: 128MB May 13 23:59:17.899375 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) May 13 23:59:17.899391 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 13 23:59:17.899407 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 13 23:59:17.899423 kernel: pinctrl core: initialized pinctrl subsystem May 13 23:59:17.899438 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 13 23:59:17.899457 kernel: audit: initializing netlink subsys (disabled) May 13 23:59:17.899473 kernel: audit: type=2000 audit(1747180758.342:1): state=initialized audit_enabled=0 res=1 May 13 23:59:17.899489 kernel: thermal_sys: Registered thermal governor 'step_wise' May 13 23:59:17.899525 kernel: thermal_sys: Registered thermal governor 'user_space' May 13 23:59:17.899540 kernel: cpuidle: using governor menu May 13 23:59:17.899556 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 13 23:59:17.899572 kernel: dca service started, version 1.12.1 May 13 23:59:17.899588 kernel: PCI: Using configuration type 1 for base access May 13 23:59:17.899606 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 13 23:59:17.899628 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 13 23:59:17.899645 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 13 23:59:17.899662 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 13 23:59:17.899679 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 13 23:59:17.899695 kernel: ACPI: Added _OSI(Module Device) May 13 23:59:17.899714 kernel: ACPI: Added _OSI(Processor Device) May 13 23:59:17.899732 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 13 23:59:17.899748 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 13 23:59:17.899764 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded May 13 23:59:17.899784 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 13 23:59:17.899801 kernel: ACPI: Interpreter enabled May 13 23:59:17.899816 kernel: ACPI: PM: (supports S0 S5) May 13 23:59:17.899847 kernel: ACPI: Using IOAPIC for interrupt routing May 13 23:59:17.899866 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 13 23:59:17.899885 kernel: PCI: Using E820 reservations for host bridge windows May 13 23:59:17.899902 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F May 13 23:59:17.899916 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 13 23:59:17.900138 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] May 13 23:59:17.900288 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] May 13 23:59:17.900422 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge May 13 23:59:17.900441 kernel: acpiphp: Slot [3] registered May 13 23:59:17.900458 kernel: acpiphp: Slot [4] registered May 13 23:59:17.900474 kernel: acpiphp: Slot [5] registered May 13 23:59:17.900489 kernel: acpiphp: Slot [6] registered May 13 23:59:17.903217 
kernel: acpiphp: Slot [7] registered May 13 23:59:17.903247 kernel: acpiphp: Slot [8] registered May 13 23:59:17.903263 kernel: acpiphp: Slot [9] registered May 13 23:59:17.903279 kernel: acpiphp: Slot [10] registered May 13 23:59:17.903293 kernel: acpiphp: Slot [11] registered May 13 23:59:17.903307 kernel: acpiphp: Slot [12] registered May 13 23:59:17.903321 kernel: acpiphp: Slot [13] registered May 13 23:59:17.903334 kernel: acpiphp: Slot [14] registered May 13 23:59:17.903348 kernel: acpiphp: Slot [15] registered May 13 23:59:17.903364 kernel: acpiphp: Slot [16] registered May 13 23:59:17.903380 kernel: acpiphp: Slot [17] registered May 13 23:59:17.903398 kernel: acpiphp: Slot [18] registered May 13 23:59:17.903412 kernel: acpiphp: Slot [19] registered May 13 23:59:17.903426 kernel: acpiphp: Slot [20] registered May 13 23:59:17.903439 kernel: acpiphp: Slot [21] registered May 13 23:59:17.903453 kernel: acpiphp: Slot [22] registered May 13 23:59:17.903466 kernel: acpiphp: Slot [23] registered May 13 23:59:17.903482 kernel: acpiphp: Slot [24] registered May 13 23:59:17.903517 kernel: acpiphp: Slot [25] registered May 13 23:59:17.903532 kernel: acpiphp: Slot [26] registered May 13 23:59:17.903550 kernel: acpiphp: Slot [27] registered May 13 23:59:17.903564 kernel: acpiphp: Slot [28] registered May 13 23:59:17.903578 kernel: acpiphp: Slot [29] registered May 13 23:59:17.903593 kernel: acpiphp: Slot [30] registered May 13 23:59:17.903609 kernel: acpiphp: Slot [31] registered May 13 23:59:17.903624 kernel: PCI host bridge to bus 0000:00 May 13 23:59:17.903818 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 13 23:59:17.903946 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 13 23:59:17.904074 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 13 23:59:17.904196 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] May 13 23:59:17.904317 kernel: pci_bus 0000:00: root 
bus resource [mem 0x100000000-0x2000ffffffff window] May 13 23:59:17.904440 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 13 23:59:17.904617 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 May 13 23:59:17.904768 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 May 13 23:59:17.904934 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 May 13 23:59:17.905078 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI May 13 23:59:17.905217 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff May 13 23:59:17.905353 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff May 13 23:59:17.905493 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff May 13 23:59:17.907373 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff May 13 23:59:17.909358 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff May 13 23:59:17.909568 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff May 13 23:59:17.909731 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 May 13 23:59:17.909873 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref] May 13 23:59:17.910011 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] May 13 23:59:17.910156 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb May 13 23:59:17.910290 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 13 23:59:17.910492 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 May 13 23:59:17.910718 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff] May 13 23:59:17.910864 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 May 13 23:59:17.911000 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff] May 13 23:59:17.911020 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 13 23:59:17.911036 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 13 23:59:17.911051 kernel: ACPI: PCI: Interrupt 
link LNKC configured for IRQ 11 May 13 23:59:17.911066 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 13 23:59:17.911081 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 May 13 23:59:17.911101 kernel: iommu: Default domain type: Translated May 13 23:59:17.911116 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 13 23:59:17.911131 kernel: efivars: Registered efivars operations May 13 23:59:17.911147 kernel: PCI: Using ACPI for IRQ routing May 13 23:59:17.911162 kernel: PCI: pci_cache_line_size set to 64 bytes May 13 23:59:17.911178 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff] May 13 23:59:17.911193 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] May 13 23:59:17.911207 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] May 13 23:59:17.911340 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device May 13 23:59:17.911478 kernel: pci 0000:00:03.0: vgaarb: bridge control possible May 13 23:59:17.911625 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 13 23:59:17.911645 kernel: vgaarb: loaded May 13 23:59:17.911660 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 May 13 23:59:17.911675 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter May 13 23:59:17.911690 kernel: clocksource: Switched to clocksource kvm-clock May 13 23:59:17.911705 kernel: VFS: Disk quotas dquot_6.6.0 May 13 23:59:17.911721 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 13 23:59:17.911740 kernel: pnp: PnP ACPI init May 13 23:59:17.911755 kernel: pnp: PnP ACPI: found 5 devices May 13 23:59:17.911770 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 13 23:59:17.911785 kernel: NET: Registered PF_INET protocol family May 13 23:59:17.911801 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 13 23:59:17.911816 kernel: tcp_listen_portaddr_hash hash 
table entries: 1024 (order: 2, 16384 bytes, linear) May 13 23:59:17.911832 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 13 23:59:17.911848 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 13 23:59:17.911863 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) May 13 23:59:17.911881 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 13 23:59:17.911897 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 13 23:59:17.911912 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) May 13 23:59:17.911927 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 13 23:59:17.911943 kernel: NET: Registered PF_XDP protocol family May 13 23:59:17.912072 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 13 23:59:17.912195 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 13 23:59:17.912315 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 13 23:59:17.912435 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] May 13 23:59:17.914236 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] May 13 23:59:17.914423 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 13 23:59:17.914449 kernel: PCI: CLS 0 bytes, default 64 May 13 23:59:17.914466 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 13 23:59:17.914481 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns May 13 23:59:17.914495 kernel: clocksource: Switched to clocksource tsc May 13 23:59:17.914542 kernel: Initialise system trusted keyrings May 13 23:59:17.914559 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 13 23:59:17.914579 kernel: Key type asymmetric registered May 13 23:59:17.914595 kernel: Asymmetric key parser 'x509' registered May 13 
23:59:17.914609 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 13 23:59:17.914623 kernel: io scheduler mq-deadline registered May 13 23:59:17.914637 kernel: io scheduler kyber registered May 13 23:59:17.914652 kernel: io scheduler bfq registered May 13 23:59:17.914667 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 13 23:59:17.914681 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 13 23:59:17.914696 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 13 23:59:17.914717 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 13 23:59:17.914734 kernel: i8042: Warning: Keylock active May 13 23:59:17.914751 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 13 23:59:17.914768 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 13 23:59:17.914937 kernel: rtc_cmos 00:00: RTC can wake from S4 May 13 23:59:17.915062 kernel: rtc_cmos 00:00: registered as rtc0 May 13 23:59:17.915183 kernel: rtc_cmos 00:00: setting system clock to 2025-05-13T23:59:17 UTC (1747180757) May 13 23:59:17.915301 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram May 13 23:59:17.915324 kernel: intel_pstate: CPU model not supported May 13 23:59:17.915340 kernel: efifb: probing for efifb May 13 23:59:17.915355 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k May 13 23:59:17.915370 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 May 13 23:59:17.915408 kernel: efifb: scrolling: redraw May 13 23:59:17.915430 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 13 23:59:17.915446 kernel: Console: switching to colour frame buffer device 100x37 May 13 23:59:17.915462 kernel: fb0: EFI VGA frame buffer device May 13 23:59:17.915478 kernel: pstore: Using crash dump compression: deflate May 13 23:59:17.915496 kernel: pstore: Registered efi_pstore as persistent store backend May 13 23:59:17.915537 kernel: NET: Registered PF_INET6 
protocol family May 13 23:59:17.915553 kernel: Segment Routing with IPv6 May 13 23:59:17.915569 kernel: In-situ OAM (IOAM) with IPv6 May 13 23:59:17.915584 kernel: NET: Registered PF_PACKET protocol family May 13 23:59:17.915600 kernel: Key type dns_resolver registered May 13 23:59:17.915615 kernel: IPI shorthand broadcast: enabled May 13 23:59:17.915631 kernel: sched_clock: Marking stable (479002895, 144082391)->(692696282, -69610996) May 13 23:59:17.915647 kernel: registered taskstats version 1 May 13 23:59:17.915666 kernel: Loading compiled-in X.509 certificates May 13 23:59:17.915682 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 166efda032ca4d6e9037c569aca9b53585ee6f94' May 13 23:59:17.915698 kernel: Key type .fscrypt registered May 13 23:59:17.915713 kernel: Key type fscrypt-provisioning registered May 13 23:59:17.915729 kernel: ima: No TPM chip found, activating TPM-bypass! May 13 23:59:17.915745 kernel: ima: Allocated hash algorithm: sha1 May 13 23:59:17.915761 kernel: ima: No architecture policies found May 13 23:59:17.915777 kernel: clk: Disabling unused clocks May 13 23:59:17.915794 kernel: Freeing unused kernel image (initmem) memory: 43604K May 13 23:59:17.915813 kernel: Write protecting the kernel read-only data: 40960k May 13 23:59:17.915829 kernel: Freeing unused kernel image (rodata/data gap) memory: 1556K May 13 23:59:17.915845 kernel: Run /init as init process May 13 23:59:17.915861 kernel: with arguments: May 13 23:59:17.915877 kernel: /init May 13 23:59:17.915892 kernel: with environment: May 13 23:59:17.915908 kernel: HOME=/ May 13 23:59:17.915924 kernel: TERM=linux May 13 23:59:17.915940 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 13 23:59:17.915961 systemd[1]: Successfully made /usr/ read-only. 
May 13 23:59:17.915981 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 23:59:17.915997 systemd[1]: Detected virtualization amazon.
May 13 23:59:17.916012 systemd[1]: Detected architecture x86-64.
May 13 23:59:17.916029 systemd[1]: Running in initrd.
May 13 23:59:17.916048 systemd[1]: No hostname configured, using default hostname.
May 13 23:59:17.916064 systemd[1]: Hostname set to .
May 13 23:59:17.916080 systemd[1]: Initializing machine ID from VM UUID.
May 13 23:59:17.916096 systemd[1]: Queued start job for default target initrd.target.
May 13 23:59:17.916113 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:59:17.916129 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:59:17.916146 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 13 23:59:17.916167 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 23:59:17.916183 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 13 23:59:17.916200 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 13 23:59:17.916218 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 13 23:59:17.916235 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 13 23:59:17.916250 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:59:17.916266 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 23:59:17.916286 systemd[1]: Reached target paths.target - Path Units.
May 13 23:59:17.916302 systemd[1]: Reached target slices.target - Slice Units.
May 13 23:59:17.916319 systemd[1]: Reached target swap.target - Swaps.
May 13 23:59:17.916335 systemd[1]: Reached target timers.target - Timer Units.
May 13 23:59:17.916351 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 13 23:59:17.916368 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 23:59:17.916386 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 13 23:59:17.916402 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 13 23:59:17.916422 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:59:17.916439 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 23:59:17.916456 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:59:17.916472 systemd[1]: Reached target sockets.target - Socket Units.
May 13 23:59:17.916490 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 13 23:59:17.916528 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 23:59:17.916545 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 13 23:59:17.916561 systemd[1]: Starting systemd-fsck-usr.service...
May 13 23:59:17.916576 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 23:59:17.916599 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 23:59:17.916615 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:59:17.916668 systemd-journald[179]: Collecting audit messages is disabled.
May 13 23:59:17.916705 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 13 23:59:17.916726 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:59:17.916745 systemd[1]: Finished systemd-fsck-usr.service.
May 13 23:59:17.916762 systemd-journald[179]: Journal started
May 13 23:59:17.916801 systemd-journald[179]: Runtime Journal (/run/log/journal/ec2d97a364148fe70c72c58ac26e0ea5) is 4.7M, max 38.1M, 33.3M free.
May 13 23:59:17.922534 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 23:59:17.927520 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 23:59:17.927687 systemd-modules-load[180]: Inserted module 'overlay'
May 13 23:59:17.934997 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:59:17.940681 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 23:59:17.947696 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 23:59:17.967538 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 23:59:17.966888 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 23:59:17.978643 kernel: Bridge firewalling registered
May 13 23:59:17.972185 systemd-modules-load[180]: Inserted module 'br_netfilter'
May 13 23:59:17.976557 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 23:59:17.981799 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:59:17.986128 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 23:59:17.990768 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 23:59:18.001170 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:59:18.010165 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 13 23:59:18.004671 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 13 23:59:18.016320 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 23:59:18.022695 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 23:59:18.024561 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:59:18.032313 dracut-cmdline[210]: dracut-dracut-053
May 13 23:59:18.036060 dracut-cmdline[210]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130
May 13 23:59:18.085747 systemd-resolved[213]: Positive Trust Anchors:
May 13 23:59:18.085762 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 23:59:18.085828 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 23:59:18.095800 systemd-resolved[213]: Defaulting to hostname 'linux'.
May 13 23:59:18.097287 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 23:59:18.098672 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 23:59:18.127542 kernel: SCSI subsystem initialized
May 13 23:59:18.138538 kernel: Loading iSCSI transport class v2.0-870.
May 13 23:59:18.149550 kernel: iscsi: registered transport (tcp)
May 13 23:59:18.170777 kernel: iscsi: registered transport (qla4xxx)
May 13 23:59:18.170860 kernel: QLogic iSCSI HBA Driver
May 13 23:59:18.210042 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 13 23:59:18.212095 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 13 23:59:18.247883 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 23:59:18.247967 kernel: device-mapper: uevent: version 1.0.3
May 13 23:59:18.247989 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 13 23:59:18.291549 kernel: raid6: avx512x4 gen() 18061 MB/s
May 13 23:59:18.309528 kernel: raid6: avx512x2 gen() 17849 MB/s
May 13 23:59:18.327532 kernel: raid6: avx512x1 gen() 17462 MB/s
May 13 23:59:18.344534 kernel: raid6: avx2x4 gen() 17540 MB/s
May 13 23:59:18.361528 kernel: raid6: avx2x2 gen() 17708 MB/s
May 13 23:59:18.378796 kernel: raid6: avx2x1 gen() 13658 MB/s
May 13 23:59:18.378857 kernel: raid6: using algorithm avx512x4 gen() 18061 MB/s
May 13 23:59:18.398626 kernel: raid6: .... xor() 7572 MB/s, rmw enabled
May 13 23:59:18.398689 kernel: raid6: using avx512x2 recovery algorithm
May 13 23:59:18.420542 kernel: xor: automatically using best checksumming function avx
May 13 23:59:18.577534 kernel: Btrfs loaded, zoned=no, fsverity=no
May 13 23:59:18.587974 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 13 23:59:18.589959 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:59:18.616065 systemd-udevd[397]: Using default interface naming scheme 'v255'.
May 13 23:59:18.622125 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:59:18.626674 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 13 23:59:18.652339 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation
May 13 23:59:18.683085 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 23:59:18.685010 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 23:59:18.756275 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:59:18.760717 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 13 23:59:18.791990 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 13 23:59:18.795071 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 23:59:18.796452 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:59:18.797607 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 23:59:18.802085 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 13 23:59:18.828886 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 13 23:59:18.849579 kernel: ena 0000:00:05.0: ENA device version: 0.10
May 13 23:59:18.849871 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
May 13 23:59:18.867566 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
May 13 23:59:18.886290 kernel: cryptd: max_cpu_qlen set to 1000
May 13 23:59:18.893526 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:44:c7:b7:38:f3
May 13 23:59:18.894785 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 23:59:18.895780 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:59:18.897663 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 23:59:18.898674 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 23:59:18.898965 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:59:18.900517 (udev-worker)[457]: Network interface NamePolicy= disabled on kernel command line.
May 13 23:59:18.902342 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:59:18.927078 kernel: nvme nvme0: pci function 0000:00:04.0
May 13 23:59:18.927321 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
May 13 23:59:18.927343 kernel: AVX2 version of gcm_enc/dec engaged.
May 13 23:59:18.927362 kernel: AES CTR mode by8 optimization enabled
May 13 23:59:18.906991 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:59:18.923490 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 13 23:59:18.935766 kernel: nvme nvme0: 2/0/0 default/read/poll queues
May 13 23:59:18.940859 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 23:59:18.940979 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:59:18.942787 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 13 23:59:18.960114 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 23:59:18.960149 kernel: GPT:9289727 != 16777215
May 13 23:59:18.960206 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 23:59:18.960229 kernel: GPT:9289727 != 16777215
May 13 23:59:18.960248 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 23:59:18.960268 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 13 23:59:18.945101 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:59:18.976837 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:59:18.981109 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 23:59:19.011800 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:59:19.030666 kernel: BTRFS: device fsid d2fbd39e-42cb-4ccb-87ec-99f56cfe77f8 devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (459)
May 13 23:59:19.044529 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (447)
May 13 23:59:19.082902 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
May 13 23:59:19.114127 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
May 13 23:59:19.135454 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
May 13 23:59:19.136027 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
May 13 23:59:19.147754 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
May 13 23:59:19.157223 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 13 23:59:19.177583 disk-uuid[633]: Primary Header is updated.
May 13 23:59:19.177583 disk-uuid[633]: Secondary Entries is updated.
May 13 23:59:19.177583 disk-uuid[633]: Secondary Header is updated.
May 13 23:59:19.184553 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 13 23:59:19.200531 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 13 23:59:20.197527 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 13 23:59:20.199167 disk-uuid[634]: The operation has completed successfully.
May 13 23:59:20.325390 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 23:59:20.325534 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 13 23:59:20.347422 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 23:59:20.361798 sh[892]: Success
May 13 23:59:20.383533 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
May 13 23:59:20.480863 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 13 23:59:20.486204 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 13 23:59:20.495042 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 13 23:59:20.517593 kernel: BTRFS info (device dm-0): first mount of filesystem d2fbd39e-42cb-4ccb-87ec-99f56cfe77f8
May 13 23:59:20.517665 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 13 23:59:20.517680 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 13 23:59:20.520911 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 13 23:59:20.520994 kernel: BTRFS info (device dm-0): using free space tree
May 13 23:59:20.586574 kernel: BTRFS info (device dm-0): enabling ssd optimizations
May 13 23:59:20.603523 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 13 23:59:20.604576 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 13 23:59:20.605446 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 13 23:59:20.608635 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 13 23:59:20.651345 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc
May 13 23:59:20.651427 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
May 13 23:59:20.651449 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 13 23:59:20.658531 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 13 23:59:20.665583 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc
May 13 23:59:20.668818 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 13 23:59:20.671670 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 13 23:59:20.706845 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 23:59:20.709788 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 23:59:20.745498 systemd-networkd[1081]: lo: Link UP
May 13 23:59:20.745521 systemd-networkd[1081]: lo: Gained carrier
May 13 23:59:20.746883 systemd-networkd[1081]: Enumeration completed
May 13 23:59:20.747191 systemd-networkd[1081]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:59:20.747196 systemd-networkd[1081]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 23:59:20.748346 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 23:59:20.749836 systemd[1]: Reached target network.target - Network.
May 13 23:59:20.751588 systemd-networkd[1081]: eth0: Link UP
May 13 23:59:20.751592 systemd-networkd[1081]: eth0: Gained carrier
May 13 23:59:20.751603 systemd-networkd[1081]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:59:20.769341 systemd-networkd[1081]: eth0: DHCPv4 address 172.31.19.86/20, gateway 172.31.16.1 acquired from 172.31.16.1
May 13 23:59:20.997599 ignition[1035]: Ignition 2.20.0
May 13 23:59:20.997611 ignition[1035]: Stage: fetch-offline
May 13 23:59:20.999497 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 23:59:20.997794 ignition[1035]: no configs at "/usr/lib/ignition/base.d"
May 13 23:59:20.997802 ignition[1035]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 13 23:59:20.998026 ignition[1035]: Ignition finished successfully
May 13 23:59:21.003152 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 13 23:59:21.029224 ignition[1090]: Ignition 2.20.0
May 13 23:59:21.029239 ignition[1090]: Stage: fetch
May 13 23:59:21.029690 ignition[1090]: no configs at "/usr/lib/ignition/base.d"
May 13 23:59:21.029705 ignition[1090]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 13 23:59:21.029833 ignition[1090]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 13 23:59:21.048333 ignition[1090]: PUT result: OK
May 13 23:59:21.051226 ignition[1090]: parsed url from cmdline: ""
May 13 23:59:21.051238 ignition[1090]: no config URL provided
May 13 23:59:21.051250 ignition[1090]: reading system config file "/usr/lib/ignition/user.ign"
May 13 23:59:21.051278 ignition[1090]: no config at "/usr/lib/ignition/user.ign"
May 13 23:59:21.051303 ignition[1090]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 13 23:59:21.052026 ignition[1090]: PUT result: OK
May 13 23:59:21.052097 ignition[1090]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
May 13 23:59:21.053100 ignition[1090]: GET result: OK
May 13 23:59:21.053225 ignition[1090]: parsing config with SHA512: d08f37f2d086d7bd3e9800af32778b423b428412d5a18478930f561cc9482cfb57d137de684a533081603fc6219b6722b80146530386bdda3f7d6c8c109ed896
May 13 23:59:21.059122 unknown[1090]: fetched base config from "system"
May 13 23:59:21.059137 unknown[1090]: fetched base config from "system"
May 13 23:59:21.059726 ignition[1090]: fetch: fetch complete
May 13 23:59:21.059144 unknown[1090]: fetched user config from "aws"
May 13 23:59:21.059735 ignition[1090]: fetch: fetch passed
May 13 23:59:21.059799 ignition[1090]: Ignition finished successfully
May 13 23:59:21.062600 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 13 23:59:21.064046 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 13 23:59:21.090329 ignition[1096]: Ignition 2.20.0
May 13 23:59:21.090345 ignition[1096]: Stage: kargs
May 13 23:59:21.090824 ignition[1096]: no configs at "/usr/lib/ignition/base.d"
May 13 23:59:21.090838 ignition[1096]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 13 23:59:21.090978 ignition[1096]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 13 23:59:21.091884 ignition[1096]: PUT result: OK
May 13 23:59:21.094423 ignition[1096]: kargs: kargs passed
May 13 23:59:21.094512 ignition[1096]: Ignition finished successfully
May 13 23:59:21.095847 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 13 23:59:21.097700 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 13 23:59:21.124482 ignition[1102]: Ignition 2.20.0
May 13 23:59:21.124495 ignition[1102]: Stage: disks
May 13 23:59:21.124947 ignition[1102]: no configs at "/usr/lib/ignition/base.d"
May 13 23:59:21.124962 ignition[1102]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 13 23:59:21.125087 ignition[1102]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 13 23:59:21.126087 ignition[1102]: PUT result: OK
May 13 23:59:21.128869 ignition[1102]: disks: disks passed
May 13 23:59:21.128942 ignition[1102]: Ignition finished successfully
May 13 23:59:21.130752 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 13 23:59:21.131352 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 13 23:59:21.131748 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 23:59:21.132278 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 23:59:21.132837 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 23:59:21.133379 systemd[1]: Reached target basic.target - Basic System.
May 13 23:59:21.135137 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 13 23:59:21.186928 systemd-fsck[1110]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 13 23:59:21.190201 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 13 23:59:21.192858 systemd[1]: Mounting sysroot.mount - /sysroot...
May 13 23:59:21.302517 kernel: EXT4-fs (nvme0n1p9): mounted filesystem c413e98b-da35-46b1-9852-45706e1b1f52 r/w with ordered data mode. Quota mode: none.
May 13 23:59:21.303237 systemd[1]: Mounted sysroot.mount - /sysroot.
May 13 23:59:21.304129 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 13 23:59:21.313148 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 23:59:21.316612 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 13 23:59:21.317792 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 13 23:59:21.317860 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 23:59:21.317894 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 23:59:21.328134 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 13 23:59:21.330433 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 13 23:59:21.344621 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1129)
May 13 23:59:21.347780 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc
May 13 23:59:21.347850 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
May 13 23:59:21.350353 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 13 23:59:21.365523 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 13 23:59:21.367964 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 23:59:21.600239 initrd-setup-root[1153]: cut: /sysroot/etc/passwd: No such file or directory
May 13 23:59:21.606823 initrd-setup-root[1160]: cut: /sysroot/etc/group: No such file or directory
May 13 23:59:21.621274 initrd-setup-root[1167]: cut: /sysroot/etc/shadow: No such file or directory
May 13 23:59:21.639729 initrd-setup-root[1174]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 23:59:21.914456 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 13 23:59:21.916539 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 13 23:59:21.920711 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 13 23:59:21.934471 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 13 23:59:21.936561 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc
May 13 23:59:21.969651 ignition[1241]: INFO : Ignition 2.20.0
May 13 23:59:21.969651 ignition[1241]: INFO : Stage: mount
May 13 23:59:21.971907 ignition[1241]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 23:59:21.971907 ignition[1241]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 13 23:59:21.971907 ignition[1241]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 13 23:59:21.973653 ignition[1241]: INFO : PUT result: OK
May 13 23:59:21.974463 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 13 23:59:21.976565 ignition[1241]: INFO : mount: mount passed
May 13 23:59:21.977609 ignition[1241]: INFO : Ignition finished successfully
May 13 23:59:21.978495 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 13 23:59:21.980252 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 13 23:59:21.998612 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 23:59:22.036521 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1253) May 13 23:59:22.040342 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 13 23:59:22.040407 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm May 13 23:59:22.040422 kernel: BTRFS info (device nvme0n1p6): using free space tree May 13 23:59:22.047755 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 13 23:59:22.049421 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 13 23:59:22.077798 ignition[1270]: INFO : Ignition 2.20.0 May 13 23:59:22.077798 ignition[1270]: INFO : Stage: files May 13 23:59:22.079272 ignition[1270]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:59:22.079272 ignition[1270]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 13 23:59:22.079272 ignition[1270]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 13 23:59:22.080523 ignition[1270]: INFO : PUT result: OK May 13 23:59:22.082870 ignition[1270]: DEBUG : files: compiled without relabeling support, skipping May 13 23:59:22.083957 ignition[1270]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 13 23:59:22.083957 ignition[1270]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 13 23:59:22.102718 ignition[1270]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 13 23:59:22.103571 ignition[1270]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 13 23:59:22.103571 ignition[1270]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 13 23:59:22.103352 unknown[1270]: wrote ssh authorized keys file for user: core May 13 23:59:22.116924 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file 
"/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 13 23:59:22.117898 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 13 23:59:22.223432 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 13 23:59:22.480610 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 13 23:59:22.480610 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 13 23:59:22.484108 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 13 23:59:22.484108 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 23:59:22.484108 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 23:59:22.484108 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 23:59:22.484108 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 23:59:22.484108 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 23:59:22.484108 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 23:59:22.484108 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 23:59:22.484108 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 23:59:22.484108 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 13 23:59:22.484108 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 13 23:59:22.484108 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 13 23:59:22.484108 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
May 13 23:59:22.546637 systemd-networkd[1081]: eth0: Gained IPv6LL
May 13 23:59:22.810621 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 13 23:59:26.075698 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 13 23:59:26.075698 ignition[1270]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 13 23:59:26.095457 ignition[1270]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 23:59:26.096639 ignition[1270]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 23:59:26.096639 ignition[1270]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 13 23:59:26.096639 ignition[1270]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
May 13 23:59:26.096639 ignition[1270]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
May 13 23:59:26.096639 ignition[1270]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 23:59:26.096639 ignition[1270]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 23:59:26.096639 ignition[1270]: INFO : files: files passed
May 13 23:59:26.096639 ignition[1270]: INFO : Ignition finished successfully
May 13 23:59:26.097583 systemd[1]: Finished ignition-files.service - Ignition (files).
May 13 23:59:26.101691 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 13 23:59:26.106800 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 13 23:59:26.122773 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 23:59:26.122913 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 13 23:59:26.130687 initrd-setup-root-after-ignition[1300]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:59:26.130687 initrd-setup-root-after-ignition[1300]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:59:26.133284 initrd-setup-root-after-ignition[1304]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:59:26.132956 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 23:59:26.134398 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 13 23:59:26.137322 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 13 23:59:26.188168 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 23:59:26.188308 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
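The Ignition file/unit operations logged above (ops 3 through e) are driven by the instance's provisioning config. As an illustration only — the paths, URLs, and unit name are taken from the log, everything else is a hypothetical sketch — a Butane config along these lines would produce a similar sequence:

```yaml
# Hypothetical Butane sketch (transpiled to Ignition JSON at provisioning time).
# Only the paths/URLs/unit name below appear in the log; modes and structure are assumed.
variant: flatcar
version: 1.0.0
storage:
  files:
    - path: /opt/helm-v3.13.2-linux-amd64.tar.gz          # op(3): fetched over HTTPS
      contents:
        source: https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz
    - path: /home/core/install.sh                          # op(4)
      mode: 0755
    - path: /etc/flatcar/update.conf                       # op(8)
  links:
    - path: /etc/extensions/kubernetes.raw                 # op(9): symlink into /opt
      target: /opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw
      hard: false
systemd:
  units:
    - name: prepare-helm.service                           # op(b)-op(d): written and preset-enabled
      enabled: true
```

Note that Ignition writes everything under `/sysroot` because it runs in the initramfs before the root filesystem is switched to.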
May 13 23:59:26.189535 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 13 23:59:26.190750 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 13 23:59:26.191573 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 13 23:59:26.193249 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 13 23:59:26.219547 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 23:59:26.221639 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 13 23:59:26.240618 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 13 23:59:26.241320 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:59:26.242417 systemd[1]: Stopped target timers.target - Timer Units.
May 13 23:59:26.243303 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 23:59:26.243487 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 23:59:26.244675 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 13 23:59:26.245551 systemd[1]: Stopped target basic.target - Basic System.
May 13 23:59:26.246527 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 13 23:59:26.247295 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 23:59:26.248086 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 13 23:59:26.248873 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 13 23:59:26.249631 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 23:59:26.250489 systemd[1]: Stopped target sysinit.target - System Initialization.
May 13 23:59:26.251676 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 13 23:59:26.252392 systemd[1]: Stopped target swap.target - Swaps.
May 13 23:59:26.253111 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 23:59:26.253295 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 13 23:59:26.254521 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 13 23:59:26.255310 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:59:26.255991 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 13 23:59:26.256129 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:59:26.256816 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 13 23:59:26.257036 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 13 23:59:26.258460 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 13 23:59:26.258701 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 23:59:26.259383 systemd[1]: ignition-files.service: Deactivated successfully.
May 13 23:59:26.259567 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 13 23:59:26.263176 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 13 23:59:26.267718 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 13 23:59:26.268309 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 13 23:59:26.268541 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:59:26.270662 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 13 23:59:26.270838 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 23:59:26.277542 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 13 23:59:26.281687 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 13 23:59:26.295005 ignition[1324]: INFO : Ignition 2.20.0
May 13 23:59:26.295005 ignition[1324]: INFO : Stage: umount
May 13 23:59:26.297124 ignition[1324]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 23:59:26.297124 ignition[1324]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 13 23:59:26.297124 ignition[1324]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 13 23:59:26.301687 ignition[1324]: INFO : PUT result: OK
May 13 23:59:26.301687 ignition[1324]: INFO : umount: umount passed
May 13 23:59:26.301687 ignition[1324]: INFO : Ignition finished successfully
May 13 23:59:26.303896 systemd[1]: ignition-mount.service: Deactivated successfully.
May 13 23:59:26.304028 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 13 23:59:26.305204 systemd[1]: ignition-disks.service: Deactivated successfully.
May 13 23:59:26.305317 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 13 23:59:26.307042 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 13 23:59:26.307120 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 13 23:59:26.307606 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 13 23:59:26.307672 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 13 23:59:26.308277 systemd[1]: Stopped target network.target - Network.
May 13 23:59:26.308957 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 13 23:59:26.309031 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 23:59:26.309650 systemd[1]: Stopped target paths.target - Path Units.
May 13 23:59:26.310218 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 13 23:59:26.310290 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:59:26.310676 systemd[1]: Stopped target slices.target - Slice Units.
May 13 23:59:26.311299 systemd[1]: Stopped target sockets.target - Socket Units.
May 13 23:59:26.311802 systemd[1]: iscsid.socket: Deactivated successfully.
May 13 23:59:26.311860 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 13 23:59:26.312336 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 13 23:59:26.312386 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 23:59:26.312899 systemd[1]: ignition-setup.service: Deactivated successfully.
May 13 23:59:26.312974 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 13 23:59:26.315208 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 13 23:59:26.315286 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 13 23:59:26.316123 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 13 23:59:26.316695 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 13 23:59:26.320967 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 13 23:59:26.322005 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 13 23:59:26.322303 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 13 23:59:26.326557 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 13 23:59:26.326976 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 13 23:59:26.327116 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 13 23:59:26.328203 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 13 23:59:26.328324 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 13 23:59:26.330375 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 13 23:59:26.332104 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 13 23:59:26.332806 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:59:26.333284 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 13 23:59:26.333356 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 13 23:59:26.336624 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 13 23:59:26.337730 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 13 23:59:26.338388 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 23:59:26.338997 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 23:59:26.339056 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 13 23:59:26.341032 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 13 23:59:26.341092 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 13 23:59:26.341794 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 13 23:59:26.341854 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:59:26.342686 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:59:26.346230 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 13 23:59:26.346318 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 13 23:59:26.348879 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 13 23:59:26.349053 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:59:26.351166 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 13 23:59:26.351246 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 13 23:59:26.354571 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 13 23:59:26.354625 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:59:26.355893 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 13 23:59:26.355960 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 13 23:59:26.357187 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 13 23:59:26.357255 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 13 23:59:26.358415 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 23:59:26.358483 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:59:26.362446 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 13 23:59:26.365648 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 13 23:59:26.366475 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:59:26.367780 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 13 23:59:26.367859 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 23:59:26.368898 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 13 23:59:26.368968 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:59:26.369655 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 23:59:26.369718 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:59:26.373147 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 13 23:59:26.373237 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 13 23:59:26.375901 systemd[1]: network-cleanup.service: Deactivated successfully.
May 13 23:59:26.376025 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 13 23:59:26.379915 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 13 23:59:26.380057 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 13 23:59:26.381810 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 13 23:59:26.383773 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 13 23:59:26.399106 systemd[1]: Switching root.
May 13 23:59:26.435319 systemd-journald[179]: Journal stopped
May 13 23:59:27.819281 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
May 13 23:59:27.819382 kernel: SELinux: policy capability network_peer_controls=1
May 13 23:59:27.819413 kernel: SELinux: policy capability open_perms=1
May 13 23:59:27.819431 kernel: SELinux: policy capability extended_socket_class=1
May 13 23:59:27.819454 kernel: SELinux: policy capability always_check_network=0
May 13 23:59:27.819471 kernel: SELinux: policy capability cgroup_seclabel=1
May 13 23:59:27.819490 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 13 23:59:27.819520 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 13 23:59:27.819537 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 13 23:59:27.819554 kernel: audit: type=1403 audit(1747180766.661:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 13 23:59:27.819576 systemd[1]: Successfully loaded SELinux policy in 43.412ms.
May 13 23:59:27.819606 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.553ms.
May 13 23:59:27.819626 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 23:59:27.819645 systemd[1]: Detected virtualization amazon.
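The kernel audit record above carries its timestamp as seconds since the Unix epoch — `audit(1747180766.661:2)` — while journald prints wall-clock time. A quick conversion (assuming the journal's wall-clock lines are in UTC, which the `-00` timezone in the kernel version banner suggests) shows the two notations describe the same instant:

```python
from datetime import datetime, timezone

# Audit timestamps are fractional seconds since the Unix epoch.
audit_ts = 1747180766.661
when = datetime.fromtimestamp(audit_ts, tz=timezone.utc)

# Prints "May 13 23:59:26.661000" -- the moment of the switch-root,
# consistent with the surrounding journald timestamps.
print(when.strftime("%b %d %H:%M:%S.%f"))
```

This is handy when correlating `type=1403` (policy load) audit events with the rest of a boot log.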
May 13 23:59:27.819664 systemd[1]: Detected architecture x86-64.
May 13 23:59:27.819683 systemd[1]: Detected first boot.
May 13 23:59:27.819702 systemd[1]: Initializing machine ID from VM UUID.
May 13 23:59:27.819720 zram_generator::config[1369]: No configuration found.
May 13 23:59:27.819754 kernel: Guest personality initialized and is inactive
May 13 23:59:27.819775 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 13 23:59:27.819791 kernel: Initialized host personality
May 13 23:59:27.819809 kernel: NET: Registered PF_VSOCK protocol family
May 13 23:59:27.819827 systemd[1]: Populated /etc with preset unit settings.
May 13 23:59:27.819848 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 13 23:59:27.819867 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 13 23:59:27.819887 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 13 23:59:27.819905 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 13 23:59:27.819928 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 13 23:59:27.819946 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 13 23:59:27.819965 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 13 23:59:27.819984 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 13 23:59:27.820003 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 13 23:59:27.820023 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 13 23:59:27.820042 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 13 23:59:27.820061 systemd[1]: Created slice user.slice - User and Session Slice.
May 13 23:59:27.820081 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:59:27.820103 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:59:27.820122 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 13 23:59:27.820142 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 13 23:59:27.820161 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 13 23:59:27.820181 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 23:59:27.820201 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 13 23:59:27.820220 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:59:27.820240 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 13 23:59:27.820264 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 13 23:59:27.820284 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 13 23:59:27.820304 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 13 23:59:27.820324 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:59:27.820344 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 23:59:27.820364 systemd[1]: Reached target slices.target - Slice Units.
May 13 23:59:27.820383 systemd[1]: Reached target swap.target - Swaps.
May 13 23:59:27.820402 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 13 23:59:27.820422 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 13 23:59:27.820444 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 13 23:59:27.820464 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:59:27.820484 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 23:59:27.820536 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:59:27.820582 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 13 23:59:27.820600 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 13 23:59:27.820619 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 13 23:59:27.820636 systemd[1]: Mounting media.mount - External Media Directory...
May 13 23:59:27.820655 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 23:59:27.820677 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 13 23:59:27.820695 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 13 23:59:27.820712 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 13 23:59:27.820730 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 13 23:59:27.820748 systemd[1]: Reached target machines.target - Containers.
May 13 23:59:27.820765 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 13 23:59:27.820782 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:59:27.820799 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 23:59:27.820820 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 13 23:59:27.820838 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:59:27.820857 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 23:59:27.820874 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:59:27.820892 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 13 23:59:27.820909 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:59:27.820927 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 13 23:59:27.820946 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 13 23:59:27.820969 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 13 23:59:27.820989 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 13 23:59:27.821008 systemd[1]: Stopped systemd-fsck-usr.service.
May 13 23:59:27.821030 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:59:27.821049 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 23:59:27.821068 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 23:59:27.821087 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 23:59:27.821104 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 13 23:59:27.821123 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 13 23:59:27.821148 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 23:59:27.821168 systemd[1]: verity-setup.service: Deactivated successfully.
May 13 23:59:27.821185 systemd[1]: Stopped verity-setup.service.
May 13 23:59:27.821210 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 23:59:27.821233 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 13 23:59:27.821253 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 13 23:59:27.821313 systemd-journald[1455]: Collecting audit messages is disabled.
May 13 23:59:27.821360 systemd-journald[1455]: Journal started
May 13 23:59:27.821405 systemd-journald[1455]: Runtime Journal (/run/log/journal/ec2d97a364148fe70c72c58ac26e0ea5) is 4.7M, max 38.1M, 33.3M free.
May 13 23:59:27.559245 systemd[1]: Queued start job for default target multi-user.target.
May 13 23:59:27.568093 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
May 13 23:59:27.568655 systemd[1]: systemd-journald.service: Deactivated successfully.
May 13 23:59:27.824551 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 23:59:27.825663 systemd[1]: Mounted media.mount - External Media Directory.
May 13 23:59:27.827730 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 13 23:59:27.829037 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 13 23:59:27.830690 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 13 23:59:27.831978 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:59:27.833810 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 13 23:59:27.834015 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 13 23:59:27.835953 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:59:27.836174 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:59:27.838166 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:59:27.838389 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:59:27.841741 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 23:59:27.860524 kernel: ACPI: bus type drm_connector registered
May 13 23:59:27.862585 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 23:59:27.862842 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 23:59:27.865672 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 13 23:59:27.873733 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 13 23:59:27.875274 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 13 23:59:27.875430 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 23:59:27.880302 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 13 23:59:27.893522 kernel: loop: module loaded
May 13 23:59:27.894189 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 13 23:59:27.897777 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 13 23:59:27.898639 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 23:59:27.905726 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 13 23:59:27.917538 kernel: fuse: init (API version 7.39)
May 13 23:59:27.922993 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 13 23:59:27.924546 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 23:59:27.927738 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 13 23:59:27.934706 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 23:59:27.939727 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 13 23:59:27.948785 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 23:59:27.956443 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 13 23:59:27.958945 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 13 23:59:27.968948 systemd-journald[1455]: Time spent on flushing to /var/log/journal/ec2d97a364148fe70c72c58ac26e0ea5 is 60.130ms for 1001 entries.
May 13 23:59:27.968948 systemd-journald[1455]: System Journal (/var/log/journal/ec2d97a364148fe70c72c58ac26e0ea5) is 8M, max 195.6M, 187.6M free.
May 13 23:59:28.045783 systemd-journald[1455]: Received client request to flush runtime journal.
May 13 23:59:28.045866 kernel: loop0: detected capacity change from 0 to 151640
May 13 23:59:27.959459 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 13 23:59:27.961318 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:59:27.962746 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:59:27.963754 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 23:59:27.966041 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 13 23:59:27.966993 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 13 23:59:27.969735 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 13 23:59:27.972220 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 13 23:59:27.984063 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 13 23:59:27.990153 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 23:59:27.994750 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 13 23:59:27.995395 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 23:59:28.026440 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 23:59:28.054287 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 13 23:59:28.090292 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 13 23:59:28.143875 systemd-tmpfiles[1501]: ACLs are not supported, ignoring.
May 13 23:59:28.143904 systemd-tmpfiles[1501]: ACLs are not supported, ignoring.
May 13 23:59:28.148531 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 13 23:59:28.156190 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 23:59:28.162865 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 13 23:59:28.170261 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:59:28.182700 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 13 23:59:28.187528 kernel: loop1: detected capacity change from 0 to 210664
May 13 23:59:28.209896 udevadm[1524]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 13 23:59:28.238016 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 13 23:59:28.242483 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 23:59:28.279806 systemd-tmpfiles[1527]: ACLs are not supported, ignoring.
May 13 23:59:28.280652 systemd-tmpfiles[1527]: ACLs are not supported, ignoring.
May 13 23:59:28.303809 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:59:28.352556 kernel: loop2: detected capacity change from 0 to 64352
May 13 23:59:28.493766 kernel: loop3: detected capacity change from 0 to 109808
May 13 23:59:28.570996 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 13 23:59:28.575416 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 13 23:59:28.595159 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 13 23:59:28.632702 kernel: loop4: detected capacity change from 0 to 151640
May 13 23:59:28.674919 kernel: loop5: detected capacity change from 0 to 210664
May 13 23:59:28.708532 kernel: loop6: detected capacity change from 0 to 64352
May 13 23:59:28.723105 kernel: loop7: detected capacity change from 0 to 109808
May 13 23:59:28.736413 (sd-merge)[1536]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
May 13 23:59:28.737328 (sd-merge)[1536]: Merged extensions into '/usr'.
May 13 23:59:28.742419 systemd[1]: Reload requested from client PID 1500 ('systemd-sysext') (unit systemd-sysext.service)...
May 13 23:59:28.742577 systemd[1]: Reloading...
May 13 23:59:28.839662 zram_generator::config[1564]: No configuration found.
May 13 23:59:28.979491 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:59:29.054571 systemd[1]: Reloading finished in 311 ms.
May 13 23:59:29.072351 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 13 23:59:29.073167 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 13 23:59:29.084040 systemd[1]: Starting ensure-sysext.service...
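systemd-sysext reports the merged images in a single `(sd-merge)` "Using extensions" entry, quoting each extension name. When post-processing a journal dump like this one, those names can be pulled out with a small parser; this is a sketch (the sample entry is quoted from the log above, the helper name is made up):

```python
import re

def sysext_names(entry: str) -> list[str]:
    """Extract the quoted extension names from an sd-merge 'Using extensions' entry."""
    m = re.search(r"Using extensions (.+?)\.", entry)
    if not m:
        return []
    # Names are single-quoted and comma-separated.
    return re.findall(r"'([^']+)'", m.group(1))

entry = ("(sd-merge)[1536]: Using extensions 'containerd-flatcar', "
         "'docker-flatcar', 'kubernetes', 'oem-ami'.")
print(sysext_names(entry))
# ['containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami']
```

The four extensions here also explain the paired loop-device lines: each capacity value (151640, 210664, 64352, 109808) appears twice because every sysext image is attached once when scanned and again when merged into `/usr`.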
May 13 23:59:29.087329 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 23:59:29.095703 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:59:29.121673 systemd[1]: Reload requested from client PID 1616 ('systemctl') (unit ensure-sysext.service)... May 13 23:59:29.121692 systemd[1]: Reloading... May 13 23:59:29.148492 systemd-tmpfiles[1617]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 23:59:29.148906 systemd-tmpfiles[1617]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 13 23:59:29.152238 systemd-tmpfiles[1617]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 23:59:29.152685 systemd-tmpfiles[1617]: ACLs are not supported, ignoring. May 13 23:59:29.152775 systemd-tmpfiles[1617]: ACLs are not supported, ignoring. May 13 23:59:29.170701 systemd-tmpfiles[1617]: Detected autofs mount point /boot during canonicalization of boot. May 13 23:59:29.170721 systemd-tmpfiles[1617]: Skipping /boot May 13 23:59:29.183187 systemd-udevd[1618]: Using default interface naming scheme 'v255'. May 13 23:59:29.194685 systemd-tmpfiles[1617]: Detected autofs mount point /boot during canonicalization of boot. May 13 23:59:29.194841 systemd-tmpfiles[1617]: Skipping /boot May 13 23:59:29.311528 zram_generator::config[1655]: No configuration found. May 13 23:59:29.331907 ldconfig[1492]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 23:59:29.379132 (udev-worker)[1651]: Network interface NamePolicy= disabled on kernel command line. 
May 13 23:59:29.494527 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 13 23:59:29.499595 kernel: ACPI: button: Power Button [PWRF] May 13 23:59:29.506534 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 May 13 23:59:29.514528 kernel: ACPI: button: Sleep Button [SLPF] May 13 23:59:29.529598 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr May 13 23:59:29.549613 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 May 13 23:59:29.587571 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:59:29.608526 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1663) May 13 23:59:29.725583 kernel: mousedev: PS/2 mouse device common for all mice May 13 23:59:29.768773 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 13 23:59:29.770676 systemd[1]: Reloading finished in 648 ms. May 13 23:59:29.779493 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 23:59:29.781560 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 13 23:59:29.784573 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:59:29.847934 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 13 23:59:29.852035 systemd[1]: Finished ensure-sysext.service. May 13 23:59:29.881913 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. May 13 23:59:29.882696 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 13 23:59:29.884050 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 23:59:29.887780 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 13 23:59:29.889806 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:59:29.896251 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 13 23:59:29.898770 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:59:29.901724 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 23:59:29.906089 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:59:29.911866 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:59:29.913088 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:59:29.915810 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 13 23:59:29.917486 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:59:29.923749 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 13 23:59:29.930585 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 23:59:29.943523 lvm[1814]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 23:59:29.946674 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 23:59:29.948602 systemd[1]: Reached target time-set.target - System Time Set. 
May 13 23:59:29.952815 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 13 23:59:29.957744 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:59:29.958645 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:59:29.960701 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:59:29.964851 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:59:29.998744 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 13 23:59:30.000122 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:59:30.000372 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:59:30.001427 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:59:30.001727 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 23:59:30.005153 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 23:59:30.005258 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 23:59:30.019527 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 23:59:30.021735 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 23:59:30.023307 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 13 23:59:30.031760 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 23:59:30.037915 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
May 13 23:59:30.049746 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 13 23:59:30.051358 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 13 23:59:30.081230 lvm[1849]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 23:59:30.105449 augenrules[1856]: No rules May 13 23:59:30.108990 systemd[1]: audit-rules.service: Deactivated successfully. May 13 23:59:30.109275 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 23:59:30.116337 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 13 23:59:30.122336 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 13 23:59:30.123657 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 13 23:59:30.128710 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 13 23:59:30.130003 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 23:59:30.131216 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 13 23:59:30.159879 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 13 23:59:30.217294 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:59:30.253050 systemd-resolved[1828]: Positive Trust Anchors: May 13 23:59:30.253469 systemd-resolved[1828]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 23:59:30.253631 systemd-resolved[1828]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 23:59:30.255781 systemd-networkd[1826]: lo: Link UP May 13 23:59:30.255791 systemd-networkd[1826]: lo: Gained carrier May 13 23:59:30.257249 systemd-networkd[1826]: Enumeration completed May 13 23:59:30.257387 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 23:59:30.257920 systemd-networkd[1826]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:59:30.258029 systemd-networkd[1826]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 23:59:30.260433 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 13 23:59:30.261794 systemd-networkd[1826]: eth0: Link UP May 13 23:59:30.262878 systemd-networkd[1826]: eth0: Gained carrier May 13 23:59:30.263358 systemd-networkd[1826]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:59:30.263836 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 13 23:59:30.274172 systemd-resolved[1828]: Defaulting to hostname 'linux'. 
May 13 23:59:30.274352 systemd-networkd[1826]: eth0: DHCPv4 address 172.31.19.86/20, gateway 172.31.16.1 acquired from 172.31.16.1 May 13 23:59:30.277037 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 23:59:30.277559 systemd[1]: Reached target network.target - Network. May 13 23:59:30.277951 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 23:59:30.278942 systemd[1]: Reached target sysinit.target - System Initialization. May 13 23:59:30.279370 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 13 23:59:30.279761 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 13 23:59:30.280229 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 13 23:59:30.280671 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 13 23:59:30.281091 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 13 23:59:30.281409 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 23:59:30.281441 systemd[1]: Reached target paths.target - Path Units. May 13 23:59:30.281765 systemd[1]: Reached target timers.target - Timer Units. May 13 23:59:30.284181 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 13 23:59:30.286046 systemd[1]: Starting docker.socket - Docker Socket for the API... May 13 23:59:30.289037 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 13 23:59:30.289930 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 13 23:59:30.290646 systemd[1]: Reached target ssh-access.target - SSH Access Available. 
May 13 23:59:30.293451 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 13 23:59:30.294569 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 13 23:59:30.295941 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 13 23:59:30.296558 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 13 23:59:30.297663 systemd[1]: Reached target sockets.target - Socket Units. May 13 23:59:30.298203 systemd[1]: Reached target basic.target - Basic System. May 13 23:59:30.298687 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 13 23:59:30.298728 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 13 23:59:30.300029 systemd[1]: Starting containerd.service - containerd container runtime... May 13 23:59:30.304656 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 13 23:59:30.311678 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 13 23:59:30.314532 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 13 23:59:30.317790 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 13 23:59:30.324447 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 13 23:59:30.326168 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 13 23:59:30.331366 systemd[1]: Started ntpd.service - Network Time Service. May 13 23:59:30.336267 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 13 23:59:30.343028 systemd[1]: Starting setup-oem.service - Setup OEM... May 13 23:59:30.372711 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
May 13 23:59:30.380588 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 13 23:59:30.396892 systemd[1]: Starting systemd-logind.service - User Login Management... May 13 23:59:30.399439 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 23:59:30.401241 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 13 23:59:30.405287 jq[1886]: false May 13 23:59:30.406796 systemd[1]: Starting update-engine.service - Update Engine... May 13 23:59:30.412245 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 13 23:59:30.416843 extend-filesystems[1887]: Found loop4 May 13 23:59:30.428454 extend-filesystems[1887]: Found loop5 May 13 23:59:30.428454 extend-filesystems[1887]: Found loop6 May 13 23:59:30.428454 extend-filesystems[1887]: Found loop7 May 13 23:59:30.428454 extend-filesystems[1887]: Found nvme0n1 May 13 23:59:30.428454 extend-filesystems[1887]: Found nvme0n1p1 May 13 23:59:30.428454 extend-filesystems[1887]: Found nvme0n1p2 May 13 23:59:30.428454 extend-filesystems[1887]: Found nvme0n1p3 May 13 23:59:30.428454 extend-filesystems[1887]: Found usr May 13 23:59:30.428454 extend-filesystems[1887]: Found nvme0n1p4 May 13 23:59:30.428454 extend-filesystems[1887]: Found nvme0n1p6 May 13 23:59:30.428454 extend-filesystems[1887]: Found nvme0n1p7 May 13 23:59:30.428454 extend-filesystems[1887]: Found nvme0n1p9 May 13 23:59:30.428454 extend-filesystems[1887]: Checking size of /dev/nvme0n1p9 May 13 23:59:30.424087 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 23:59:30.425919 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 13 23:59:30.456047 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
May 13 23:59:30.456343 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 13 23:59:30.471268 update_engine[1898]: I20250513 23:59:30.471158 1898 main.cc:92] Flatcar Update Engine starting May 13 23:59:30.484651 ntpd[1889]: ntpd 4.2.8p17@1.4004-o Tue May 13 21:33:08 UTC 2025 (1): Starting May 13 23:59:30.491564 ntpd[1889]: 13 May 23:59:30 ntpd[1889]: ntpd 4.2.8p17@1.4004-o Tue May 13 21:33:08 UTC 2025 (1): Starting May 13 23:59:30.491564 ntpd[1889]: 13 May 23:59:30 ntpd[1889]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp May 13 23:59:30.491564 ntpd[1889]: 13 May 23:59:30 ntpd[1889]: ---------------------------------------------------- May 13 23:59:30.491564 ntpd[1889]: 13 May 23:59:30 ntpd[1889]: ntp-4 is maintained by Network Time Foundation, May 13 23:59:30.491564 ntpd[1889]: 13 May 23:59:30 ntpd[1889]: Inc. (NTF), a non-profit 501(c)(3) public-benefit May 13 23:59:30.491564 ntpd[1889]: 13 May 23:59:30 ntpd[1889]: corporation. Support and training for ntp-4 are May 13 23:59:30.491564 ntpd[1889]: 13 May 23:59:30 ntpd[1889]: available at https://www.nwtime.org/support May 13 23:59:30.491564 ntpd[1889]: 13 May 23:59:30 ntpd[1889]: ---------------------------------------------------- May 13 23:59:30.488592 ntpd[1889]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp May 13 23:59:30.507184 ntpd[1889]: 13 May 23:59:30 ntpd[1889]: proto: precision = 0.096 usec (-23) May 13 23:59:30.507184 ntpd[1889]: 13 May 23:59:30 ntpd[1889]: basedate set to 2025-05-01 May 13 23:59:30.507184 ntpd[1889]: 13 May 23:59:30 ntpd[1889]: gps base set to 2025-05-04 (week 2365) May 13 23:59:30.488604 ntpd[1889]: ---------------------------------------------------- May 13 23:59:30.488613 ntpd[1889]: ntp-4 is maintained by Network Time Foundation, May 13 23:59:30.488623 ntpd[1889]: Inc. (NTF), a non-profit 501(c)(3) public-benefit May 13 23:59:30.488632 ntpd[1889]: corporation. 
Support and training for ntp-4 are May 13 23:59:30.488641 ntpd[1889]: available at https://www.nwtime.org/support May 13 23:59:30.488653 ntpd[1889]: ---------------------------------------------------- May 13 23:59:30.499746 ntpd[1889]: proto: precision = 0.096 usec (-23) May 13 23:59:30.502776 ntpd[1889]: basedate set to 2025-05-01 May 13 23:59:30.502797 ntpd[1889]: gps base set to 2025-05-04 (week 2365) May 13 23:59:30.507927 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 13 23:59:30.506756 dbus-daemon[1885]: [system] SELinux support is enabled May 13 23:59:30.522603 extend-filesystems[1887]: Resized partition /dev/nvme0n1p9 May 13 23:59:30.523142 ntpd[1889]: Listen and drop on 0 v6wildcard [::]:123 May 13 23:59:30.527689 ntpd[1889]: 13 May 23:59:30 ntpd[1889]: Listen and drop on 0 v6wildcard [::]:123 May 13 23:59:30.527689 ntpd[1889]: 13 May 23:59:30 ntpd[1889]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 13 23:59:30.527689 ntpd[1889]: 13 May 23:59:30 ntpd[1889]: Listen normally on 2 lo 127.0.0.1:123 May 13 23:59:30.527689 ntpd[1889]: 13 May 23:59:30 ntpd[1889]: Listen normally on 3 eth0 172.31.19.86:123 May 13 23:59:30.527689 ntpd[1889]: 13 May 23:59:30 ntpd[1889]: Listen normally on 4 lo [::1]:123 May 13 23:59:30.527689 ntpd[1889]: 13 May 23:59:30 ntpd[1889]: bind(21) AF_INET6 fe80::444:c7ff:feb7:38f3%2#123 flags 0x11 failed: Cannot assign requested address May 13 23:59:30.527689 ntpd[1889]: 13 May 23:59:30 ntpd[1889]: unable to create socket on eth0 (5) for fe80::444:c7ff:feb7:38f3%2#123 May 13 23:59:30.527689 ntpd[1889]: 13 May 23:59:30 ntpd[1889]: failed to init interface for address fe80::444:c7ff:feb7:38f3%2 May 13 23:59:30.527689 ntpd[1889]: 13 May 23:59:30 ntpd[1889]: Listening on routing socket on fd #21 for interface updates May 13 23:59:30.524034 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check 
(ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 23:59:30.523200 ntpd[1889]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 13 23:59:30.524094 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 13 23:59:30.526679 ntpd[1889]: Listen normally on 2 lo 127.0.0.1:123 May 13 23:59:30.526422 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 23:59:30.526728 ntpd[1889]: Listen normally on 3 eth0 172.31.19.86:123 May 13 23:59:30.526451 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 13 23:59:30.526768 ntpd[1889]: Listen normally on 4 lo [::1]:123 May 13 23:59:30.540658 extend-filesystems[1924]: resize2fs 1.47.2 (1-Jan-2025) May 13 23:59:30.526818 ntpd[1889]: bind(21) AF_INET6 fe80::444:c7ff:feb7:38f3%2#123 flags 0x11 failed: Cannot assign requested address May 13 23:59:30.526839 ntpd[1889]: unable to create socket on eth0 (5) for fe80::444:c7ff:feb7:38f3%2#123 May 13 23:59:30.526855 ntpd[1889]: failed to init interface for address fe80::444:c7ff:feb7:38f3%2 May 13 23:59:30.526893 ntpd[1889]: Listening on routing socket on fd #21 for interface updates May 13 23:59:30.543788 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
May 13 23:59:30.534890 dbus-daemon[1885]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1826 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 13 23:59:30.537321 dbus-daemon[1885]: [system] Successfully activated service 'org.freedesktop.systemd1' May 13 23:59:30.552846 jq[1899]: true May 13 23:59:30.552022 (ntainerd)[1922]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 13 23:59:30.551467 ntpd[1889]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 13 23:59:30.553460 ntpd[1889]: 13 May 23:59:30 ntpd[1889]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 13 23:59:30.560281 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks May 13 23:59:30.560389 ntpd[1889]: 13 May 23:59:30 ntpd[1889]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 13 23:59:30.558806 ntpd[1889]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 13 23:59:30.561622 update_engine[1898]: I20250513 23:59:30.561094 1898 update_check_scheduler.cc:74] Next update check in 9m38s May 13 23:59:30.580201 systemd[1]: motdgen.service: Deactivated successfully. May 13 23:59:30.581217 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 13 23:59:30.594837 systemd[1]: Started update-engine.service - Update Engine. May 13 23:59:30.606714 tar[1904]: linux-amd64/helm May 13 23:59:30.620053 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
May 13 23:59:30.630655 coreos-metadata[1884]: May 13 23:59:30.630 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 13 23:59:30.642817 coreos-metadata[1884]: May 13 23:59:30.631 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 May 13 23:59:30.642817 coreos-metadata[1884]: May 13 23:59:30.637 INFO Fetch successful May 13 23:59:30.642817 coreos-metadata[1884]: May 13 23:59:30.637 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 May 13 23:59:30.644704 coreos-metadata[1884]: May 13 23:59:30.644 INFO Fetch successful May 13 23:59:30.644704 coreos-metadata[1884]: May 13 23:59:30.644 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 May 13 23:59:30.645526 coreos-metadata[1884]: May 13 23:59:30.645 INFO Fetch successful May 13 23:59:30.645526 coreos-metadata[1884]: May 13 23:59:30.645 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 May 13 23:59:30.646005 coreos-metadata[1884]: May 13 23:59:30.645 INFO Fetch successful May 13 23:59:30.646005 coreos-metadata[1884]: May 13 23:59:30.645 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 May 13 23:59:30.648017 coreos-metadata[1884]: May 13 23:59:30.646 INFO Fetch failed with 404: resource not found May 13 23:59:30.648017 coreos-metadata[1884]: May 13 23:59:30.646 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 May 13 23:59:30.649666 coreos-metadata[1884]: May 13 23:59:30.649 INFO Fetch successful May 13 23:59:30.649666 coreos-metadata[1884]: May 13 23:59:30.649 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 May 13 23:59:30.651375 coreos-metadata[1884]: May 13 23:59:30.650 INFO Fetch successful May 13 23:59:30.651375 coreos-metadata[1884]: May 13 23:59:30.650 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 May 13 
23:59:30.660316 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 May 13 23:59:30.661767 coreos-metadata[1884]: May 13 23:59:30.658 INFO Fetch successful May 13 23:59:30.661767 coreos-metadata[1884]: May 13 23:59:30.658 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 May 13 23:59:30.674882 coreos-metadata[1884]: May 13 23:59:30.667 INFO Fetch successful May 13 23:59:30.674882 coreos-metadata[1884]: May 13 23:59:30.667 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 May 13 23:59:30.674882 coreos-metadata[1884]: May 13 23:59:30.672 INFO Fetch successful May 13 23:59:30.676803 jq[1932]: true May 13 23:59:30.685538 extend-filesystems[1924]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required May 13 23:59:30.685538 extend-filesystems[1924]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 23:59:30.685538 extend-filesystems[1924]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. May 13 23:59:30.684779 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 23:59:30.717018 extend-filesystems[1887]: Resized filesystem in /dev/nvme0n1p9 May 13 23:59:30.685080 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 13 23:59:30.691318 systemd[1]: Finished setup-oem.service - Setup OEM. May 13 23:59:30.755064 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 13 23:59:30.756160 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
May 13 23:59:30.800535 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1651) May 13 23:59:30.806131 systemd-logind[1897]: Watching system buttons on /dev/input/event1 (Power Button) May 13 23:59:30.812698 systemd-logind[1897]: Watching system buttons on /dev/input/event2 (Sleep Button) May 13 23:59:30.812733 systemd-logind[1897]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 13 23:59:30.822875 systemd-logind[1897]: New seat seat0. May 13 23:59:30.827181 systemd[1]: Started systemd-logind.service - User Login Management. May 13 23:59:30.945655 locksmithd[1937]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 23:59:30.954279 bash[2015]: Updated "/home/core/.ssh/authorized_keys" May 13 23:59:30.958089 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 13 23:59:30.964782 systemd[1]: Starting sshkeys.service... May 13 23:59:31.049449 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 23:59:31.063325 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 13 23:59:31.071084 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 13 23:59:31.074127 systemd[1]: Started systemd-hostnamed.service - Hostname Service. May 13 23:59:31.076792 dbus-daemon[1885]: [system] Successfully activated service 'org.freedesktop.hostname1' May 13 23:59:31.089686 dbus-daemon[1885]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1930 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 13 23:59:31.099884 systemd[1]: Starting polkit.service - Authorization Manager... 
May 13 23:59:31.225382 polkitd[2052]: Started polkitd version 121 May 13 23:59:31.293188 polkitd[2052]: Loading rules from directory /etc/polkit-1/rules.d May 13 23:59:31.294185 sshd_keygen[1921]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 23:59:31.295146 polkitd[2052]: Loading rules from directory /usr/share/polkit-1/rules.d May 13 23:59:31.297541 polkitd[2052]: Finished loading, compiling and executing 2 rules May 13 23:59:31.299188 dbus-daemon[1885]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 13 23:59:31.299410 systemd[1]: Started polkit.service - Authorization Manager. May 13 23:59:31.302245 polkitd[2052]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 13 23:59:31.314677 systemd-networkd[1826]: eth0: Gained IPv6LL May 13 23:59:31.341584 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 23:59:31.346595 systemd[1]: Reached target network-online.target - Network is Online. May 13 23:59:31.351429 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. May 13 23:59:31.358422 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:59:31.362669 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 23:59:31.430291 containerd[1922]: time="2025-05-13T23:59:31Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 13 23:59:31.431074 containerd[1922]: time="2025-05-13T23:59:31.431038125Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 May 13 23:59:31.443563 systemd-hostnamed[1930]: Hostname set to <ip-172-31-19-86> (transient) May 13 23:59:31.444952 systemd-resolved[1828]: System hostname changed to 'ip-172-31-19-86'. 
May 13 23:59:31.453168 containerd[1922]: time="2025-05-13T23:59:31.450530152Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.231µs" May 13 23:59:31.453168 containerd[1922]: time="2025-05-13T23:59:31.450575189Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 13 23:59:31.453168 containerd[1922]: time="2025-05-13T23:59:31.450601371Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 13 23:59:31.453168 containerd[1922]: time="2025-05-13T23:59:31.450793828Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 13 23:59:31.453168 containerd[1922]: time="2025-05-13T23:59:31.450815927Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 13 23:59:31.453168 containerd[1922]: time="2025-05-13T23:59:31.450848871Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 23:59:31.453168 containerd[1922]: time="2025-05-13T23:59:31.450916027Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 23:59:31.453168 containerd[1922]: time="2025-05-13T23:59:31.450931801Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 23:59:31.453168 containerd[1922]: time="2025-05-13T23:59:31.451260127Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 23:59:31.453168 containerd[1922]: time="2025-05-13T23:59:31.451283969Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 23:59:31.453168 containerd[1922]: time="2025-05-13T23:59:31.451301482Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 23:59:31.453168 containerd[1922]: time="2025-05-13T23:59:31.451313734Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 13 23:59:31.455366 coreos-metadata[2051]: May 13 23:59:31.452 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 13 23:59:31.455366 coreos-metadata[2051]: May 13 23:59:31.454 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 May 13 23:59:31.461991 containerd[1922]: time="2025-05-13T23:59:31.451414823Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 13 23:59:31.461991 containerd[1922]: time="2025-05-13T23:59:31.451706919Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 23:59:31.461991 containerd[1922]: time="2025-05-13T23:59:31.451751597Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 23:59:31.461991 containerd[1922]: time="2025-05-13T23:59:31.451769926Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 13 23:59:31.461991 containerd[1922]: time="2025-05-13T23:59:31.451822096Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 13 23:59:31.461991 containerd[1922]: time="2025-05-13T23:59:31.452319532Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 13 23:59:31.461991 
containerd[1922]: time="2025-05-13T23:59:31.452412336Z" level=info msg="metadata content store policy set" policy=shared May 13 23:59:31.461991 containerd[1922]: time="2025-05-13T23:59:31.457449442Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 13 23:59:31.461991 containerd[1922]: time="2025-05-13T23:59:31.457546835Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 13 23:59:31.461991 containerd[1922]: time="2025-05-13T23:59:31.457570349Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 13 23:59:31.461991 containerd[1922]: time="2025-05-13T23:59:31.457642104Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 13 23:59:31.461991 containerd[1922]: time="2025-05-13T23:59:31.457664577Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 13 23:59:31.461991 containerd[1922]: time="2025-05-13T23:59:31.457681498Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 13 23:59:31.461991 containerd[1922]: time="2025-05-13T23:59:31.457700155Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 13 23:59:31.464994 containerd[1922]: time="2025-05-13T23:59:31.457719252Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 13 23:59:31.464994 containerd[1922]: time="2025-05-13T23:59:31.457756270Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 13 23:59:31.464994 containerd[1922]: time="2025-05-13T23:59:31.457772552Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 13 23:59:31.464994 containerd[1922]: 
time="2025-05-13T23:59:31.457789211Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 13 23:59:31.464994 containerd[1922]: time="2025-05-13T23:59:31.457811598Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 13 23:59:31.464994 containerd[1922]: time="2025-05-13T23:59:31.457962915Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 13 23:59:31.464994 containerd[1922]: time="2025-05-13T23:59:31.457990700Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 13 23:59:31.464994 containerd[1922]: time="2025-05-13T23:59:31.458023915Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 13 23:59:31.464994 containerd[1922]: time="2025-05-13T23:59:31.458054705Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 13 23:59:31.464994 containerd[1922]: time="2025-05-13T23:59:31.458073605Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 13 23:59:31.464994 containerd[1922]: time="2025-05-13T23:59:31.458095918Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 13 23:59:31.464994 containerd[1922]: time="2025-05-13T23:59:31.458114274Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 13 23:59:31.464994 containerd[1922]: time="2025-05-13T23:59:31.458131095Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 13 23:59:31.464994 containerd[1922]: time="2025-05-13T23:59:31.458151047Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 13 23:59:31.464994 containerd[1922]: time="2025-05-13T23:59:31.458168148Z" level=info msg="loading plugin" 
id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 13 23:59:31.479031 containerd[1922]: time="2025-05-13T23:59:31.458184438Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 13 23:59:31.479031 containerd[1922]: time="2025-05-13T23:59:31.458269382Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 13 23:59:31.479031 containerd[1922]: time="2025-05-13T23:59:31.458288354Z" level=info msg="Start snapshots syncer" May 13 23:59:31.479031 containerd[1922]: time="2025-05-13T23:59:31.458331337Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 13 23:59:31.466210 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 23:59:31.481724 coreos-metadata[2051]: May 13 23:59:31.465 INFO Fetch successful May 13 23:59:31.481724 coreos-metadata[2051]: May 13 23:59:31.465 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 May 13 23:59:31.481724 coreos-metadata[2051]: May 13 23:59:31.475 INFO Fetch successful May 13 23:59:31.484298 containerd[1922]: time="2025-05-13T23:59:31.458734140Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 13 23:59:31.484298 containerd[1922]: time="2025-05-13T23:59:31.458808182Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 13 23:59:31.484298 containerd[1922]: time="2025-05-13T23:59:31.459241047Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 13 23:59:31.484298 containerd[1922]: time="2025-05-13T23:59:31.459443866Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 13 23:59:31.484298 containerd[1922]: time="2025-05-13T23:59:31.459476631Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 13 23:59:31.484298 containerd[1922]: time="2025-05-13T23:59:31.459496094Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 13 23:59:31.484298 containerd[1922]: time="2025-05-13T23:59:31.459531033Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 13 23:59:31.484298 containerd[1922]: time="2025-05-13T23:59:31.459549361Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 13 23:59:31.484298 containerd[1922]: time="2025-05-13T23:59:31.459567913Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 13 23:59:31.484298 containerd[1922]: time="2025-05-13T23:59:31.459585008Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 13 23:59:31.484298 containerd[1922]: time="2025-05-13T23:59:31.459620376Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 13 23:59:31.484298 containerd[1922]: time="2025-05-13T23:59:31.459645211Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 13 23:59:31.484298 containerd[1922]: time="2025-05-13T23:59:31.459659436Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 13 23:59:31.484298 containerd[1922]: time="2025-05-13T23:59:31.461215047Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 23:59:31.484298 containerd[1922]: time="2025-05-13T23:59:31.461251822Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 23:59:31.484298 containerd[1922]: time="2025-05-13T23:59:31.461267681Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 23:59:31.484298 containerd[1922]: time="2025-05-13T23:59:31.461285450Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 23:59:31.484298 containerd[1922]: time="2025-05-13T23:59:31.461299043Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 13 23:59:31.484298 containerd[1922]: time="2025-05-13T23:59:31.461315627Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 13 23:59:31.484298 containerd[1922]: time="2025-05-13T23:59:31.461336322Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 13 23:59:31.484298 containerd[1922]: time="2025-05-13T23:59:31.461362187Z" level=info msg="runtime interface created" May 13 23:59:31.484298 containerd[1922]: time="2025-05-13T23:59:31.461370432Z" level=info msg="created NRI interface" May 13 23:59:31.484298 containerd[1922]: time="2025-05-13T23:59:31.461382226Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 13 23:59:31.484298 containerd[1922]: time="2025-05-13T23:59:31.461402489Z" level=info msg="Connect containerd service" May 13 23:59:31.484298 containerd[1922]: time="2025-05-13T23:59:31.461453211Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 23:59:31.470298 systemd[1]: 
Finished nvidia.service - NVIDIA Configure Service. May 13 23:59:31.487976 containerd[1922]: time="2025-05-13T23:59:31.482297441Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 23:59:31.479590 unknown[2051]: wrote ssh authorized keys file for user: core May 13 23:59:31.480265 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 23:59:31.486421 systemd[1]: Started sshd@0-172.31.19.86:22-147.75.109.163:56992.service - OpenSSH per-connection server daemon (147.75.109.163:56992). May 13 23:59:31.541071 systemd[1]: issuegen.service: Deactivated successfully. May 13 23:59:31.541371 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 23:59:31.549742 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 23:59:31.576806 update-ssh-keys[2113]: Updated "/home/core/.ssh/authorized_keys" May 13 23:59:31.580591 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 13 23:59:31.587028 amazon-ssm-agent[2087]: Initializing new seelog logger May 13 23:59:31.587028 amazon-ssm-agent[2087]: New Seelog Logger Creation Complete May 13 23:59:31.587028 amazon-ssm-agent[2087]: 2025/05/13 23:59:31 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 13 23:59:31.587028 amazon-ssm-agent[2087]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 13 23:59:31.587953 systemd[1]: Finished sshkeys.service. May 13 23:59:31.591490 amazon-ssm-agent[2087]: 2025/05/13 23:59:31 processing appconfig overrides May 13 23:59:31.593565 amazon-ssm-agent[2087]: 2025/05/13 23:59:31 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 13 23:59:31.593676 amazon-ssm-agent[2087]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
May 13 23:59:31.597682 amazon-ssm-agent[2087]: 2025/05/13 23:59:31 processing appconfig overrides May 13 23:59:31.597682 amazon-ssm-agent[2087]: 2025/05/13 23:59:31 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 13 23:59:31.597682 amazon-ssm-agent[2087]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 13 23:59:31.597682 amazon-ssm-agent[2087]: 2025/05/13 23:59:31 processing appconfig overrides May 13 23:59:31.603533 amazon-ssm-agent[2087]: 2025-05-13 23:59:31 INFO Proxy environment variables: May 13 23:59:31.610561 amazon-ssm-agent[2087]: 2025/05/13 23:59:31 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 13 23:59:31.610561 amazon-ssm-agent[2087]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 13 23:59:31.610561 amazon-ssm-agent[2087]: 2025/05/13 23:59:31 processing appconfig overrides May 13 23:59:31.621163 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 23:59:31.626422 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 23:59:31.630333 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 13 23:59:31.631652 systemd[1]: Reached target getty.target - Login Prompts. May 13 23:59:31.703541 amazon-ssm-agent[2087]: 2025-05-13 23:59:31 INFO https_proxy: May 13 23:59:31.807236 sshd[2106]: Accepted publickey for core from 147.75.109.163 port 56992 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4 May 13 23:59:31.805417 sshd-session[2106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:31.809186 amazon-ssm-agent[2087]: 2025-05-13 23:59:31 INFO http_proxy: May 13 23:59:31.829389 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 23:59:31.833877 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 23:59:31.863658 systemd-logind[1897]: New session 1 of user core. 
May 13 23:59:31.899695 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 23:59:31.908652 amazon-ssm-agent[2087]: 2025-05-13 23:59:31 INFO no_proxy: May 13 23:59:31.909092 systemd[1]: Starting user@500.service - User Manager for UID 500... May 13 23:59:31.929040 (systemd)[2138]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 23:59:31.934103 systemd-logind[1897]: New session c1 of user core. May 13 23:59:31.943278 containerd[1922]: time="2025-05-13T23:59:31.942071409Z" level=info msg="Start subscribing containerd event" May 13 23:59:31.943278 containerd[1922]: time="2025-05-13T23:59:31.942153679Z" level=info msg="Start recovering state" May 13 23:59:31.943278 containerd[1922]: time="2025-05-13T23:59:31.942279630Z" level=info msg="Start event monitor" May 13 23:59:31.943278 containerd[1922]: time="2025-05-13T23:59:31.942296690Z" level=info msg="Start cni network conf syncer for default" May 13 23:59:31.943278 containerd[1922]: time="2025-05-13T23:59:31.942308462Z" level=info msg="Start streaming server" May 13 23:59:31.943278 containerd[1922]: time="2025-05-13T23:59:31.942332276Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 13 23:59:31.943278 containerd[1922]: time="2025-05-13T23:59:31.942342944Z" level=info msg="runtime interface starting up..." May 13 23:59:31.943278 containerd[1922]: time="2025-05-13T23:59:31.942352160Z" level=info msg="starting plugins..." May 13 23:59:31.943278 containerd[1922]: time="2025-05-13T23:59:31.942367785Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 13 23:59:31.948720 containerd[1922]: time="2025-05-13T23:59:31.945014508Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 23:59:31.948720 containerd[1922]: time="2025-05-13T23:59:31.945085938Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 13 23:59:31.948720 containerd[1922]: time="2025-05-13T23:59:31.945154606Z" level=info msg="containerd successfully booted in 0.515400s" May 13 23:59:31.945252 systemd[1]: Started containerd.service - containerd container runtime. May 13 23:59:32.009625 amazon-ssm-agent[2087]: 2025-05-13 23:59:31 INFO Checking if agent identity type OnPrem can be assumed May 13 23:59:32.107845 amazon-ssm-agent[2087]: 2025-05-13 23:59:31 INFO Checking if agent identity type EC2 can be assumed May 13 23:59:32.207220 amazon-ssm-agent[2087]: 2025-05-13 23:59:31 INFO Agent will take identity from EC2 May 13 23:59:32.254895 systemd[2138]: Queued start job for default target default.target. May 13 23:59:32.261945 systemd[2138]: Created slice app.slice - User Application Slice. May 13 23:59:32.261991 systemd[2138]: Reached target paths.target - Paths. May 13 23:59:32.262164 systemd[2138]: Reached target timers.target - Timers. May 13 23:59:32.268665 systemd[2138]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 23:59:32.278262 tar[1904]: linux-amd64/LICENSE May 13 23:59:32.278262 tar[1904]: linux-amd64/README.md May 13 23:59:32.289777 systemd[2138]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 23:59:32.291259 systemd[2138]: Reached target sockets.target - Sockets. May 13 23:59:32.291344 systemd[2138]: Reached target basic.target - Basic System. May 13 23:59:32.291398 systemd[2138]: Reached target default.target - Main User Target. May 13 23:59:32.291439 systemd[2138]: Startup finished in 330ms. May 13 23:59:32.291826 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 23:59:32.301603 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 23:59:32.310133 amazon-ssm-agent[2087]: 2025-05-13 23:59:31 INFO [amazon-ssm-agent] using named pipe channel for IPC May 13 23:59:32.305551 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
May 13 23:59:32.408518 amazon-ssm-agent[2087]: 2025-05-13 23:59:31 INFO [amazon-ssm-agent] using named pipe channel for IPC May 13 23:59:32.465409 systemd[1]: Started sshd@1-172.31.19.86:22-147.75.109.163:56994.service - OpenSSH per-connection server daemon (147.75.109.163:56994). May 13 23:59:32.506573 amazon-ssm-agent[2087]: 2025-05-13 23:59:31 INFO [amazon-ssm-agent] using named pipe channel for IPC May 13 23:59:32.556888 amazon-ssm-agent[2087]: 2025-05-13 23:59:31 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 May 13 23:59:32.556888 amazon-ssm-agent[2087]: 2025-05-13 23:59:31 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 May 13 23:59:32.556888 amazon-ssm-agent[2087]: 2025-05-13 23:59:31 INFO [amazon-ssm-agent] Starting Core Agent May 13 23:59:32.556888 amazon-ssm-agent[2087]: 2025-05-13 23:59:31 INFO [amazon-ssm-agent] registrar detected. Attempting registration May 13 23:59:32.557091 amazon-ssm-agent[2087]: 2025-05-13 23:59:31 INFO [Registrar] Starting registrar module May 13 23:59:32.557091 amazon-ssm-agent[2087]: 2025-05-13 23:59:31 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration May 13 23:59:32.557091 amazon-ssm-agent[2087]: 2025-05-13 23:59:32 INFO [EC2Identity] EC2 registration was successful. 
May 13 23:59:32.557091 amazon-ssm-agent[2087]: 2025-05-13 23:59:32 INFO [CredentialRefresher] credentialRefresher has started May 13 23:59:32.557091 amazon-ssm-agent[2087]: 2025-05-13 23:59:32 INFO [CredentialRefresher] Starting credentials refresher loop May 13 23:59:32.557091 amazon-ssm-agent[2087]: 2025-05-13 23:59:32 INFO EC2RoleProvider Successfully connected with instance profile role credentials May 13 23:59:32.605917 amazon-ssm-agent[2087]: 2025-05-13 23:59:32 INFO [CredentialRefresher] Next credential rotation will be in 30.1916591065 minutes May 13 23:59:32.648436 sshd[2154]: Accepted publickey for core from 147.75.109.163 port 56994 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4 May 13 23:59:32.650421 sshd-session[2154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:32.656627 systemd-logind[1897]: New session 2 of user core. May 13 23:59:32.660784 systemd[1]: Started session-2.scope - Session 2 of User core. May 13 23:59:32.781194 sshd[2156]: Connection closed by 147.75.109.163 port 56994 May 13 23:59:32.781816 sshd-session[2154]: pam_unix(sshd:session): session closed for user core May 13 23:59:32.784689 systemd[1]: sshd@1-172.31.19.86:22-147.75.109.163:56994.service: Deactivated successfully. May 13 23:59:32.787004 systemd[1]: session-2.scope: Deactivated successfully. May 13 23:59:32.789411 systemd-logind[1897]: Session 2 logged out. Waiting for processes to exit. May 13 23:59:32.790861 systemd-logind[1897]: Removed session 2. May 13 23:59:32.816730 systemd[1]: Started sshd@2-172.31.19.86:22-147.75.109.163:56998.service - OpenSSH per-connection server daemon (147.75.109.163:56998). 
May 13 23:59:32.990991 sshd[2162]: Accepted publickey for core from 147.75.109.163 port 56998 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4 May 13 23:59:32.993164 sshd-session[2162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:32.998565 systemd-logind[1897]: New session 3 of user core. May 13 23:59:33.003748 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 23:59:33.122578 sshd[2164]: Connection closed by 147.75.109.163 port 56998 May 13 23:59:33.123137 sshd-session[2162]: pam_unix(sshd:session): session closed for user core May 13 23:59:33.126311 systemd[1]: sshd@2-172.31.19.86:22-147.75.109.163:56998.service: Deactivated successfully. May 13 23:59:33.128061 systemd[1]: session-3.scope: Deactivated successfully. May 13 23:59:33.129911 systemd-logind[1897]: Session 3 logged out. Waiting for processes to exit. May 13 23:59:33.131344 systemd-logind[1897]: Removed session 3. May 13 23:59:33.489050 ntpd[1889]: Listen normally on 6 eth0 [fe80::444:c7ff:feb7:38f3%2]:123 May 13 23:59:33.489412 ntpd[1889]: 13 May 23:59:33 ntpd[1889]: Listen normally on 6 eth0 [fe80::444:c7ff:feb7:38f3%2]:123 May 13 23:59:33.575719 amazon-ssm-agent[2087]: 2025-05-13 23:59:33 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process May 13 23:59:33.677015 amazon-ssm-agent[2087]: 2025-05-13 23:59:33 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2171) started May 13 23:59:33.777652 amazon-ssm-agent[2087]: 2025-05-13 23:59:33 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds May 13 23:59:33.850582 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:59:33.851538 systemd[1]: Reached target multi-user.target - Multi-User System. 
May 13 23:59:33.852653 systemd[1]: Startup finished in 608ms (kernel) + 8.963s (initrd) + 7.233s (userspace) = 16.805s. May 13 23:59:33.856694 (kubelet)[2188]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:59:35.110939 kubelet[2188]: E0513 23:59:35.110856 2188 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:59:35.113224 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:59:35.113378 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:59:35.113886 systemd[1]: kubelet.service: Consumed 1.031s CPU time, 247.4M memory peak. May 13 23:59:38.949975 systemd-resolved[1828]: Clock change detected. Flushing caches. May 13 23:59:44.615892 systemd[1]: Started sshd@3-172.31.19.86:22-147.75.109.163:60992.service - OpenSSH per-connection server daemon (147.75.109.163:60992). May 13 23:59:44.783437 sshd[2201]: Accepted publickey for core from 147.75.109.163 port 60992 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4 May 13 23:59:44.784810 sshd-session[2201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:44.791711 systemd-logind[1897]: New session 4 of user core. May 13 23:59:44.802034 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 23:59:44.919262 sshd[2203]: Connection closed by 147.75.109.163 port 60992 May 13 23:59:44.920263 sshd-session[2201]: pam_unix(sshd:session): session closed for user core May 13 23:59:44.923663 systemd[1]: sshd@3-172.31.19.86:22-147.75.109.163:60992.service: Deactivated successfully. 
May 13 23:59:44.925682 systemd[1]: session-4.scope: Deactivated successfully. May 13 23:59:44.927264 systemd-logind[1897]: Session 4 logged out. Waiting for processes to exit. May 13 23:59:44.928432 systemd-logind[1897]: Removed session 4. May 13 23:59:44.949134 systemd[1]: Started sshd@4-172.31.19.86:22-147.75.109.163:32768.service - OpenSSH per-connection server daemon (147.75.109.163:32768). May 13 23:59:45.127399 sshd[2209]: Accepted publickey for core from 147.75.109.163 port 32768 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4 May 13 23:59:45.129079 sshd-session[2209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:45.133871 systemd-logind[1897]: New session 5 of user core. May 13 23:59:45.137970 systemd[1]: Started session-5.scope - Session 5 of User core. May 13 23:59:45.255352 sshd[2211]: Connection closed by 147.75.109.163 port 32768 May 13 23:59:45.256255 sshd-session[2209]: pam_unix(sshd:session): session closed for user core May 13 23:59:45.259975 systemd[1]: sshd@4-172.31.19.86:22-147.75.109.163:32768.service: Deactivated successfully. May 13 23:59:45.262158 systemd[1]: session-5.scope: Deactivated successfully. May 13 23:59:45.263195 systemd-logind[1897]: Session 5 logged out. Waiting for processes to exit. May 13 23:59:45.264290 systemd-logind[1897]: Removed session 5. May 13 23:59:45.291939 systemd[1]: Started sshd@5-172.31.19.86:22-147.75.109.163:32784.service - OpenSSH per-connection server daemon (147.75.109.163:32784). May 13 23:59:45.455757 sshd[2217]: Accepted publickey for core from 147.75.109.163 port 32784 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4 May 13 23:59:45.457172 sshd-session[2217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:45.462394 systemd-logind[1897]: New session 6 of user core. May 13 23:59:45.468044 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 13 23:59:45.585003 sshd[2219]: Connection closed by 147.75.109.163 port 32784 May 13 23:59:45.586302 sshd-session[2217]: pam_unix(sshd:session): session closed for user core May 13 23:59:45.589526 systemd[1]: sshd@5-172.31.19.86:22-147.75.109.163:32784.service: Deactivated successfully. May 13 23:59:45.591686 systemd[1]: session-6.scope: Deactivated successfully. May 13 23:59:45.593290 systemd-logind[1897]: Session 6 logged out. Waiting for processes to exit. May 13 23:59:45.594579 systemd-logind[1897]: Removed session 6. May 13 23:59:45.619178 systemd[1]: Started sshd@6-172.31.19.86:22-147.75.109.163:32800.service - OpenSSH per-connection server daemon (147.75.109.163:32800). May 13 23:59:45.790163 sshd[2225]: Accepted publickey for core from 147.75.109.163 port 32800 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4 May 13 23:59:45.791784 sshd-session[2225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:45.796916 systemd-logind[1897]: New session 7 of user core. May 13 23:59:45.804057 systemd[1]: Started session-7.scope - Session 7 of User core. May 13 23:59:45.914085 sudo[2228]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 13 23:59:45.914424 sudo[2228]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:59:45.927757 sudo[2228]: pam_unix(sudo:session): session closed for user root May 13 23:59:45.949821 sshd[2227]: Connection closed by 147.75.109.163 port 32800 May 13 23:59:45.951083 sshd-session[2225]: pam_unix(sshd:session): session closed for user core May 13 23:59:45.955163 systemd[1]: sshd@6-172.31.19.86:22-147.75.109.163:32800.service: Deactivated successfully. May 13 23:59:45.957232 systemd[1]: session-7.scope: Deactivated successfully. May 13 23:59:45.958844 systemd-logind[1897]: Session 7 logged out. Waiting for processes to exit. May 13 23:59:45.960166 systemd-logind[1897]: Removed session 7. 
May 13 23:59:45.983632 systemd[1]: Started sshd@7-172.31.19.86:22-147.75.109.163:32810.service - OpenSSH per-connection server daemon (147.75.109.163:32810). May 13 23:59:46.160000 sshd[2234]: Accepted publickey for core from 147.75.109.163 port 32810 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4 May 13 23:59:46.161845 sshd-session[2234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:46.166814 systemd-logind[1897]: New session 8 of user core. May 13 23:59:46.174040 systemd[1]: Started session-8.scope - Session 8 of User core. May 13 23:59:46.272173 sudo[2238]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 13 23:59:46.272564 sudo[2238]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:59:46.276597 sudo[2238]: pam_unix(sudo:session): session closed for user root May 13 23:59:46.282699 sudo[2237]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 13 23:59:46.283134 sudo[2237]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:59:46.293938 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 23:59:46.334673 augenrules[2260]: No rules May 13 23:59:46.336132 systemd[1]: audit-rules.service: Deactivated successfully. May 13 23:59:46.336414 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 23:59:46.337982 sudo[2237]: pam_unix(sudo:session): session closed for user root May 13 23:59:46.360057 sshd[2236]: Connection closed by 147.75.109.163 port 32810 May 13 23:59:46.360586 sshd-session[2234]: pam_unix(sshd:session): session closed for user core May 13 23:59:46.364687 systemd[1]: sshd@7-172.31.19.86:22-147.75.109.163:32810.service: Deactivated successfully. May 13 23:59:46.366540 systemd[1]: session-8.scope: Deactivated successfully. 
May 13 23:59:46.367241 systemd-logind[1897]: Session 8 logged out. Waiting for processes to exit.
May 13 23:59:46.368156 systemd-logind[1897]: Removed session 8.
May 13 23:59:46.393902 systemd[1]: Started sshd@8-172.31.19.86:22-147.75.109.163:32814.service - OpenSSH per-connection server daemon (147.75.109.163:32814).
May 13 23:59:46.559491 sshd[2269]: Accepted publickey for core from 147.75.109.163 port 32814 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4
May 13 23:59:46.560847 sshd-session[2269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:59:46.565615 systemd-logind[1897]: New session 9 of user core.
May 13 23:59:46.571958 systemd[1]: Started session-9.scope - Session 9 of User core.
May 13 23:59:46.574455 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 13 23:59:46.576212 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:59:46.670751 sudo[2275]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 13 23:59:46.671122 sudo[2275]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 23:59:46.808443 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:59:46.820360 (kubelet)[2289]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 23:59:46.875305 kubelet[2289]: E0513 23:59:46.875245 2289 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 23:59:46.880567 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 23:59:46.880804 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 23:59:46.881752 systemd[1]: kubelet.service: Consumed 171ms CPU time, 95.2M memory peak.
May 13 23:59:47.126013 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 13 23:59:47.139521 (dockerd)[2307]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 13 23:59:47.479647 dockerd[2307]: time="2025-05-13T23:59:47.478946087Z" level=info msg="Starting up"
May 13 23:59:47.482417 dockerd[2307]: time="2025-05-13T23:59:47.482366521Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 13 23:59:47.531274 systemd[1]: var-lib-docker-metacopy\x2dcheck1893638506-merged.mount: Deactivated successfully.
May 13 23:59:47.546190 dockerd[2307]: time="2025-05-13T23:59:47.546108587Z" level=info msg="Loading containers: start."
May 13 23:59:47.738480 kernel: Initializing XFRM netlink socket
May 13 23:59:47.739182 (udev-worker)[2331]: Network interface NamePolicy= disabled on kernel command line.
May 13 23:59:47.825304 systemd-networkd[1826]: docker0: Link UP
May 13 23:59:47.883614 dockerd[2307]: time="2025-05-13T23:59:47.883555357Z" level=info msg="Loading containers: done."
May 13 23:59:47.900985 dockerd[2307]: time="2025-05-13T23:59:47.900909983Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 13 23:59:47.901160 dockerd[2307]: time="2025-05-13T23:59:47.901012467Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1
May 13 23:59:47.901160 dockerd[2307]: time="2025-05-13T23:59:47.901137484Z" level=info msg="Daemon has completed initialization"
May 13 23:59:47.936050 dockerd[2307]: time="2025-05-13T23:59:47.935984815Z" level=info msg="API listen on /run/docker.sock"
May 13 23:59:47.936180 systemd[1]: Started docker.service - Docker Application Container Engine.
May 13 23:59:49.215516 containerd[1922]: time="2025-05-13T23:59:49.215465490Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
May 13 23:59:49.796458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3517535763.mount: Deactivated successfully.
May 13 23:59:52.103424 containerd[1922]: time="2025-05-13T23:59:52.103357568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:59:52.104269 containerd[1922]: time="2025-05-13T23:59:52.104200409Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873"
May 13 23:59:52.105566 containerd[1922]: time="2025-05-13T23:59:52.105506168Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:59:52.112814 containerd[1922]: time="2025-05-13T23:59:52.111925585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:59:52.112814 containerd[1922]: time="2025-05-13T23:59:52.112636816Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.897132094s"
May 13 23:59:52.112814 containerd[1922]: time="2025-05-13T23:59:52.112677190Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\""
May 13 23:59:52.132810 containerd[1922]: time="2025-05-13T23:59:52.132737752Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
May 13 23:59:54.411525 containerd[1922]: time="2025-05-13T23:59:54.411379935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:59:54.412643 containerd[1922]: time="2025-05-13T23:59:54.412447964Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534"
May 13 23:59:54.413646 containerd[1922]: time="2025-05-13T23:59:54.413604552Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:59:54.416791 containerd[1922]: time="2025-05-13T23:59:54.416353942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:59:54.417194 containerd[1922]: time="2025-05-13T23:59:54.417001844Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 2.284040755s"
May 13 23:59:54.417194 containerd[1922]: time="2025-05-13T23:59:54.417029790Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\""
May 13 23:59:54.436931 containerd[1922]: time="2025-05-13T23:59:54.436868812Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
May 13 23:59:56.083912 containerd[1922]: time="2025-05-13T23:59:56.083859365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:59:56.085380 containerd[1922]: time="2025-05-13T23:59:56.085303168Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682"
May 13 23:59:56.087516 containerd[1922]: time="2025-05-13T23:59:56.087445691Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:59:56.090824 containerd[1922]: time="2025-05-13T23:59:56.090736857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:59:56.092174 containerd[1922]: time="2025-05-13T23:59:56.091529254Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.654623039s"
May 13 23:59:56.092174 containerd[1922]: time="2025-05-13T23:59:56.091565346Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\""
May 13 23:59:56.111791 containerd[1922]: time="2025-05-13T23:59:56.111729502Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
May 13 23:59:57.131637 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 13 23:59:57.135432 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:59:57.247357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1787269475.mount: Deactivated successfully.
May 13 23:59:57.418538 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:59:57.429368 (kubelet)[2613]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 23:59:57.512484 kubelet[2613]: E0513 23:59:57.512025 2613 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 23:59:57.516599 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 23:59:57.516823 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 23:59:57.518443 systemd[1]: kubelet.service: Consumed 200ms CPU time, 94.2M memory peak.
May 13 23:59:57.919001 containerd[1922]: time="2025-05-13T23:59:57.918789839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:59:57.920559 containerd[1922]: time="2025-05-13T23:59:57.920489597Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817"
May 13 23:59:57.922890 containerd[1922]: time="2025-05-13T23:59:57.922809429Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:59:57.925895 containerd[1922]: time="2025-05-13T23:59:57.925833882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:59:57.926675 containerd[1922]: time="2025-05-13T23:59:57.926618772Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.814830281s"
May 13 23:59:57.926675 containerd[1922]: time="2025-05-13T23:59:57.926665382Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\""
May 13 23:59:57.946342 containerd[1922]: time="2025-05-13T23:59:57.946290345Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 13 23:59:58.470516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount736318299.mount: Deactivated successfully.
May 13 23:59:59.472747 containerd[1922]: time="2025-05-13T23:59:59.472678463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:59:59.474339 containerd[1922]: time="2025-05-13T23:59:59.474076342Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
May 13 23:59:59.475844 containerd[1922]: time="2025-05-13T23:59:59.475780873Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:59:59.480803 containerd[1922]: time="2025-05-13T23:59:59.479573871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:59:59.481295 containerd[1922]: time="2025-05-13T23:59:59.481088316Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.534745866s"
May 13 23:59:59.481295 containerd[1922]: time="2025-05-13T23:59:59.481136369Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
May 13 23:59:59.503976 containerd[1922]: time="2025-05-13T23:59:59.503933727Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
May 13 23:59:59.968443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2910043988.mount: Deactivated successfully.
May 13 23:59:59.973048 containerd[1922]: time="2025-05-13T23:59:59.972984273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:59:59.973819 containerd[1922]: time="2025-05-13T23:59:59.973684476Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
May 13 23:59:59.975967 containerd[1922]: time="2025-05-13T23:59:59.975925902Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:59:59.978615 containerd[1922]: time="2025-05-13T23:59:59.977840454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:59:59.978615 containerd[1922]: time="2025-05-13T23:59:59.978492491Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 474.452499ms"
May 13 23:59:59.978615 containerd[1922]: time="2025-05-13T23:59:59.978522021Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
May 13 23:59:59.997282 containerd[1922]: time="2025-05-13T23:59:59.997240765Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
May 14 00:00:00.585426 systemd[1]: Started logrotate.service - Rotate and Compress System Logs.
May 14 00:00:00.611151 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3751448894.mount: Deactivated successfully.
May 14 00:00:00.634677 systemd[1]: logrotate.service: Deactivated successfully.
May 14 00:00:02.957687 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
May 14 00:00:05.965964 containerd[1922]: time="2025-05-14T00:00:05.965902673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:05.968415 containerd[1922]: time="2025-05-14T00:00:05.967645864Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571"
May 14 00:00:05.973041 containerd[1922]: time="2025-05-14T00:00:05.972523116Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:05.981869 containerd[1922]: time="2025-05-14T00:00:05.981821961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:05.985565 containerd[1922]: time="2025-05-14T00:00:05.985513374Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 5.988227295s"
May 14 00:00:05.985799 containerd[1922]: time="2025-05-14T00:00:05.985775031Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
May 14 00:00:07.635586 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 14 00:00:07.641048 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 00:00:07.988971 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 00:00:08.016585 (kubelet)[2830]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 00:00:08.104202 kubelet[2830]: E0514 00:00:08.104138 2830 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 00:00:08.107055 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 00:00:08.107242 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 00:00:08.107615 systemd[1]: kubelet.service: Consumed 216ms CPU time, 95.7M memory peak.
May 14 00:00:09.560993 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 00:00:09.561264 systemd[1]: kubelet.service: Consumed 216ms CPU time, 95.7M memory peak.
May 14 00:00:09.564173 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 00:00:09.592518 systemd[1]: Reload requested from client PID 2845 ('systemctl') (unit session-9.scope)...
May 14 00:00:09.592537 systemd[1]: Reloading...
May 14 00:00:09.752863 zram_generator::config[2890]: No configuration found.
May 14 00:00:09.904441 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 00:00:10.023792 systemd[1]: Reloading finished in 430 ms.
May 14 00:00:10.094984 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 00:00:10.099061 systemd[1]: kubelet.service: Deactivated successfully.
May 14 00:00:10.099341 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 00:00:10.099413 systemd[1]: kubelet.service: Consumed 133ms CPU time, 83.1M memory peak.
May 14 00:00:10.101338 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 00:00:10.312436 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 00:00:10.324343 (kubelet)[2955]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 14 00:00:10.377670 kubelet[2955]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 00:00:10.377670 kubelet[2955]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 14 00:00:10.377670 kubelet[2955]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 00:00:10.379733 kubelet[2955]: I0514 00:00:10.379644 2955 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 14 00:00:11.018759 kubelet[2955]: I0514 00:00:11.018700 2955 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 14 00:00:11.018759 kubelet[2955]: I0514 00:00:11.018749 2955 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 14 00:00:11.019133 kubelet[2955]: I0514 00:00:11.019110 2955 server.go:927] "Client rotation is on, will bootstrap in background"
May 14 00:00:11.051576 kubelet[2955]: I0514 00:00:11.051532 2955 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 14 00:00:11.054796 kubelet[2955]: E0514 00:00:11.054632 2955 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.19.86:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.19.86:6443: connect: connection refused
May 14 00:00:11.077828 kubelet[2955]: I0514 00:00:11.077795 2955 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 14 00:00:11.084835 kubelet[2955]: I0514 00:00:11.084744 2955 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 14 00:00:11.088448 kubelet[2955]: I0514 00:00:11.084820 2955 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-86","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 14 00:00:11.089426 kubelet[2955]: I0514 00:00:11.089377 2955 topology_manager.go:138] "Creating topology manager with none policy"
May 14 00:00:11.089426 kubelet[2955]: I0514 00:00:11.089410 2955 container_manager_linux.go:301] "Creating device plugin manager"
May 14 00:00:11.089566 kubelet[2955]: I0514 00:00:11.089555 2955 state_mem.go:36] "Initialized new in-memory state store"
May 14 00:00:11.090861 kubelet[2955]: I0514 00:00:11.090752 2955 kubelet.go:400] "Attempting to sync node with API server"
May 14 00:00:11.090861 kubelet[2955]: I0514 00:00:11.090802 2955 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 14 00:00:11.090861 kubelet[2955]: I0514 00:00:11.090827 2955 kubelet.go:312] "Adding apiserver pod source"
May 14 00:00:11.090861 kubelet[2955]: I0514 00:00:11.090845 2955 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 14 00:00:11.095796 kubelet[2955]: W0514 00:00:11.095487 2955 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.19.86:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.86:6443: connect: connection refused
May 14 00:00:11.095796 kubelet[2955]: E0514 00:00:11.095640 2955 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.19.86:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.86:6443: connect: connection refused
May 14 00:00:11.095796 kubelet[2955]: I0514 00:00:11.095730 2955 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
May 14 00:00:11.097853 kubelet[2955]: I0514 00:00:11.097819 2955 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 14 00:00:11.097956 kubelet[2955]: W0514 00:00:11.097882 2955 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 14 00:00:11.098538 kubelet[2955]: I0514 00:00:11.098445 2955 server.go:1264] "Started kubelet"
May 14 00:00:11.106993 kubelet[2955]: W0514 00:00:11.105942 2955 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.19.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-86&limit=500&resourceVersion=0": dial tcp 172.31.19.86:6443: connect: connection refused
May 14 00:00:11.106993 kubelet[2955]: E0514 00:00:11.105996 2955 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.19.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-86&limit=500&resourceVersion=0": dial tcp 172.31.19.86:6443: connect: connection refused
May 14 00:00:11.106993 kubelet[2955]: I0514 00:00:11.106215 2955 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 14 00:00:11.106993 kubelet[2955]: I0514 00:00:11.106759 2955 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 14 00:00:11.107202 kubelet[2955]: I0514 00:00:11.107161 2955 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 14 00:00:11.107405 kubelet[2955]: E0514 00:00:11.107299 2955 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.19.86:6443/api/v1/namespaces/default/events\": dial tcp 172.31.19.86:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-19-86.183f3bb0d3e6762c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-86,UID:ip-172-31-19-86,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-86,},FirstTimestamp:2025-05-14 00:00:11.09842078 +0000 UTC m=+0.770118597,LastTimestamp:2025-05-14 00:00:11.09842078 +0000 UTC m=+0.770118597,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-86,}"
May 14 00:00:11.108094 kubelet[2955]: I0514 00:00:11.108078 2955 server.go:455] "Adding debug handlers to kubelet server"
May 14 00:00:11.109475 kubelet[2955]: I0514 00:00:11.108737 2955 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 14 00:00:11.110390 kubelet[2955]: I0514 00:00:11.110372 2955 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 14 00:00:11.113178 kubelet[2955]: I0514 00:00:11.113157 2955 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 14 00:00:11.115513 kubelet[2955]: I0514 00:00:11.114903 2955 reconciler.go:26] "Reconciler: start to sync state"
May 14 00:00:11.115513 kubelet[2955]: W0514 00:00:11.115273 2955 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.19.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.86:6443: connect: connection refused
May 14 00:00:11.115513 kubelet[2955]: E0514 00:00:11.115316 2955 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.19.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.86:6443: connect: connection refused
May 14 00:00:11.117395 kubelet[2955]: E0514 00:00:11.117359 2955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-86?timeout=10s\": dial tcp 172.31.19.86:6443: connect: connection refused" interval="200ms"
May 14 00:00:11.118141 kubelet[2955]: I0514 00:00:11.118065 2955 factory.go:221] Registration of the systemd container factory successfully
May 14 00:00:11.118141 kubelet[2955]: I0514 00:00:11.118138 2955 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 14 00:00:11.118890 kubelet[2955]: E0514 00:00:11.118864 2955 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 14 00:00:11.121856 kubelet[2955]: I0514 00:00:11.121839 2955 factory.go:221] Registration of the containerd container factory successfully
May 14 00:00:11.144312 kubelet[2955]: I0514 00:00:11.144243 2955 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 14 00:00:11.148686 kubelet[2955]: I0514 00:00:11.148651 2955 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 14 00:00:11.148686 kubelet[2955]: I0514 00:00:11.148687 2955 status_manager.go:217] "Starting to sync pod status with apiserver"
May 14 00:00:11.148888 kubelet[2955]: I0514 00:00:11.148711 2955 kubelet.go:2337] "Starting kubelet main sync loop"
May 14 00:00:11.148888 kubelet[2955]: E0514 00:00:11.148813 2955 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 14 00:00:11.161373 kubelet[2955]: I0514 00:00:11.161349 2955 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 14 00:00:11.161556 kubelet[2955]: I0514 00:00:11.161543 2955 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 14 00:00:11.161679 kubelet[2955]: I0514 00:00:11.161669 2955 state_mem.go:36] "Initialized new in-memory state store"
May 14 00:00:11.161825 kubelet[2955]: W0514 00:00:11.161777 2955 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.19.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.86:6443: connect: connection refused
May 14 00:00:11.161906 kubelet[2955]: E0514 00:00:11.161842 2955 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.19.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.86:6443: connect: connection refused
May 14 00:00:11.165473 kubelet[2955]: I0514 00:00:11.165447 2955 policy_none.go:49] "None policy: Start"
May 14 00:00:11.167060 kubelet[2955]: I0514 00:00:11.167038 2955 memory_manager.go:170] "Starting memorymanager" policy="None"
May 14 00:00:11.167673 kubelet[2955]: I0514 00:00:11.167379 2955 state_mem.go:35] "Initializing new in-memory state store"
May 14 00:00:11.174682 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 14 00:00:11.184344 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 14 00:00:11.188379 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 14 00:00:11.198970 kubelet[2955]: I0514 00:00:11.198945 2955 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 14 00:00:11.199274 kubelet[2955]: I0514 00:00:11.199243 2955 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 14 00:00:11.199456 kubelet[2955]: I0514 00:00:11.199446 2955 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 14 00:00:11.202278 kubelet[2955]: E0514 00:00:11.202253 2955 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-19-86\" not found"
May 14 00:00:11.213139 kubelet[2955]: I0514 00:00:11.213090 2955 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-86"
May 14 00:00:11.213696 kubelet[2955]: E0514 00:00:11.213663 2955 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.86:6443/api/v1/nodes\": dial tcp 172.31.19.86:6443: connect: connection refused" node="ip-172-31-19-86"
May 14 00:00:11.249555 kubelet[2955]: I0514 00:00:11.249230 2955 topology_manager.go:215] "Topology Admit Handler" podUID="c5d84278d0c3ad09d0b8c07b142c94c1" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-19-86"
May 14 00:00:11.251673 kubelet[2955]: I0514 00:00:11.251627 2955 topology_manager.go:215] "Topology Admit Handler" podUID="1cc34decbf7da5ffc50614924fb724e0" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-19-86"
May 14 00:00:11.253305 kubelet[2955]: I0514 00:00:11.253277 2955 topology_manager.go:215] "Topology Admit Handler" podUID="7d263c36c3e83ed2446df7a66f76b7c5" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-19-86"
May 14 00:00:11.261586 systemd[1]: Created slice kubepods-burstable-podc5d84278d0c3ad09d0b8c07b142c94c1.slice - libcontainer container kubepods-burstable-podc5d84278d0c3ad09d0b8c07b142c94c1.slice.
May 14 00:00:11.272173 systemd[1]: Created slice kubepods-burstable-pod1cc34decbf7da5ffc50614924fb724e0.slice - libcontainer container kubepods-burstable-pod1cc34decbf7da5ffc50614924fb724e0.slice.
May 14 00:00:11.288737 systemd[1]: Created slice kubepods-burstable-pod7d263c36c3e83ed2446df7a66f76b7c5.slice - libcontainer container kubepods-burstable-pod7d263c36c3e83ed2446df7a66f76b7c5.slice.
May 14 00:00:11.316854 kubelet[2955]: I0514 00:00:11.316620 2955 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1cc34decbf7da5ffc50614924fb724e0-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-86\" (UID: \"1cc34decbf7da5ffc50614924fb724e0\") " pod="kube-system/kube-controller-manager-ip-172-31-19-86"
May 14 00:00:11.316854 kubelet[2955]: I0514 00:00:11.316657 2955 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c5d84278d0c3ad09d0b8c07b142c94c1-ca-certs\") pod \"kube-apiserver-ip-172-31-19-86\" (UID: \"c5d84278d0c3ad09d0b8c07b142c94c1\") " pod="kube-system/kube-apiserver-ip-172-31-19-86"
May 14 00:00:11.316854 kubelet[2955]: I0514 00:00:11.316675 2955 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c5d84278d0c3ad09d0b8c07b142c94c1-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-86\" (UID: \"c5d84278d0c3ad09d0b8c07b142c94c1\") " pod="kube-system/kube-apiserver-ip-172-31-19-86"
May 14 00:00:11.316854 kubelet[2955]: I0514 00:00:11.316694 2955 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1cc34decbf7da5ffc50614924fb724e0-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-86\" (UID: \"1cc34decbf7da5ffc50614924fb724e0\") " pod="kube-system/kube-controller-manager-ip-172-31-19-86"
May 14 00:00:11.316854 kubelet[2955]: I0514 00:00:11.316711 2955 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1cc34decbf7da5ffc50614924fb724e0-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-86\" (UID: \"1cc34decbf7da5ffc50614924fb724e0\") " pod="kube-system/kube-controller-manager-ip-172-31-19-86"
May 14 00:00:11.317090 kubelet[2955]: I0514 00:00:11.316727 2955 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c5d84278d0c3ad09d0b8c07b142c94c1-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-86\" (UID: \"c5d84278d0c3ad09d0b8c07b142c94c1\") " pod="kube-system/kube-apiserver-ip-172-31-19-86"
May 14 00:00:11.317090 kubelet[2955]: I0514 00:00:11.316743 2955 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1cc34decbf7da5ffc50614924fb724e0-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-86\" (UID: \"1cc34decbf7da5ffc50614924fb724e0\") " pod="kube-system/kube-controller-manager-ip-172-31-19-86"
May 14 00:00:11.317090 kubelet[2955]: I0514 00:00:11.316757 2955 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1cc34decbf7da5ffc50614924fb724e0-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-86\" (UID: \"1cc34decbf7da5ffc50614924fb724e0\") " pod="kube-system/kube-controller-manager-ip-172-31-19-86"
May 14 00:00:11.317090 kubelet[2955]: I0514 00:00:11.316786 2955 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7d263c36c3e83ed2446df7a66f76b7c5-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-86\" (UID: \"7d263c36c3e83ed2446df7a66f76b7c5\") " pod="kube-system/kube-scheduler-ip-172-31-19-86"
May 14 00:00:11.318491 kubelet[2955]: E0514 00:00:11.318431 2955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-86?timeout=10s\": dial tcp 172.31.19.86:6443: connect: connection refused" interval="400ms"
May 14 00:00:11.416506 kubelet[2955]: I0514 00:00:11.416473 2955 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-86"
May 14 00:00:11.417399 kubelet[2955]: E0514 00:00:11.417352 2955 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.86:6443/api/v1/nodes\": dial tcp 172.31.19.86:6443: connect: connection refused" node="ip-172-31-19-86"
May 14 00:00:11.571391 containerd[1922]: time="2025-05-14T00:00:11.571259863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-86,Uid:c5d84278d0c3ad09d0b8c07b142c94c1,Namespace:kube-system,Attempt:0,}"
May 14 00:00:11.585742 containerd[1922]: time="2025-05-14T00:00:11.585685552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-86,Uid:1cc34decbf7da5ffc50614924fb724e0,Namespace:kube-system,Attempt:0,}"
May 14 00:00:11.593466 containerd[1922]: time="2025-05-14T00:00:11.592739017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-86,Uid:7d263c36c3e83ed2446df7a66f76b7c5,Namespace:kube-system,Attempt:0,}"
May 14 00:00:11.720156 kubelet[2955]: E0514 00:00:11.720099 2955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-86?timeout=10s\": dial tcp 172.31.19.86:6443: connect: connection refused" interval="800ms"
May 14 00:00:11.819905 kubelet[2955]: I0514 00:00:11.819843 2955 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-86"
May 14 00:00:11.820137 kubelet[2955]: E0514 00:00:11.820114 2955 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.86:6443/api/v1/nodes\": dial tcp 172.31.19.86:6443: connect: connection refused" node="ip-172-31-19-86"
May 14 00:00:12.029049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1682662903.mount: Deactivated successfully.
May 14 00:00:12.033301 containerd[1922]: time="2025-05-14T00:00:12.033255547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 14 00:00:12.035602 containerd[1922]: time="2025-05-14T00:00:12.035550649Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 14 00:00:12.037773 containerd[1922]: time="2025-05-14T00:00:12.037663242Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
May 14 00:00:12.038503 containerd[1922]: time="2025-05-14T00:00:12.038457352Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
May 14 00:00:12.040683 containerd[1922]: time="2025-05-14T00:00:12.040641383Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 14 00:00:12.042094 containerd[1922]: time="2025-05-14T00:00:12.042020627Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 14 00:00:12.043004 containerd[1922]: time="2025-05-14T00:00:12.042935025Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
May 14 00:00:12.045471 containerd[1922]: time="2025-05-14T00:00:12.044558972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 14 00:00:12.045471 containerd[1922]: time="2025-05-14T00:00:12.045319690Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 457.343773ms"
May 14 00:00:12.046028 containerd[1922]: time="2025-05-14T00:00:12.045995142Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 471.832756ms"
May 14 00:00:12.047570 containerd[1922]: time="2025-05-14T00:00:12.047519321Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 450.388532ms"
May 14 00:00:12.108014 kubelet[2955]: W0514 00:00:12.107928 2955 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.19.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.86:6443: connect: connection refused
May 14 00:00:12.108014 kubelet[2955]: E0514 00:00:12.107991 2955 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.19.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.86:6443: connect: connection refused
May 14 00:00:12.196152 containerd[1922]: time="2025-05-14T00:00:12.195982710Z" level=info msg="connecting to shim 24c6b5ff2c2f648914b828858fb34e1597e118e905e66ec7c57b6ec6fece8cca" address="unix:///run/containerd/s/9691ea09ee68d6645c721d9a183c47dad1694b324aba795153ce5889dc72e8f6" namespace=k8s.io protocol=ttrpc version=3
May 14 00:00:12.199369 containerd[1922]: time="2025-05-14T00:00:12.199222675Z" level=info msg="connecting to shim 7e02b4deaaf9b29a3f9bc0b213cab7a8a6250114a96ffc609e8e76c4c1b0e868" address="unix:///run/containerd/s/c178f6f79d019699ed338ed44cf0ba8f06a8f4c103edeb865855ca407b62b128" namespace=k8s.io protocol=ttrpc version=3
May 14 00:00:12.203525 containerd[1922]: time="2025-05-14T00:00:12.202923968Z" level=info msg="connecting to shim f9488da0b849d01cab1e89cc3d8ccaecf2c292a9097ecd8779c5e3606e6f6329" address="unix:///run/containerd/s/afb128c8cba8e9a0e5afb4594b9ecd7e567646b244365f8966b3e6c2edf5504e" namespace=k8s.io protocol=ttrpc version=3
May 14 00:00:12.266730 kubelet[2955]: W0514 00:00:12.265377 2955 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.19.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.86:6443: connect: connection refused
May 14 00:00:12.266730 kubelet[2955]: E0514 00:00:12.265423 2955 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.19.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.86:6443: connect: connection refused
May 14 00:00:12.310048 systemd[1]: Started cri-containerd-24c6b5ff2c2f648914b828858fb34e1597e118e905e66ec7c57b6ec6fece8cca.scope - libcontainer container 24c6b5ff2c2f648914b828858fb34e1597e118e905e66ec7c57b6ec6fece8cca.
May 14 00:00:12.312466 systemd[1]: Started cri-containerd-7e02b4deaaf9b29a3f9bc0b213cab7a8a6250114a96ffc609e8e76c4c1b0e868.scope - libcontainer container 7e02b4deaaf9b29a3f9bc0b213cab7a8a6250114a96ffc609e8e76c4c1b0e868.
May 14 00:00:12.313642 systemd[1]: Started cri-containerd-f9488da0b849d01cab1e89cc3d8ccaecf2c292a9097ecd8779c5e3606e6f6329.scope - libcontainer container f9488da0b849d01cab1e89cc3d8ccaecf2c292a9097ecd8779c5e3606e6f6329.
May 14 00:00:12.316650 kubelet[2955]: W0514 00:00:12.316472 2955 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.19.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-86&limit=500&resourceVersion=0": dial tcp 172.31.19.86:6443: connect: connection refused
May 14 00:00:12.316650 kubelet[2955]: E0514 00:00:12.316625 2955 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.19.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-86&limit=500&resourceVersion=0": dial tcp 172.31.19.86:6443: connect: connection refused
May 14 00:00:12.403414 containerd[1922]: time="2025-05-14T00:00:12.402592305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-86,Uid:c5d84278d0c3ad09d0b8c07b142c94c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"24c6b5ff2c2f648914b828858fb34e1597e118e905e66ec7c57b6ec6fece8cca\""
May 14 00:00:12.403531 kubelet[2955]: W0514 00:00:12.403077 2955 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.19.86:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.86:6443: connect: connection refused
May 14 00:00:12.403531 kubelet[2955]: E0514 00:00:12.403145 2955 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.19.86:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.86:6443: connect: connection refused
May 14 00:00:12.418316 containerd[1922]: time="2025-05-14T00:00:12.416409488Z" level=info msg="CreateContainer within sandbox \"24c6b5ff2c2f648914b828858fb34e1597e118e905e66ec7c57b6ec6fece8cca\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 14 00:00:12.449055 containerd[1922]: time="2025-05-14T00:00:12.449007212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-86,Uid:1cc34decbf7da5ffc50614924fb724e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e02b4deaaf9b29a3f9bc0b213cab7a8a6250114a96ffc609e8e76c4c1b0e868\""
May 14 00:00:12.455314 containerd[1922]: time="2025-05-14T00:00:12.455216058Z" level=info msg="CreateContainer within sandbox \"7e02b4deaaf9b29a3f9bc0b213cab7a8a6250114a96ffc609e8e76c4c1b0e868\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 14 00:00:12.457421 containerd[1922]: time="2025-05-14T00:00:12.457384061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-86,Uid:7d263c36c3e83ed2446df7a66f76b7c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"f9488da0b849d01cab1e89cc3d8ccaecf2c292a9097ecd8779c5e3606e6f6329\""
May 14 00:00:12.459423 containerd[1922]: time="2025-05-14T00:00:12.459383494Z" level=info msg="Container c45a8004b4f5798e246e8721e2ed7624c14871e90554bcf7162d18fcc5ba503d: CDI devices from CRI Config.CDIDevices: []"
May 14 00:00:12.461867 containerd[1922]: time="2025-05-14T00:00:12.461760767Z" level=info msg="CreateContainer within sandbox \"f9488da0b849d01cab1e89cc3d8ccaecf2c292a9097ecd8779c5e3606e6f6329\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 14 00:00:12.469674 containerd[1922]: time="2025-05-14T00:00:12.469628448Z" level=info msg="Container ac9fd17b9d003ef13350c0d8ecf013d32ce4fc7eb3a25389b08a1333face6c22: CDI devices from CRI Config.CDIDevices: []"
May 14 00:00:12.488394 containerd[1922]: time="2025-05-14T00:00:12.488344917Z" level=info msg="CreateContainer within sandbox \"24c6b5ff2c2f648914b828858fb34e1597e118e905e66ec7c57b6ec6fece8cca\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c45a8004b4f5798e246e8721e2ed7624c14871e90554bcf7162d18fcc5ba503d\""
May 14 00:00:12.489109 containerd[1922]: time="2025-05-14T00:00:12.489084925Z" level=info msg="StartContainer for \"c45a8004b4f5798e246e8721e2ed7624c14871e90554bcf7162d18fcc5ba503d\""
May 14 00:00:12.494421 containerd[1922]: time="2025-05-14T00:00:12.494387967Z" level=info msg="Container 8773eebddd18aee63e30c2458da244b3a91e181915f3b4177e5ab016a81f4609: CDI devices from CRI Config.CDIDevices: []"
May 14 00:00:12.495478 containerd[1922]: time="2025-05-14T00:00:12.495055406Z" level=info msg="connecting to shim c45a8004b4f5798e246e8721e2ed7624c14871e90554bcf7162d18fcc5ba503d" address="unix:///run/containerd/s/9691ea09ee68d6645c721d9a183c47dad1694b324aba795153ce5889dc72e8f6" protocol=ttrpc version=3
May 14 00:00:12.501604 containerd[1922]: time="2025-05-14T00:00:12.501567632Z" level=info msg="CreateContainer within sandbox \"7e02b4deaaf9b29a3f9bc0b213cab7a8a6250114a96ffc609e8e76c4c1b0e868\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ac9fd17b9d003ef13350c0d8ecf013d32ce4fc7eb3a25389b08a1333face6c22\""
May 14 00:00:12.502287 containerd[1922]: time="2025-05-14T00:00:12.502256773Z" level=info msg="StartContainer for \"ac9fd17b9d003ef13350c0d8ecf013d32ce4fc7eb3a25389b08a1333face6c22\""
May 14 00:00:12.503219 containerd[1922]: time="2025-05-14T00:00:12.503190486Z" level=info msg="connecting to shim ac9fd17b9d003ef13350c0d8ecf013d32ce4fc7eb3a25389b08a1333face6c22" address="unix:///run/containerd/s/c178f6f79d019699ed338ed44cf0ba8f06a8f4c103edeb865855ca407b62b128" protocol=ttrpc version=3
May 14 00:00:12.509587 containerd[1922]: time="2025-05-14T00:00:12.509478938Z" level=info msg="CreateContainer within sandbox \"f9488da0b849d01cab1e89cc3d8ccaecf2c292a9097ecd8779c5e3606e6f6329\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8773eebddd18aee63e30c2458da244b3a91e181915f3b4177e5ab016a81f4609\""
May 14 00:00:12.510367 containerd[1922]: time="2025-05-14T00:00:12.510338422Z" level=info msg="StartContainer for \"8773eebddd18aee63e30c2458da244b3a91e181915f3b4177e5ab016a81f4609\""
May 14 00:00:12.512446 containerd[1922]: time="2025-05-14T00:00:12.512412773Z" level=info msg="connecting to shim 8773eebddd18aee63e30c2458da244b3a91e181915f3b4177e5ab016a81f4609" address="unix:///run/containerd/s/afb128c8cba8e9a0e5afb4594b9ecd7e567646b244365f8966b3e6c2edf5504e" protocol=ttrpc version=3
May 14 00:00:12.519934 systemd[1]: Started cri-containerd-c45a8004b4f5798e246e8721e2ed7624c14871e90554bcf7162d18fcc5ba503d.scope - libcontainer container c45a8004b4f5798e246e8721e2ed7624c14871e90554bcf7162d18fcc5ba503d.
May 14 00:00:12.520645 kubelet[2955]: E0514 00:00:12.520450 2955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-86?timeout=10s\": dial tcp 172.31.19.86:6443: connect: connection refused" interval="1.6s"
May 14 00:00:12.539966 systemd[1]: Started cri-containerd-8773eebddd18aee63e30c2458da244b3a91e181915f3b4177e5ab016a81f4609.scope - libcontainer container 8773eebddd18aee63e30c2458da244b3a91e181915f3b4177e5ab016a81f4609.
May 14 00:00:12.551705 systemd[1]: Started cri-containerd-ac9fd17b9d003ef13350c0d8ecf013d32ce4fc7eb3a25389b08a1333face6c22.scope - libcontainer container ac9fd17b9d003ef13350c0d8ecf013d32ce4fc7eb3a25389b08a1333face6c22.
May 14 00:00:12.592943 containerd[1922]: time="2025-05-14T00:00:12.592831274Z" level=info msg="StartContainer for \"c45a8004b4f5798e246e8721e2ed7624c14871e90554bcf7162d18fcc5ba503d\" returns successfully"
May 14 00:00:12.623727 kubelet[2955]: I0514 00:00:12.623265 2955 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-86"
May 14 00:00:12.623727 kubelet[2955]: E0514 00:00:12.623684 2955 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.86:6443/api/v1/nodes\": dial tcp 172.31.19.86:6443: connect: connection refused" node="ip-172-31-19-86"
May 14 00:00:12.639670 containerd[1922]: time="2025-05-14T00:00:12.639595377Z" level=info msg="StartContainer for \"8773eebddd18aee63e30c2458da244b3a91e181915f3b4177e5ab016a81f4609\" returns successfully"
May 14 00:00:12.652715 containerd[1922]: time="2025-05-14T00:00:12.652666088Z" level=info msg="StartContainer for \"ac9fd17b9d003ef13350c0d8ecf013d32ce4fc7eb3a25389b08a1333face6c22\" returns successfully"
May 14 00:00:13.169412 kubelet[2955]: E0514 00:00:13.169377 2955 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.19.86:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.19.86:6443: connect: connection refused
May 14 00:00:14.228214 kubelet[2955]: I0514 00:00:14.228180 2955 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-86"
May 14 00:00:15.423248 kubelet[2955]: E0514 00:00:15.422709 2955 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-19-86\" not found" node="ip-172-31-19-86"
May 14 00:00:15.452314 kubelet[2955]: E0514 00:00:15.452217 2955 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-19-86.183f3bb0d3e6762c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-86,UID:ip-172-31-19-86,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-86,},FirstTimestamp:2025-05-14 00:00:11.09842078 +0000 UTC m=+0.770118597,LastTimestamp:2025-05-14 00:00:11.09842078 +0000 UTC m=+0.770118597,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-86,}"
May 14 00:00:15.506360 kubelet[2955]: E0514 00:00:15.506257 2955 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-19-86.183f3bb0d51e3a8c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-86,UID:ip-172-31-19-86,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-19-86,},FirstTimestamp:2025-05-14 00:00:11.118852748 +0000 UTC m=+0.790550566,LastTimestamp:2025-05-14 00:00:11.118852748 +0000 UTC m=+0.790550566,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-86,}"
May 14 00:00:15.562579 kubelet[2955]: E0514 00:00:15.561019 2955 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-19-86.183f3bb0d79a5bea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-86,UID:ip-172-31-19-86,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-172-31-19-86 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-172-31-19-86,},FirstTimestamp:2025-05-14 00:00:11.160542186 +0000 UTC m=+0.832240007,LastTimestamp:2025-05-14 00:00:11.160542186 +0000 UTC m=+0.832240007,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-86,}"
May 14 00:00:15.572301 kubelet[2955]: I0514 00:00:15.572188 2955 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-19-86"
May 14 00:00:16.097450 kubelet[2955]: I0514 00:00:16.097408 2955 apiserver.go:52] "Watching apiserver"
May 14 00:00:16.115030 kubelet[2955]: I0514 00:00:16.114966 2955 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 14 00:00:17.509782 update_engine[1898]: I20250514 00:00:17.509677 1898 update_attempter.cc:509] Updating boot flags...
May 14 00:00:17.575800 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3238)
May 14 00:00:17.794522 systemd[1]: Reload requested from client PID 3329 ('systemctl') (unit session-9.scope)...
May 14 00:00:17.794915 systemd[1]: Reloading...
May 14 00:00:17.802799 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3241)
May 14 00:00:18.050953 zram_generator::config[3455]: No configuration found.
May 14 00:00:18.217549 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 00:00:18.354658 systemd[1]: Reloading finished in 559 ms.
May 14 00:00:18.445187 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 00:00:18.467503 systemd[1]: kubelet.service: Deactivated successfully.
May 14 00:00:18.469423 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 00:00:18.469851 systemd[1]: kubelet.service: Consumed 1.208s CPU time, 112.1M memory peak.
May 14 00:00:18.476183 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 00:00:18.731299 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 00:00:18.744234 (kubelet)[3512]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 14 00:00:18.817617 kubelet[3512]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 00:00:18.817617 kubelet[3512]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 14 00:00:18.817617 kubelet[3512]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 00:00:18.821313 kubelet[3512]: I0514 00:00:18.821229 3512 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 14 00:00:18.831936 kubelet[3512]: I0514 00:00:18.831330 3512 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 14 00:00:18.832280 kubelet[3512]: I0514 00:00:18.832124 3512 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 14 00:00:18.832798 kubelet[3512]: I0514 00:00:18.832654 3512 server.go:927] "Client rotation is on, will bootstrap in background"
May 14 00:00:18.835027 kubelet[3512]: I0514 00:00:18.834961 3512 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 14 00:00:18.837555 kubelet[3512]: I0514 00:00:18.837167 3512 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 14 00:00:18.856467 kubelet[3512]: I0514 00:00:18.856426 3512 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 14 00:00:18.856790 kubelet[3512]: I0514 00:00:18.856721 3512 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 14 00:00:18.857799 kubelet[3512]: I0514 00:00:18.856761 3512 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-86","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 14 00:00:18.857799 kubelet[3512]: I0514 00:00:18.857045 3512 topology_manager.go:138] "Creating topology manager with none policy"
May 14 00:00:18.857799 kubelet[3512]: I0514 00:00:18.857063 3512 container_manager_linux.go:301] "Creating device plugin manager"
May 14 00:00:18.859018 kubelet[3512]: I0514 00:00:18.858859 3512 state_mem.go:36] "Initialized new in-memory state store"
May 14 00:00:18.859157 kubelet[3512]: I0514 00:00:18.859146 3512 kubelet.go:400] "Attempting to sync node with API server"
May 14 00:00:18.861836 kubelet[3512]: I0514 00:00:18.861812 3512 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 14 00:00:18.862190 kubelet[3512]: I0514 00:00:18.861969 3512 kubelet.go:312] "Adding apiserver pod source"
May 14 00:00:18.862190 kubelet[3512]: I0514 00:00:18.862083 3512 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 14 00:00:18.873613 kubelet[3512]: I0514 00:00:18.873190 3512 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
May 14 00:00:18.873613 kubelet[3512]: I0514 00:00:18.873406 3512 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 14 00:00:18.876072 kubelet[3512]: I0514 00:00:18.875974 3512 server.go:1264] "Started kubelet"
May 14 00:00:18.876201 kubelet[3512]: I0514 00:00:18.876050 3512 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 14 00:00:18.876787 kubelet[3512]: I0514 00:00:18.876493 3512 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 14 00:00:18.876992 kubelet[3512]: I0514 00:00:18.876979 3512 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 14 00:00:18.879792 kubelet[3512]: I0514 00:00:18.879069 3512 server.go:455] "Adding debug handlers to kubelet server"
May 14 00:00:18.887125 kubelet[3512]: I0514 00:00:18.886155 3512 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 14 00:00:18.899863 kubelet[3512]: I0514 00:00:18.897843 3512 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 14 00:00:18.899863 kubelet[3512]: I0514 00:00:18.897940 3512 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 14 00:00:18.899863 kubelet[3512]: I0514 00:00:18.898241 3512 reconciler.go:26] "Reconciler: start to sync state"
May 14 00:00:18.902407 kubelet[3512]: E0514 00:00:18.902374 3512 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 14 00:00:18.909599 kubelet[3512]: I0514 00:00:18.909571 3512 factory.go:221] Registration of the containerd container factory successfully
May 14 00:00:18.913786 kubelet[3512]: I0514 00:00:18.910795 3512 factory.go:221] Registration of the systemd container factory successfully
May 14 00:00:18.913786 kubelet[3512]: I0514 00:00:18.910924 3512 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 14 00:00:18.920061 kubelet[3512]: I0514 00:00:18.920011 3512 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 14 00:00:18.922375 kubelet[3512]: I0514 00:00:18.922338 3512 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6" May 14 00:00:18.922375 kubelet[3512]: I0514 00:00:18.922380 3512 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 00:00:18.922615 kubelet[3512]: I0514 00:00:18.922402 3512 kubelet.go:2337] "Starting kubelet main sync loop" May 14 00:00:18.922615 kubelet[3512]: E0514 00:00:18.922459 3512 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 00:00:18.997278 kubelet[3512]: I0514 00:00:18.996807 3512 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 00:00:18.997278 kubelet[3512]: I0514 00:00:18.996832 3512 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 00:00:18.997278 kubelet[3512]: I0514 00:00:18.996856 3512 state_mem.go:36] "Initialized new in-memory state store" May 14 00:00:18.997278 kubelet[3512]: I0514 00:00:18.997039 3512 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 00:00:18.997278 kubelet[3512]: I0514 00:00:18.997048 3512 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 00:00:18.997278 kubelet[3512]: I0514 00:00:18.997064 3512 policy_none.go:49] "None policy: Start" May 14 00:00:18.998310 kubelet[3512]: I0514 00:00:18.998243 3512 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 00:00:18.998310 kubelet[3512]: I0514 00:00:18.998271 3512 state_mem.go:35] "Initializing new in-memory state store" May 14 00:00:18.998777 kubelet[3512]: I0514 00:00:18.998449 3512 state_mem.go:75] "Updated machine memory state" May 14 00:00:19.007095 kubelet[3512]: I0514 00:00:19.007073 3512 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 00:00:19.008308 kubelet[3512]: I0514 00:00:19.008258 3512 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 00:00:19.008555 kubelet[3512]: I0514 00:00:19.008539 3512 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 00:00:19.024035 kubelet[3512]: I0514 00:00:19.023939 3512 topology_manager.go:215] "Topology Admit Handler" podUID="7d263c36c3e83ed2446df7a66f76b7c5" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-19-86" May 14 00:00:19.024035 kubelet[3512]: I0514 00:00:19.024023 3512 topology_manager.go:215] "Topology Admit Handler" podUID="c5d84278d0c3ad09d0b8c07b142c94c1" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-19-86" May 14 00:00:19.024211 kubelet[3512]: I0514 00:00:19.024071 3512 topology_manager.go:215] "Topology Admit Handler" podUID="1cc34decbf7da5ffc50614924fb724e0" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-19-86" May 14 00:00:19.039032 kubelet[3512]: E0514 00:00:19.038999 3512 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-19-86\" already exists" pod="kube-system/kube-apiserver-ip-172-31-19-86" May 14 00:00:19.099027 kubelet[3512]: I0514 00:00:19.098996 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c5d84278d0c3ad09d0b8c07b142c94c1-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-86\" (UID: \"c5d84278d0c3ad09d0b8c07b142c94c1\") " pod="kube-system/kube-apiserver-ip-172-31-19-86" May 14 00:00:19.099238 kubelet[3512]: I0514 00:00:19.099225 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1cc34decbf7da5ffc50614924fb724e0-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-86\" (UID: \"1cc34decbf7da5ffc50614924fb724e0\") " pod="kube-system/kube-controller-manager-ip-172-31-19-86" May 14 00:00:19.099351 kubelet[3512]: I0514 00:00:19.099340 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/1cc34decbf7da5ffc50614924fb724e0-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-86\" (UID: \"1cc34decbf7da5ffc50614924fb724e0\") " pod="kube-system/kube-controller-manager-ip-172-31-19-86" May 14 00:00:19.099486 kubelet[3512]: I0514 00:00:19.099458 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1cc34decbf7da5ffc50614924fb724e0-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-86\" (UID: \"1cc34decbf7da5ffc50614924fb724e0\") " pod="kube-system/kube-controller-manager-ip-172-31-19-86" May 14 00:00:19.099486 kubelet[3512]: I0514 00:00:19.099486 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1cc34decbf7da5ffc50614924fb724e0-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-86\" (UID: \"1cc34decbf7da5ffc50614924fb724e0\") " pod="kube-system/kube-controller-manager-ip-172-31-19-86" May 14 00:00:19.099610 kubelet[3512]: I0514 00:00:19.099505 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7d263c36c3e83ed2446df7a66f76b7c5-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-86\" (UID: \"7d263c36c3e83ed2446df7a66f76b7c5\") " pod="kube-system/kube-scheduler-ip-172-31-19-86" May 14 00:00:19.099610 kubelet[3512]: I0514 00:00:19.099520 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c5d84278d0c3ad09d0b8c07b142c94c1-ca-certs\") pod \"kube-apiserver-ip-172-31-19-86\" (UID: \"c5d84278d0c3ad09d0b8c07b142c94c1\") " pod="kube-system/kube-apiserver-ip-172-31-19-86" May 14 00:00:19.099610 kubelet[3512]: I0514 00:00:19.099536 3512 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c5d84278d0c3ad09d0b8c07b142c94c1-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-86\" (UID: \"c5d84278d0c3ad09d0b8c07b142c94c1\") " pod="kube-system/kube-apiserver-ip-172-31-19-86" May 14 00:00:19.099610 kubelet[3512]: I0514 00:00:19.099550 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1cc34decbf7da5ffc50614924fb724e0-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-86\" (UID: \"1cc34decbf7da5ffc50614924fb724e0\") " pod="kube-system/kube-controller-manager-ip-172-31-19-86" May 14 00:00:19.130197 kubelet[3512]: I0514 00:00:19.130139 3512 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-86" May 14 00:00:19.142938 kubelet[3512]: I0514 00:00:19.142904 3512 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-19-86" May 14 00:00:19.143052 kubelet[3512]: I0514 00:00:19.142989 3512 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-19-86" May 14 00:00:19.872788 kubelet[3512]: I0514 00:00:19.872327 3512 apiserver.go:52] "Watching apiserver" May 14 00:00:19.898196 kubelet[3512]: I0514 00:00:19.898123 3512 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 00:00:20.065629 kubelet[3512]: I0514 00:00:20.065434 3512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-19-86" podStartSLOduration=1.065392976 podStartE2EDuration="1.065392976s" podCreationTimestamp="2025-05-14 00:00:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:00:20.046006867 +0000 UTC m=+1.294784446" watchObservedRunningTime="2025-05-14 00:00:20.065392976 +0000 UTC m=+1.314170556" 
May 14 00:00:20.088385 kubelet[3512]: I0514 00:00:20.088189 3512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-19-86" podStartSLOduration=1.08816597 podStartE2EDuration="1.08816597s" podCreationTimestamp="2025-05-14 00:00:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:00:20.067266937 +0000 UTC m=+1.316044516" watchObservedRunningTime="2025-05-14 00:00:20.08816597 +0000 UTC m=+1.336943549" May 14 00:00:22.267952 kubelet[3512]: I0514 00:00:22.267872 3512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-19-86" podStartSLOduration=4.267853027 podStartE2EDuration="4.267853027s" podCreationTimestamp="2025-05-14 00:00:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:00:20.090013069 +0000 UTC m=+1.338790629" watchObservedRunningTime="2025-05-14 00:00:22.267853027 +0000 UTC m=+3.516630585" May 14 00:00:24.306080 sudo[2275]: pam_unix(sudo:session): session closed for user root May 14 00:00:24.328378 sshd[2272]: Connection closed by 147.75.109.163 port 32814 May 14 00:00:24.329333 sshd-session[2269]: pam_unix(sshd:session): session closed for user core May 14 00:00:24.334186 systemd[1]: sshd@8-172.31.19.86:22-147.75.109.163:32814.service: Deactivated successfully. May 14 00:00:24.337423 systemd[1]: session-9.scope: Deactivated successfully. May 14 00:00:24.337655 systemd[1]: session-9.scope: Consumed 4.672s CPU time, 184.4M memory peak. May 14 00:00:24.339504 systemd-logind[1897]: Session 9 logged out. Waiting for processes to exit. May 14 00:00:24.340740 systemd-logind[1897]: Removed session 9. 
May 14 00:00:32.526635 kubelet[3512]: I0514 00:00:32.526588 3512 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 00:00:32.528364 containerd[1922]: time="2025-05-14T00:00:32.527566881Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 14 00:00:32.528804 kubelet[3512]: I0514 00:00:32.527853 3512 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 00:00:33.505825 kubelet[3512]: I0514 00:00:33.505748 3512 topology_manager.go:215] "Topology Admit Handler" podUID="ef61707d-d957-4c6d-9701-b5f8935bc8cb" podNamespace="kube-system" podName="kube-proxy-tqcxj" May 14 00:00:33.515483 systemd[1]: Created slice kubepods-besteffort-podef61707d_d957_4c6d_9701_b5f8935bc8cb.slice - libcontainer container kubepods-besteffort-podef61707d_d957_4c6d_9701_b5f8935bc8cb.slice. May 14 00:00:33.591932 kubelet[3512]: I0514 00:00:33.590933 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ef61707d-d957-4c6d-9701-b5f8935bc8cb-kube-proxy\") pod \"kube-proxy-tqcxj\" (UID: \"ef61707d-d957-4c6d-9701-b5f8935bc8cb\") " pod="kube-system/kube-proxy-tqcxj" May 14 00:00:33.591932 kubelet[3512]: I0514 00:00:33.590973 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef61707d-d957-4c6d-9701-b5f8935bc8cb-xtables-lock\") pod \"kube-proxy-tqcxj\" (UID: \"ef61707d-d957-4c6d-9701-b5f8935bc8cb\") " pod="kube-system/kube-proxy-tqcxj" May 14 00:00:33.591932 kubelet[3512]: I0514 00:00:33.590995 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr7qf\" (UniqueName: \"kubernetes.io/projected/ef61707d-d957-4c6d-9701-b5f8935bc8cb-kube-api-access-vr7qf\") pod \"kube-proxy-tqcxj\" 
(UID: \"ef61707d-d957-4c6d-9701-b5f8935bc8cb\") " pod="kube-system/kube-proxy-tqcxj" May 14 00:00:33.591932 kubelet[3512]: I0514 00:00:33.591016 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef61707d-d957-4c6d-9701-b5f8935bc8cb-lib-modules\") pod \"kube-proxy-tqcxj\" (UID: \"ef61707d-d957-4c6d-9701-b5f8935bc8cb\") " pod="kube-system/kube-proxy-tqcxj" May 14 00:00:33.631802 kubelet[3512]: I0514 00:00:33.630774 3512 topology_manager.go:215] "Topology Admit Handler" podUID="614ee11d-8fbd-4970-afa9-fa351d2e76f7" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-rxjvd" May 14 00:00:33.643659 systemd[1]: Created slice kubepods-besteffort-pod614ee11d_8fbd_4970_afa9_fa351d2e76f7.slice - libcontainer container kubepods-besteffort-pod614ee11d_8fbd_4970_afa9_fa351d2e76f7.slice. May 14 00:00:33.691532 kubelet[3512]: I0514 00:00:33.691482 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vckfr\" (UniqueName: \"kubernetes.io/projected/614ee11d-8fbd-4970-afa9-fa351d2e76f7-kube-api-access-vckfr\") pod \"tigera-operator-797db67f8-rxjvd\" (UID: \"614ee11d-8fbd-4970-afa9-fa351d2e76f7\") " pod="tigera-operator/tigera-operator-797db67f8-rxjvd" May 14 00:00:33.691532 kubelet[3512]: I0514 00:00:33.691540 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/614ee11d-8fbd-4970-afa9-fa351d2e76f7-var-lib-calico\") pod \"tigera-operator-797db67f8-rxjvd\" (UID: \"614ee11d-8fbd-4970-afa9-fa351d2e76f7\") " pod="tigera-operator/tigera-operator-797db67f8-rxjvd" May 14 00:00:33.822804 containerd[1922]: time="2025-05-14T00:00:33.822671936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tqcxj,Uid:ef61707d-d957-4c6d-9701-b5f8935bc8cb,Namespace:kube-system,Attempt:0,}" May 14 
00:00:33.866399 containerd[1922]: time="2025-05-14T00:00:33.866346162Z" level=info msg="connecting to shim a3d3a61645f1ec8756bfc5d474fcbf9563d1f444202ed9983410a53031ea1138" address="unix:///run/containerd/s/76d970f9ec896d46ed42d0fd83ad8d904a8d5b7ce08bbdeebe6eed15703c3a12" namespace=k8s.io protocol=ttrpc version=3 May 14 00:00:33.896058 systemd[1]: Started cri-containerd-a3d3a61645f1ec8756bfc5d474fcbf9563d1f444202ed9983410a53031ea1138.scope - libcontainer container a3d3a61645f1ec8756bfc5d474fcbf9563d1f444202ed9983410a53031ea1138. May 14 00:00:33.929260 containerd[1922]: time="2025-05-14T00:00:33.929192272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tqcxj,Uid:ef61707d-d957-4c6d-9701-b5f8935bc8cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3d3a61645f1ec8756bfc5d474fcbf9563d1f444202ed9983410a53031ea1138\"" May 14 00:00:33.937190 containerd[1922]: time="2025-05-14T00:00:33.937144911Z" level=info msg="CreateContainer within sandbox \"a3d3a61645f1ec8756bfc5d474fcbf9563d1f444202ed9983410a53031ea1138\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 00:00:33.948667 containerd[1922]: time="2025-05-14T00:00:33.948577162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-rxjvd,Uid:614ee11d-8fbd-4970-afa9-fa351d2e76f7,Namespace:tigera-operator,Attempt:0,}" May 14 00:00:33.959816 containerd[1922]: time="2025-05-14T00:00:33.958359513Z" level=info msg="Container 1f361ea7f68c85e54542871670c082beaa8b4b884c6d9258ebaf97c4c6c87571: CDI devices from CRI Config.CDIDevices: []" May 14 00:00:33.978434 containerd[1922]: time="2025-05-14T00:00:33.978384245Z" level=info msg="CreateContainer within sandbox \"a3d3a61645f1ec8756bfc5d474fcbf9563d1f444202ed9983410a53031ea1138\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1f361ea7f68c85e54542871670c082beaa8b4b884c6d9258ebaf97c4c6c87571\"" May 14 00:00:33.980482 containerd[1922]: time="2025-05-14T00:00:33.980426583Z" 
level=info msg="StartContainer for \"1f361ea7f68c85e54542871670c082beaa8b4b884c6d9258ebaf97c4c6c87571\"" May 14 00:00:33.981717 containerd[1922]: time="2025-05-14T00:00:33.981679701Z" level=info msg="connecting to shim 1f361ea7f68c85e54542871670c082beaa8b4b884c6d9258ebaf97c4c6c87571" address="unix:///run/containerd/s/76d970f9ec896d46ed42d0fd83ad8d904a8d5b7ce08bbdeebe6eed15703c3a12" protocol=ttrpc version=3 May 14 00:00:34.009100 systemd[1]: Started cri-containerd-1f361ea7f68c85e54542871670c082beaa8b4b884c6d9258ebaf97c4c6c87571.scope - libcontainer container 1f361ea7f68c85e54542871670c082beaa8b4b884c6d9258ebaf97c4c6c87571. May 14 00:00:34.012402 containerd[1922]: time="2025-05-14T00:00:34.012237961Z" level=info msg="connecting to shim fb0168c2850c01036e0a1f1c5980ad28455f7ab89b069ed929781d52022a23ab" address="unix:///run/containerd/s/6e08a748bb191fa4973989f46cef1ad9d62c9770777e80ffab0146fa939f2bdf" namespace=k8s.io protocol=ttrpc version=3 May 14 00:00:34.067232 systemd[1]: Started cri-containerd-fb0168c2850c01036e0a1f1c5980ad28455f7ab89b069ed929781d52022a23ab.scope - libcontainer container fb0168c2850c01036e0a1f1c5980ad28455f7ab89b069ed929781d52022a23ab. May 14 00:00:34.081107 containerd[1922]: time="2025-05-14T00:00:34.080984549Z" level=info msg="StartContainer for \"1f361ea7f68c85e54542871670c082beaa8b4b884c6d9258ebaf97c4c6c87571\" returns successfully" May 14 00:00:34.131979 containerd[1922]: time="2025-05-14T00:00:34.131926298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-rxjvd,Uid:614ee11d-8fbd-4970-afa9-fa351d2e76f7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"fb0168c2850c01036e0a1f1c5980ad28455f7ab89b069ed929781d52022a23ab\"" May 14 00:00:34.148011 containerd[1922]: time="2025-05-14T00:00:34.147976821Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 14 00:00:34.717611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount836822358.mount: Deactivated successfully. 
May 14 00:00:35.030434 kubelet[3512]: I0514 00:00:35.030264 3512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tqcxj" podStartSLOduration=2.030235466 podStartE2EDuration="2.030235466s" podCreationTimestamp="2025-05-14 00:00:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:00:35.02978778 +0000 UTC m=+16.278565351" watchObservedRunningTime="2025-05-14 00:00:35.030235466 +0000 UTC m=+16.279013042" May 14 00:00:35.934750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2659680429.mount: Deactivated successfully. May 14 00:00:36.676378 containerd[1922]: time="2025-05-14T00:00:36.676302809Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:36.677301 containerd[1922]: time="2025-05-14T00:00:36.677253077Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" May 14 00:00:36.679940 containerd[1922]: time="2025-05-14T00:00:36.679897641Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:36.682443 containerd[1922]: time="2025-05-14T00:00:36.682378364Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:36.683437 containerd[1922]: time="2025-05-14T00:00:36.682986706Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest 
\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 2.534971493s" May 14 00:00:36.683437 containerd[1922]: time="2025-05-14T00:00:36.683019093Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 14 00:00:36.692371 containerd[1922]: time="2025-05-14T00:00:36.692330346Z" level=info msg="CreateContainer within sandbox \"fb0168c2850c01036e0a1f1c5980ad28455f7ab89b069ed929781d52022a23ab\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 14 00:00:36.752038 containerd[1922]: time="2025-05-14T00:00:36.749124595Z" level=info msg="Container 950e90499f745f80669772bbe917d49d5f58a749b29b171552f950ecca6233f5: CDI devices from CRI Config.CDIDevices: []" May 14 00:00:36.766920 containerd[1922]: time="2025-05-14T00:00:36.766869233Z" level=info msg="CreateContainer within sandbox \"fb0168c2850c01036e0a1f1c5980ad28455f7ab89b069ed929781d52022a23ab\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"950e90499f745f80669772bbe917d49d5f58a749b29b171552f950ecca6233f5\"" May 14 00:00:36.767582 containerd[1922]: time="2025-05-14T00:00:36.767545577Z" level=info msg="StartContainer for \"950e90499f745f80669772bbe917d49d5f58a749b29b171552f950ecca6233f5\"" May 14 00:00:36.768556 containerd[1922]: time="2025-05-14T00:00:36.768513205Z" level=info msg="connecting to shim 950e90499f745f80669772bbe917d49d5f58a749b29b171552f950ecca6233f5" address="unix:///run/containerd/s/6e08a748bb191fa4973989f46cef1ad9d62c9770777e80ffab0146fa939f2bdf" protocol=ttrpc version=3 May 14 00:00:36.795979 systemd[1]: Started cri-containerd-950e90499f745f80669772bbe917d49d5f58a749b29b171552f950ecca6233f5.scope - libcontainer container 950e90499f745f80669772bbe917d49d5f58a749b29b171552f950ecca6233f5. 
May 14 00:00:36.838662 containerd[1922]: time="2025-05-14T00:00:36.838204481Z" level=info msg="StartContainer for \"950e90499f745f80669772bbe917d49d5f58a749b29b171552f950ecca6233f5\" returns successfully" May 14 00:00:37.044741 kubelet[3512]: I0514 00:00:37.044571 3512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-rxjvd" podStartSLOduration=1.5022621790000001 podStartE2EDuration="4.043187837s" podCreationTimestamp="2025-05-14 00:00:33 +0000 UTC" firstStartedPulling="2025-05-14 00:00:34.147487288 +0000 UTC m=+15.396264847" lastFinishedPulling="2025-05-14 00:00:36.688412948 +0000 UTC m=+17.937190505" observedRunningTime="2025-05-14 00:00:37.043066972 +0000 UTC m=+18.291844551" watchObservedRunningTime="2025-05-14 00:00:37.043187837 +0000 UTC m=+18.291965413" May 14 00:00:40.322581 kubelet[3512]: I0514 00:00:40.322516 3512 topology_manager.go:215] "Topology Admit Handler" podUID="7a86b7eb-b396-40d5-bb18-d3582f4d3989" podNamespace="calico-system" podName="calico-typha-7557475897-m4mv4" May 14 00:00:40.352262 systemd[1]: Created slice kubepods-besteffort-pod7a86b7eb_b396_40d5_bb18_d3582f4d3989.slice - libcontainer container kubepods-besteffort-pod7a86b7eb_b396_40d5_bb18_d3582f4d3989.slice. 
May 14 00:00:40.356602 kubelet[3512]: I0514 00:00:40.356558 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xzxq\" (UniqueName: \"kubernetes.io/projected/7a86b7eb-b396-40d5-bb18-d3582f4d3989-kube-api-access-4xzxq\") pod \"calico-typha-7557475897-m4mv4\" (UID: \"7a86b7eb-b396-40d5-bb18-d3582f4d3989\") " pod="calico-system/calico-typha-7557475897-m4mv4" May 14 00:00:40.356717 kubelet[3512]: I0514 00:00:40.356626 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a86b7eb-b396-40d5-bb18-d3582f4d3989-tigera-ca-bundle\") pod \"calico-typha-7557475897-m4mv4\" (UID: \"7a86b7eb-b396-40d5-bb18-d3582f4d3989\") " pod="calico-system/calico-typha-7557475897-m4mv4" May 14 00:00:40.356717 kubelet[3512]: I0514 00:00:40.356651 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7a86b7eb-b396-40d5-bb18-d3582f4d3989-typha-certs\") pod \"calico-typha-7557475897-m4mv4\" (UID: \"7a86b7eb-b396-40d5-bb18-d3582f4d3989\") " pod="calico-system/calico-typha-7557475897-m4mv4" May 14 00:00:40.533135 kubelet[3512]: I0514 00:00:40.533078 3512 topology_manager.go:215] "Topology Admit Handler" podUID="f43f496d-2aab-4f69-a9c7-34444319a368" podNamespace="calico-system" podName="calico-node-tfwm7" May 14 00:00:40.551224 systemd[1]: Created slice kubepods-besteffort-podf43f496d_2aab_4f69_a9c7_34444319a368.slice - libcontainer container kubepods-besteffort-podf43f496d_2aab_4f69_a9c7_34444319a368.slice. 
May 14 00:00:40.557469 kubelet[3512]: I0514 00:00:40.557439 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f43f496d-2aab-4f69-a9c7-34444319a368-flexvol-driver-host\") pod \"calico-node-tfwm7\" (UID: \"f43f496d-2aab-4f69-a9c7-34444319a368\") " pod="calico-system/calico-node-tfwm7" May 14 00:00:40.557607 kubelet[3512]: I0514 00:00:40.557479 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f43f496d-2aab-4f69-a9c7-34444319a368-tigera-ca-bundle\") pod \"calico-node-tfwm7\" (UID: \"f43f496d-2aab-4f69-a9c7-34444319a368\") " pod="calico-system/calico-node-tfwm7" May 14 00:00:40.557607 kubelet[3512]: I0514 00:00:40.557504 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f43f496d-2aab-4f69-a9c7-34444319a368-var-run-calico\") pod \"calico-node-tfwm7\" (UID: \"f43f496d-2aab-4f69-a9c7-34444319a368\") " pod="calico-system/calico-node-tfwm7" May 14 00:00:40.557607 kubelet[3512]: I0514 00:00:40.557525 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f43f496d-2aab-4f69-a9c7-34444319a368-cni-bin-dir\") pod \"calico-node-tfwm7\" (UID: \"f43f496d-2aab-4f69-a9c7-34444319a368\") " pod="calico-system/calico-node-tfwm7" May 14 00:00:40.557607 kubelet[3512]: I0514 00:00:40.557573 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f43f496d-2aab-4f69-a9c7-34444319a368-policysync\") pod \"calico-node-tfwm7\" (UID: \"f43f496d-2aab-4f69-a9c7-34444319a368\") " pod="calico-system/calico-node-tfwm7" May 14 00:00:40.557607 kubelet[3512]: I0514 00:00:40.557599 3512 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f43f496d-2aab-4f69-a9c7-34444319a368-var-lib-calico\") pod \"calico-node-tfwm7\" (UID: \"f43f496d-2aab-4f69-a9c7-34444319a368\") " pod="calico-system/calico-node-tfwm7" May 14 00:00:40.557858 kubelet[3512]: I0514 00:00:40.557626 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f43f496d-2aab-4f69-a9c7-34444319a368-cni-net-dir\") pod \"calico-node-tfwm7\" (UID: \"f43f496d-2aab-4f69-a9c7-34444319a368\") " pod="calico-system/calico-node-tfwm7" May 14 00:00:40.557858 kubelet[3512]: I0514 00:00:40.557651 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f43f496d-2aab-4f69-a9c7-34444319a368-cni-log-dir\") pod \"calico-node-tfwm7\" (UID: \"f43f496d-2aab-4f69-a9c7-34444319a368\") " pod="calico-system/calico-node-tfwm7" May 14 00:00:40.557858 kubelet[3512]: I0514 00:00:40.557676 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f43f496d-2aab-4f69-a9c7-34444319a368-node-certs\") pod \"calico-node-tfwm7\" (UID: \"f43f496d-2aab-4f69-a9c7-34444319a368\") " pod="calico-system/calico-node-tfwm7" May 14 00:00:40.557858 kubelet[3512]: I0514 00:00:40.557700 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f43f496d-2aab-4f69-a9c7-34444319a368-xtables-lock\") pod \"calico-node-tfwm7\" (UID: \"f43f496d-2aab-4f69-a9c7-34444319a368\") " pod="calico-system/calico-node-tfwm7" May 14 00:00:40.557858 kubelet[3512]: I0514 00:00:40.557724 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f43f496d-2aab-4f69-a9c7-34444319a368-lib-modules\") pod \"calico-node-tfwm7\" (UID: \"f43f496d-2aab-4f69-a9c7-34444319a368\") " pod="calico-system/calico-node-tfwm7" May 14 00:00:40.558173 kubelet[3512]: I0514 00:00:40.557759 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bnqt\" (UniqueName: \"kubernetes.io/projected/f43f496d-2aab-4f69-a9c7-34444319a368-kube-api-access-5bnqt\") pod \"calico-node-tfwm7\" (UID: \"f43f496d-2aab-4f69-a9c7-34444319a368\") " pod="calico-system/calico-node-tfwm7" May 14 00:00:40.652427 kubelet[3512]: I0514 00:00:40.652361 3512 topology_manager.go:215] "Topology Admit Handler" podUID="2a30ea9b-c110-4f18-9efa-bf49f1486efc" podNamespace="calico-system" podName="csi-node-driver-kcf7w" May 14 00:00:40.652971 kubelet[3512]: E0514 00:00:40.652937 3512 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kcf7w" podUID="2a30ea9b-c110-4f18-9efa-bf49f1486efc" May 14 00:00:40.660213 kubelet[3512]: E0514 00:00:40.660170 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:40.660213 kubelet[3512]: W0514 00:00:40.660201 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:40.660409 kubelet[3512]: E0514 00:00:40.660226 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:40.661109 kubelet[3512]: E0514 00:00:40.660886 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:40.661109 kubelet[3512]: W0514 00:00:40.660905 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:40.661109 kubelet[3512]: E0514 00:00:40.660925 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:40.661623 kubelet[3512]: E0514 00:00:40.661428 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:40.661623 kubelet[3512]: W0514 00:00:40.661444 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:40.661623 kubelet[3512]: E0514 00:00:40.661459 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:40.661796 kubelet[3512]: E0514 00:00:40.661716 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:40.661796 kubelet[3512]: W0514 00:00:40.661727 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:40.661796 kubelet[3512]: E0514 00:00:40.661740 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:40.662870 kubelet[3512]: E0514 00:00:40.662298 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:40.662870 kubelet[3512]: W0514 00:00:40.662310 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:40.662870 kubelet[3512]: E0514 00:00:40.662324 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:40.663063 kubelet[3512]: E0514 00:00:40.663044 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:40.663121 kubelet[3512]: W0514 00:00:40.663063 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:40.663121 kubelet[3512]: E0514 00:00:40.663079 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:40.665837 kubelet[3512]: E0514 00:00:40.665551 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:40.665837 kubelet[3512]: W0514 00:00:40.665571 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:40.665837 kubelet[3512]: E0514 00:00:40.665591 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:40.666526 kubelet[3512]: E0514 00:00:40.666503 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:40.666526 kubelet[3512]: W0514 00:00:40.666523 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:40.666630 kubelet[3512]: E0514 00:00:40.666542 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:40.673970 kubelet[3512]: E0514 00:00:40.673896 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:40.673970 kubelet[3512]: W0514 00:00:40.673930 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:40.673970 kubelet[3512]: E0514 00:00:40.673958 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:40.678289 kubelet[3512]: E0514 00:00:40.678245 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:40.678289 kubelet[3512]: W0514 00:00:40.678273 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:40.678478 kubelet[3512]: E0514 00:00:40.678300 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:40.680673 kubelet[3512]: E0514 00:00:40.680092 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:40.680673 kubelet[3512]: W0514 00:00:40.680114 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:40.680673 kubelet[3512]: E0514 00:00:40.680141 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:40.683790 kubelet[3512]: E0514 00:00:40.682736 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:40.683790 kubelet[3512]: W0514 00:00:40.682758 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:40.683790 kubelet[3512]: E0514 00:00:40.682819 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:40.685671 kubelet[3512]: E0514 00:00:40.684906 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:40.685671 kubelet[3512]: W0514 00:00:40.684928 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:40.685671 kubelet[3512]: E0514 00:00:40.684951 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:40.695208 containerd[1922]: time="2025-05-14T00:00:40.694657629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7557475897-m4mv4,Uid:7a86b7eb-b396-40d5-bb18-d3582f4d3989,Namespace:calico-system,Attempt:0,}" May 14 00:00:40.715195 kubelet[3512]: E0514 00:00:40.715153 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:40.715195 kubelet[3512]: W0514 00:00:40.715188 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:40.717126 kubelet[3512]: E0514 00:00:40.716936 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:40.748414 kubelet[3512]: E0514 00:00:40.748367 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:40.748414 kubelet[3512]: W0514 00:00:40.748409 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:40.748414 kubelet[3512]: E0514 00:00:40.748440 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:40.750373 kubelet[3512]: E0514 00:00:40.750318 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:40.750373 kubelet[3512]: W0514 00:00:40.750344 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:40.750373 kubelet[3512]: E0514 00:00:40.750372 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:40.752081 kubelet[3512]: E0514 00:00:40.752056 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:40.752081 kubelet[3512]: W0514 00:00:40.752081 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:40.752361 kubelet[3512]: E0514 00:00:40.752105 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:40.754627 kubelet[3512]: E0514 00:00:40.754597 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:40.754627 kubelet[3512]: W0514 00:00:40.754625 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:40.755831 kubelet[3512]: E0514 00:00:40.754651 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:40.756335 kubelet[3512]: E0514 00:00:40.756306 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:40.756439 kubelet[3512]: W0514 00:00:40.756334 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:40.756439 kubelet[3512]: E0514 00:00:40.756367 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:40.758011 kubelet[3512]: E0514 00:00:40.757982 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:40.758011 kubelet[3512]: W0514 00:00:40.758008 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:40.758154 kubelet[3512]: E0514 00:00:40.758032 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:40.760460 kubelet[3512]: E0514 00:00:40.760431 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:40.760460 kubelet[3512]: W0514 00:00:40.760457 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:40.760625 kubelet[3512]: E0514 00:00:40.760481 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:40.762786 kubelet[3512]: E0514 00:00:40.761989 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:40.762786 kubelet[3512]: W0514 00:00:40.762008 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:40.762786 kubelet[3512]: E0514 00:00:40.762030 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:40.762786 kubelet[3512]: E0514 00:00:40.762358 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:40.762786 kubelet[3512]: W0514 00:00:40.762369 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:40.762786 kubelet[3512]: E0514 00:00:40.762384 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:40.762786 kubelet[3512]: E0514 00:00:40.762781 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:40.762786 kubelet[3512]: W0514 00:00:40.762793 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:40.763317 kubelet[3512]: E0514 00:00:40.762808 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:40.765446 kubelet[3512]: E0514 00:00:40.765386 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:40.765446 kubelet[3512]: W0514 00:00:40.765412 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:40.765446 kubelet[3512]: E0514 00:00:40.765432 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:40.768017 kubelet[3512]: E0514 00:00:40.765756 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:40.768017 kubelet[3512]: W0514 00:00:40.765784 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:40.768017 kubelet[3512]: E0514 00:00:40.765801 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:40.768017 kubelet[3512]: E0514 00:00:40.766085 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:40.768017 kubelet[3512]: W0514 00:00:40.766096 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:40.768017 kubelet[3512]: E0514 00:00:40.766107 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:40.768017 kubelet[3512]: E0514 00:00:40.766314 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:40.768017 kubelet[3512]: W0514 00:00:40.766336 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:40.768017 kubelet[3512]: E0514 00:00:40.766349 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:40.768017 kubelet[3512]: E0514 00:00:40.767056 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:40.768454 kubelet[3512]: W0514 00:00:40.767069 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:40.768454 kubelet[3512]: E0514 00:00:40.767083 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:40.768454 kubelet[3512]: E0514 00:00:40.767726 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:40.768454 kubelet[3512]: W0514 00:00:40.767739 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:40.768454 kubelet[3512]: E0514 00:00:40.767754 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:40.768645 kubelet[3512]: E0514 00:00:40.768511 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:40.768645 kubelet[3512]: W0514 00:00:40.768522 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:40.768645 kubelet[3512]: E0514 00:00:40.768536 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:40.773611 kubelet[3512]: E0514 00:00:40.771229 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:40.773611 kubelet[3512]: W0514 00:00:40.771258 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:40.773611 kubelet[3512]: E0514 00:00:40.771280 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:40.773611 kubelet[3512]: E0514 00:00:40.771553 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:40.773611 kubelet[3512]: W0514 00:00:40.771566 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:40.773611 kubelet[3512]: E0514 00:00:40.771580 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:40.773611 kubelet[3512]: E0514 00:00:40.771848 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:40.773611 kubelet[3512]: W0514 00:00:40.771857 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:40.773611 kubelet[3512]: E0514 00:00:40.771870 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:40.773611 kubelet[3512]: E0514 00:00:40.772235 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:40.774179 kubelet[3512]: W0514 00:00:40.772249 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:40.774179 kubelet[3512]: E0514 00:00:40.772264 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:40.774179 kubelet[3512]: I0514 00:00:40.772294 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97d5l\" (UniqueName: \"kubernetes.io/projected/2a30ea9b-c110-4f18-9efa-bf49f1486efc-kube-api-access-97d5l\") pod \"csi-node-driver-kcf7w\" (UID: \"2a30ea9b-c110-4f18-9efa-bf49f1486efc\") " pod="calico-system/csi-node-driver-kcf7w" May 14 00:00:40.774179 kubelet[3512]: E0514 00:00:40.772587 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:40.774179 kubelet[3512]: W0514 00:00:40.772601 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:40.774179 kubelet[3512]: E0514 00:00:40.772627 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:40.774179 kubelet[3512]: I0514 00:00:40.772655 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2a30ea9b-c110-4f18-9efa-bf49f1486efc-varrun\") pod \"csi-node-driver-kcf7w\" (UID: \"2a30ea9b-c110-4f18-9efa-bf49f1486efc\") " pod="calico-system/csi-node-driver-kcf7w" May 14 00:00:40.774179 kubelet[3512]: E0514 00:00:40.772944 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:40.774511 kubelet[3512]: W0514 00:00:40.772955 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:40.774511 kubelet[3512]: E0514 00:00:40.773024 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:40.774511 kubelet[3512]: E0514 00:00:40.773452 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:40.774511 kubelet[3512]: W0514 00:00:40.773463 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:40.774511 kubelet[3512]: E0514 00:00:40.773544 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:40.774511 kubelet[3512]: E0514 00:00:40.774172 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:40.774511 kubelet[3512]: W0514 00:00:40.774186 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:40.774511 kubelet[3512]: E0514 00:00:40.774203 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:40.774511 kubelet[3512]: I0514 00:00:40.774231 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a30ea9b-c110-4f18-9efa-bf49f1486efc-kubelet-dir\") pod \"csi-node-driver-kcf7w\" (UID: \"2a30ea9b-c110-4f18-9efa-bf49f1486efc\") " pod="calico-system/csi-node-driver-kcf7w" May 14 00:00:40.779011 kubelet[3512]: E0514 00:00:40.774903 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:40.779011 kubelet[3512]: W0514 00:00:40.774915 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:40.779011 kubelet[3512]: E0514 00:00:40.775091 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 14 00:00:40.779011 kubelet[3512]: I0514 00:00:40.775118 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2a30ea9b-c110-4f18-9efa-bf49f1486efc-registration-dir\") pod \"csi-node-driver-kcf7w\" (UID: \"2a30ea9b-c110-4f18-9efa-bf49f1486efc\") " pod="calico-system/csi-node-driver-kcf7w"
May 14 00:00:40.779011 kubelet[3512]: E0514 00:00:40.775646 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:40.779011 kubelet[3512]: W0514 00:00:40.775657 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:40.779011 kubelet[3512]: E0514 00:00:40.775975 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:40.779011 kubelet[3512]: E0514 00:00:40.776248 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:40.779011 kubelet[3512]: W0514 00:00:40.776261 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:40.779440 kubelet[3512]: E0514 00:00:40.776429 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:40.779440 kubelet[3512]: E0514 00:00:40.776801 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:40.779440 kubelet[3512]: W0514 00:00:40.776810 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:40.779440 kubelet[3512]: E0514 00:00:40.776968 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:40.779440 kubelet[3512]: I0514 00:00:40.776996 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2a30ea9b-c110-4f18-9efa-bf49f1486efc-socket-dir\") pod \"csi-node-driver-kcf7w\" (UID: \"2a30ea9b-c110-4f18-9efa-bf49f1486efc\") " pod="calico-system/csi-node-driver-kcf7w"
May 14 00:00:40.779440 kubelet[3512]: E0514 00:00:40.777419 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:40.779440 kubelet[3512]: W0514 00:00:40.777433 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:40.779680 kubelet[3512]: E0514 00:00:40.779639 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:40.782535 kubelet[3512]: E0514 00:00:40.780115 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:40.782535 kubelet[3512]: W0514 00:00:40.780138 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:40.782535 kubelet[3512]: E0514 00:00:40.780162 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:40.782535 kubelet[3512]: E0514 00:00:40.780619 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:40.782535 kubelet[3512]: W0514 00:00:40.780761 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:40.782535 kubelet[3512]: E0514 00:00:40.780807 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:40.782535 kubelet[3512]: E0514 00:00:40.781044 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:40.782535 kubelet[3512]: W0514 00:00:40.781054 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:40.782535 kubelet[3512]: E0514 00:00:40.781065 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:40.782535 kubelet[3512]: E0514 00:00:40.781370 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:40.783555 kubelet[3512]: W0514 00:00:40.781380 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:40.783555 kubelet[3512]: E0514 00:00:40.781393 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:40.783555 kubelet[3512]: E0514 00:00:40.781747 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:40.783555 kubelet[3512]: W0514 00:00:40.781757 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:40.783555 kubelet[3512]: E0514 00:00:40.781783 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:40.820309 containerd[1922]: time="2025-05-14T00:00:40.818670273Z" level=info msg="connecting to shim fa62d15dca6170435719bcf9ba5723d78f3f2669f8ea07fbc6927efa52207973" address="unix:///run/containerd/s/9c542e4c3532d6bd66f1d2fdbf8c4bca791fa79b7baa95719939d19703d805ab" namespace=k8s.io protocol=ttrpc version=3
May 14 00:00:40.855028 systemd[1]: Started cri-containerd-fa62d15dca6170435719bcf9ba5723d78f3f2669f8ea07fbc6927efa52207973.scope - libcontainer container fa62d15dca6170435719bcf9ba5723d78f3f2669f8ea07fbc6927efa52207973.
May 14 00:00:40.860067 containerd[1922]: time="2025-05-14T00:00:40.860002378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tfwm7,Uid:f43f496d-2aab-4f69-a9c7-34444319a368,Namespace:calico-system,Attempt:0,}"
May 14 00:00:40.881690 kubelet[3512]: E0514 00:00:40.881659 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:40.881690 kubelet[3512]: W0514 00:00:40.881683 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:40.882227 kubelet[3512]: E0514 00:00:40.881708 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:40.883548 kubelet[3512]: E0514 00:00:40.883365 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:40.883548 kubelet[3512]: W0514 00:00:40.883402 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:40.883726 kubelet[3512]: E0514 00:00:40.883623 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:40.884538 kubelet[3512]: E0514 00:00:40.884132 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:40.884538 kubelet[3512]: W0514 00:00:40.884149 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:40.884538 kubelet[3512]: E0514 00:00:40.884211 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:40.888651 kubelet[3512]: E0514 00:00:40.887030 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:40.888651 kubelet[3512]: W0514 00:00:40.887054 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:40.888651 kubelet[3512]: E0514 00:00:40.887111 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:40.888651 kubelet[3512]: E0514 00:00:40.887628 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:40.888651 kubelet[3512]: W0514 00:00:40.887643 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:40.888651 kubelet[3512]: E0514 00:00:40.887689 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:40.888651 kubelet[3512]: E0514 00:00:40.888038 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:40.888651 kubelet[3512]: W0514 00:00:40.888051 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:40.888651 kubelet[3512]: E0514 00:00:40.888074 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:40.888651 kubelet[3512]: E0514 00:00:40.888312 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:40.889395 kubelet[3512]: W0514 00:00:40.888322 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:40.889395 kubelet[3512]: E0514 00:00:40.888362 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:40.891552 kubelet[3512]: E0514 00:00:40.891222 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:40.891552 kubelet[3512]: W0514 00:00:40.891242 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:40.892071 kubelet[3512]: E0514 00:00:40.892000 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:40.892591 kubelet[3512]: E0514 00:00:40.892412 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:40.892591 kubelet[3512]: W0514 00:00:40.892424 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:40.892591 kubelet[3512]: E0514 00:00:40.892456 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:40.893363 kubelet[3512]: E0514 00:00:40.893052 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:40.893363 kubelet[3512]: W0514 00:00:40.893073 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:40.893363 kubelet[3512]: E0514 00:00:40.893114 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:40.894074 kubelet[3512]: E0514 00:00:40.893934 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:40.894074 kubelet[3512]: W0514 00:00:40.893949 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:40.894074 kubelet[3512]: E0514 00:00:40.893980 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:40.897117 kubelet[3512]: E0514 00:00:40.896384 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:40.897117 kubelet[3512]: W0514 00:00:40.896402 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:40.897117 kubelet[3512]: E0514 00:00:40.896449 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:40.897117 kubelet[3512]: E0514 00:00:40.896703 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:40.897117 kubelet[3512]: W0514 00:00:40.896715 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:40.897117 kubelet[3512]: E0514 00:00:40.896747 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:40.897117 kubelet[3512]: E0514 00:00:40.897028 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:40.897117 kubelet[3512]: W0514 00:00:40.897039 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:40.897507 kubelet[3512]: E0514 00:00:40.897420 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:40.898320 kubelet[3512]: E0514 00:00:40.897695 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:40.898320 kubelet[3512]: W0514 00:00:40.897708 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:40.898320 kubelet[3512]: E0514 00:00:40.897844 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:40.900778 kubelet[3512]: E0514 00:00:40.898649 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:40.900778 kubelet[3512]: W0514 00:00:40.898663 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:40.900778 kubelet[3512]: E0514 00:00:40.898696 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:40.900778 kubelet[3512]: E0514 00:00:40.900559 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:40.900778 kubelet[3512]: W0514 00:00:40.900571 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:40.900778 kubelet[3512]: E0514 00:00:40.900730 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:40.901249 kubelet[3512]: E0514 00:00:40.901131 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:40.901249 kubelet[3512]: W0514 00:00:40.901145 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:40.901509 kubelet[3512]: E0514 00:00:40.901375 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:40.901626 kubelet[3512]: E0514 00:00:40.901613 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:40.901706 kubelet[3512]: W0514 00:00:40.901692 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:40.901812 kubelet[3512]: E0514 00:00:40.901792 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:40.902258 kubelet[3512]: E0514 00:00:40.902243 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:40.902566 kubelet[3512]: W0514 00:00:40.902425 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:40.904658 kubelet[3512]: E0514 00:00:40.904509 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:40.905084 kubelet[3512]: E0514 00:00:40.904714 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:40.905084 kubelet[3512]: W0514 00:00:40.904725 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:40.905084 kubelet[3512]: E0514 00:00:40.904752 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:40.905084 kubelet[3512]: E0514 00:00:40.904952 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:40.905084 kubelet[3512]: W0514 00:00:40.904963 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:40.905306 kubelet[3512]: E0514 00:00:40.905158 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:40.905306 kubelet[3512]: W0514 00:00:40.905167 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:40.905306 kubelet[3512]: E0514 00:00:40.905181 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:40.905444 kubelet[3512]: E0514 00:00:40.905429 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:40.905444 kubelet[3512]: W0514 00:00:40.905438 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:40.905518 kubelet[3512]: E0514 00:00:40.905451 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:40.907783 kubelet[3512]: E0514 00:00:40.905728 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:40.907783 kubelet[3512]: W0514 00:00:40.905742 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:40.907783 kubelet[3512]: E0514 00:00:40.905756 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:40.907783 kubelet[3512]: E0514 00:00:40.905802 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:40.941895 containerd[1922]: time="2025-05-14T00:00:40.941720897Z" level=info msg="connecting to shim 7528467c61632f19f8127c62b35687acc9c552ca33e0d30f42ff62bf77a59209" address="unix:///run/containerd/s/7c8d14da7ed0aa2c3167686446255127bda8397fb5ceb01c996e42ef9c30073c" namespace=k8s.io protocol=ttrpc version=3
May 14 00:00:40.943171 kubelet[3512]: E0514 00:00:40.943076 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:40.943171 kubelet[3512]: W0514 00:00:40.943097 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:40.943171 kubelet[3512]: E0514 00:00:40.943122 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:40.995175 systemd[1]: Started cri-containerd-7528467c61632f19f8127c62b35687acc9c552ca33e0d30f42ff62bf77a59209.scope - libcontainer container 7528467c61632f19f8127c62b35687acc9c552ca33e0d30f42ff62bf77a59209.
May 14 00:00:41.100498 containerd[1922]: time="2025-05-14T00:00:41.099996333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7557475897-m4mv4,Uid:7a86b7eb-b396-40d5-bb18-d3582f4d3989,Namespace:calico-system,Attempt:0,} returns sandbox id \"fa62d15dca6170435719bcf9ba5723d78f3f2669f8ea07fbc6927efa52207973\""
May 14 00:00:41.116021 containerd[1922]: time="2025-05-14T00:00:41.115348688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tfwm7,Uid:f43f496d-2aab-4f69-a9c7-34444319a368,Namespace:calico-system,Attempt:0,} returns sandbox id \"7528467c61632f19f8127c62b35687acc9c552ca33e0d30f42ff62bf77a59209\""
May 14 00:00:41.119904 containerd[1922]: time="2025-05-14T00:00:41.119861909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\""
May 14 00:00:42.930305 kubelet[3512]: E0514 00:00:42.929397 3512 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kcf7w" podUID="2a30ea9b-c110-4f18-9efa-bf49f1486efc"
May 14 00:00:43.313620 containerd[1922]: time="2025-05-14T00:00:43.313488095Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:43.314944 containerd[1922]: time="2025-05-14T00:00:43.314740151Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870"
May 14 00:00:43.316054 containerd[1922]: time="2025-05-14T00:00:43.316025253Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:43.318809 containerd[1922]: time="2025-05-14T00:00:43.318749544Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:43.319572 containerd[1922]: time="2025-05-14T00:00:43.319534532Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 2.199508382s"
May 14 00:00:43.319670 containerd[1922]: time="2025-05-14T00:00:43.319575959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\""
May 14 00:00:43.321171 containerd[1922]: time="2025-05-14T00:00:43.320921346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\""
May 14 00:00:43.340458 containerd[1922]: time="2025-05-14T00:00:43.339494477Z" level=info msg="CreateContainer within sandbox \"fa62d15dca6170435719bcf9ba5723d78f3f2669f8ea07fbc6927efa52207973\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
May 14 00:00:43.351024 containerd[1922]: time="2025-05-14T00:00:43.350064712Z" level=info msg="Container 2cde5a1b3b0da36ea35e6838bdfcdbdc1eeaf9b244780bd93c36d0a3f411e7b2: CDI devices from CRI Config.CDIDevices: []"
May 14 00:00:43.355025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1521755072.mount: Deactivated successfully.
May 14 00:00:43.364271 containerd[1922]: time="2025-05-14T00:00:43.362230345Z" level=info msg="CreateContainer within sandbox \"fa62d15dca6170435719bcf9ba5723d78f3f2669f8ea07fbc6927efa52207973\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2cde5a1b3b0da36ea35e6838bdfcdbdc1eeaf9b244780bd93c36d0a3f411e7b2\""
May 14 00:00:43.366693 containerd[1922]: time="2025-05-14T00:00:43.364936838Z" level=info msg="StartContainer for \"2cde5a1b3b0da36ea35e6838bdfcdbdc1eeaf9b244780bd93c36d0a3f411e7b2\""
May 14 00:00:43.366693 containerd[1922]: time="2025-05-14T00:00:43.366075170Z" level=info msg="connecting to shim 2cde5a1b3b0da36ea35e6838bdfcdbdc1eeaf9b244780bd93c36d0a3f411e7b2" address="unix:///run/containerd/s/9c542e4c3532d6bd66f1d2fdbf8c4bca791fa79b7baa95719939d19703d805ab" protocol=ttrpc version=3
May 14 00:00:43.391041 systemd[1]: Started cri-containerd-2cde5a1b3b0da36ea35e6838bdfcdbdc1eeaf9b244780bd93c36d0a3f411e7b2.scope - libcontainer container 2cde5a1b3b0da36ea35e6838bdfcdbdc1eeaf9b244780bd93c36d0a3f411e7b2.
May 14 00:00:43.455603 containerd[1922]: time="2025-05-14T00:00:43.455537130Z" level=info msg="StartContainer for \"2cde5a1b3b0da36ea35e6838bdfcdbdc1eeaf9b244780bd93c36d0a3f411e7b2\" returns successfully"
May 14 00:00:44.091975 kubelet[3512]: I0514 00:00:44.089643 3512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7557475897-m4mv4" podStartSLOduration=1.8862320380000002 podStartE2EDuration="4.089604401s" podCreationTimestamp="2025-05-14 00:00:40 +0000 UTC" firstStartedPulling="2025-05-14 00:00:41.11734185 +0000 UTC m=+22.366119409" lastFinishedPulling="2025-05-14 00:00:43.320714204 +0000 UTC m=+24.569491772" observedRunningTime="2025-05-14 00:00:44.089046021 +0000 UTC m=+25.337823580" watchObservedRunningTime="2025-05-14 00:00:44.089604401 +0000 UTC m=+25.338381977"
May 14 00:00:44.100319 kubelet[3512]: E0514 00:00:44.100281 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:44.100319 kubelet[3512]: W0514 00:00:44.100310 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:44.100559 kubelet[3512]: E0514 00:00:44.100336 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:44.100658 kubelet[3512]: E0514 00:00:44.100636 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:44.100658 kubelet[3512]: W0514 00:00:44.100652 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:44.100801 kubelet[3512]: E0514 00:00:44.100670 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:44.100981 kubelet[3512]: E0514 00:00:44.100959 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:44.100981 kubelet[3512]: W0514 00:00:44.100975 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:44.101111 kubelet[3512]: E0514 00:00:44.101001 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:44.101292 kubelet[3512]: E0514 00:00:44.101274 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:44.101292 kubelet[3512]: W0514 00:00:44.101288 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:44.101448 kubelet[3512]: E0514 00:00:44.101302 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:44.101562 kubelet[3512]: E0514 00:00:44.101545 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:44.101562 kubelet[3512]: W0514 00:00:44.101558 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:44.101716 kubelet[3512]: E0514 00:00:44.101570 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:44.102127 kubelet[3512]: E0514 00:00:44.102106 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:44.102127 kubelet[3512]: W0514 00:00:44.102121 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:44.102290 kubelet[3512]: E0514 00:00:44.102135 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:44.102421 kubelet[3512]: E0514 00:00:44.102405 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:44.102421 kubelet[3512]: W0514 00:00:44.102418 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:44.102534 kubelet[3512]: E0514 00:00:44.102432 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:44.102663 kubelet[3512]: E0514 00:00:44.102648 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:44.102663 kubelet[3512]: W0514 00:00:44.102660 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:44.102797 kubelet[3512]: E0514 00:00:44.102673 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:44.102918 kubelet[3512]: E0514 00:00:44.102903 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:44.102975 kubelet[3512]: W0514 00:00:44.102917 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:44.102975 kubelet[3512]: E0514 00:00:44.102929 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:44.103158 kubelet[3512]: E0514 00:00:44.103142 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:44.103158 kubelet[3512]: W0514 00:00:44.103154 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:44.103362 kubelet[3512]: E0514 00:00:44.103166 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:44.103431 kubelet[3512]: E0514 00:00:44.103369 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:44.103431 kubelet[3512]: W0514 00:00:44.103379 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:44.103431 kubelet[3512]: E0514 00:00:44.103391 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:44.103688 kubelet[3512]: E0514 00:00:44.103670 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:44.103688 kubelet[3512]: W0514 00:00:44.103684 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:44.103839 kubelet[3512]: E0514 00:00:44.103697 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:44.103955 kubelet[3512]: E0514 00:00:44.103939 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:44.103955 kubelet[3512]: W0514 00:00:44.103953 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:44.104118 kubelet[3512]: E0514 00:00:44.103965 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:44.104345 kubelet[3512]: E0514 00:00:44.104328 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:44.104345 kubelet[3512]: W0514 00:00:44.104342 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:44.104453 kubelet[3512]: E0514 00:00:44.104356 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:44.104591 kubelet[3512]: E0514 00:00:44.104573 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:44.104591 kubelet[3512]: W0514 00:00:44.104587 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:44.104692 kubelet[3512]: E0514 00:00:44.104601 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:44.124947 kubelet[3512]: E0514 00:00:44.124902 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:44.125155 kubelet[3512]: W0514 00:00:44.125023 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:44.125155 kubelet[3512]: E0514 00:00:44.125047 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:44.125789 kubelet[3512]: E0514 00:00:44.125679 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:44.125789 kubelet[3512]: W0514 00:00:44.125692 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:44.125789 kubelet[3512]: E0514 00:00:44.125730 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:44.126553 kubelet[3512]: E0514 00:00:44.126363 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:44.126553 kubelet[3512]: W0514 00:00:44.126373 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:44.126712 kubelet[3512]: E0514 00:00:44.126633 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:44.127032 kubelet[3512]: E0514 00:00:44.126897 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:44.127032 kubelet[3512]: W0514 00:00:44.126907 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:44.127032 kubelet[3512]: E0514 00:00:44.126919 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:44.127395 kubelet[3512]: E0514 00:00:44.127297 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:44.127395 kubelet[3512]: W0514 00:00:44.127322 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:44.127395 kubelet[3512]: E0514 00:00:44.127346 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:44.127713 kubelet[3512]: E0514 00:00:44.127632 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:44.127713 kubelet[3512]: W0514 00:00:44.127641 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:44.127713 kubelet[3512]: E0514 00:00:44.127661 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:44.128123 kubelet[3512]: E0514 00:00:44.127994 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:44.128123 kubelet[3512]: W0514 00:00:44.128004 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:44.128123 kubelet[3512]: E0514 00:00:44.128020 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:44.128616 kubelet[3512]: E0514 00:00:44.128574 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:44.128616 kubelet[3512]: W0514 00:00:44.128583 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:44.128986 kubelet[3512]: E0514 00:00:44.128974 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:44.129682 kubelet[3512]: E0514 00:00:44.129560 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:44.129682 kubelet[3512]: W0514 00:00:44.129571 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:44.130269 kubelet[3512]: E0514 00:00:44.129793 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:44.130796 kubelet[3512]: E0514 00:00:44.130727 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:44.130796 kubelet[3512]: W0514 00:00:44.130745 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:44.130954 kubelet[3512]: E0514 00:00:44.130942 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:44.130954 kubelet[3512]: W0514 00:00:44.130951 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:44.132870 kubelet[3512]: E0514 00:00:44.131419 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:44.132870 kubelet[3512]: E0514 00:00:44.131445 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:44.132870 kubelet[3512]: E0514 00:00:44.131882 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:44.132870 kubelet[3512]: W0514 00:00:44.131891 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:44.132870 kubelet[3512]: E0514 00:00:44.131905 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:44.132870 kubelet[3512]: E0514 00:00:44.132266 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:44.132870 kubelet[3512]: W0514 00:00:44.132274 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:44.132870 kubelet[3512]: E0514 00:00:44.132288 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:44.132870 kubelet[3512]: E0514 00:00:44.132507 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:44.132870 kubelet[3512]: W0514 00:00:44.132514 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:44.133265 kubelet[3512]: E0514 00:00:44.132547 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:44.133265 kubelet[3512]: E0514 00:00:44.132736 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:44.133265 kubelet[3512]: W0514 00:00:44.132754 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:44.133265 kubelet[3512]: E0514 00:00:44.132802 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:44.133265 kubelet[3512]: E0514 00:00:44.133028 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:44.133265 kubelet[3512]: W0514 00:00:44.133036 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:44.133265 kubelet[3512]: E0514 00:00:44.133070 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:44.133700 kubelet[3512]: E0514 00:00:44.133464 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:44.133700 kubelet[3512]: W0514 00:00:44.133479 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:44.133700 kubelet[3512]: E0514 00:00:44.133488 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:44.134480 kubelet[3512]: E0514 00:00:44.133722 3512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:44.134480 kubelet[3512]: W0514 00:00:44.133729 3512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:44.134480 kubelet[3512]: E0514 00:00:44.133738 3512 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:44.599780 containerd[1922]: time="2025-05-14T00:00:44.599721296Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:44.601588 containerd[1922]: time="2025-05-14T00:00:44.601419358Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 14 00:00:44.603832 containerd[1922]: time="2025-05-14T00:00:44.603789854Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:44.607526 containerd[1922]: time="2025-05-14T00:00:44.606836468Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:44.607526 containerd[1922]: time="2025-05-14T00:00:44.607401828Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.286446212s" May 14 00:00:44.607526 containerd[1922]: time="2025-05-14T00:00:44.607432182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 14 00:00:44.610937 containerd[1922]: time="2025-05-14T00:00:44.610895835Z" level=info msg="CreateContainer within sandbox \"7528467c61632f19f8127c62b35687acc9c552ca33e0d30f42ff62bf77a59209\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 14 00:00:44.627789 containerd[1922]: time="2025-05-14T00:00:44.626254819Z" level=info msg="Container 8360cd9574c282b7f2697130627923998a0d55cdbd237bbf9f25668c776e52ef: CDI devices from CRI Config.CDIDevices: []" May 14 00:00:44.663065 containerd[1922]: time="2025-05-14T00:00:44.663014826Z" level=info msg="CreateContainer within sandbox \"7528467c61632f19f8127c62b35687acc9c552ca33e0d30f42ff62bf77a59209\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8360cd9574c282b7f2697130627923998a0d55cdbd237bbf9f25668c776e52ef\"" May 14 00:00:44.664813 containerd[1922]: time="2025-05-14T00:00:44.663911409Z" level=info msg="StartContainer for \"8360cd9574c282b7f2697130627923998a0d55cdbd237bbf9f25668c776e52ef\"" May 14 00:00:44.667561 containerd[1922]: time="2025-05-14T00:00:44.667270800Z" level=info msg="connecting to shim 8360cd9574c282b7f2697130627923998a0d55cdbd237bbf9f25668c776e52ef" address="unix:///run/containerd/s/7c8d14da7ed0aa2c3167686446255127bda8397fb5ceb01c996e42ef9c30073c" protocol=ttrpc version=3 May 14 00:00:44.707277 systemd[1]: Started cri-containerd-8360cd9574c282b7f2697130627923998a0d55cdbd237bbf9f25668c776e52ef.scope - libcontainer container 
8360cd9574c282b7f2697130627923998a0d55cdbd237bbf9f25668c776e52ef. May 14 00:00:44.790202 containerd[1922]: time="2025-05-14T00:00:44.789552089Z" level=info msg="StartContainer for \"8360cd9574c282b7f2697130627923998a0d55cdbd237bbf9f25668c776e52ef\" returns successfully" May 14 00:00:44.800656 systemd[1]: cri-containerd-8360cd9574c282b7f2697130627923998a0d55cdbd237bbf9f25668c776e52ef.scope: Deactivated successfully. May 14 00:00:44.824554 containerd[1922]: time="2025-05-14T00:00:44.824500757Z" level=info msg="received exit event container_id:\"8360cd9574c282b7f2697130627923998a0d55cdbd237bbf9f25668c776e52ef\" id:\"8360cd9574c282b7f2697130627923998a0d55cdbd237bbf9f25668c776e52ef\" pid:4158 exited_at:{seconds:1747180844 nanos:803036916}" May 14 00:00:44.893549 containerd[1922]: time="2025-05-14T00:00:44.893498743Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8360cd9574c282b7f2697130627923998a0d55cdbd237bbf9f25668c776e52ef\" id:\"8360cd9574c282b7f2697130627923998a0d55cdbd237bbf9f25668c776e52ef\" pid:4158 exited_at:{seconds:1747180844 nanos:803036916}" May 14 00:00:44.893631 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8360cd9574c282b7f2697130627923998a0d55cdbd237bbf9f25668c776e52ef-rootfs.mount: Deactivated successfully. 
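[Editor's note] The long run of "unexpected end of JSON input" errors above is the kubelet invoking the FlexVolume driver binary (`/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds`) with the `init` argument and expecting a JSON status object on stdout; while the flexvol-driver container is still being pulled, the executable is missing, the call produces empty output, and JSON unmarshalling fails. A minimal sketch of both sides — the field names follow the FlexVolume driver convention, and the exact capabilities map shown is an assumption, not taken from this log:

```python
import json

# What the kubelet receives while the driver binary is missing: empty output.
driver_output = ""

try:
    json.loads(driver_output)
except json.JSONDecodeError as err:
    # Mirrors the kubelet's driver-call failure on empty driver output.
    print(f"unmarshal failed: {err.msg}")

# Once the flexvol-driver container has installed the binary, a successful
# `init` call prints a status object along these lines (capabilities map
# is an assumption for a uds-style driver):
init_response = json.dumps({
    "status": "Success",
    "capabilities": {"attach": False},
})
parsed = json.loads(init_response)
print(parsed["status"])  # → Success
```

Once the flexvol-driver container below finishes, the errors stop because the binary exists and emits valid JSON.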
May 14 00:00:44.924235 kubelet[3512]: E0514 00:00:44.924177 3512 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kcf7w" podUID="2a30ea9b-c110-4f18-9efa-bf49f1486efc" May 14 00:00:45.070297 containerd[1922]: time="2025-05-14T00:00:45.068234654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 14 00:00:46.923577 kubelet[3512]: E0514 00:00:46.923528 3512 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kcf7w" podUID="2a30ea9b-c110-4f18-9efa-bf49f1486efc" May 14 00:00:48.926151 kubelet[3512]: E0514 00:00:48.926102 3512 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kcf7w" podUID="2a30ea9b-c110-4f18-9efa-bf49f1486efc" May 14 00:00:48.938035 containerd[1922]: time="2025-05-14T00:00:48.936033681Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:48.940638 containerd[1922]: time="2025-05-14T00:00:48.940375604Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 14 00:00:48.943791 containerd[1922]: time="2025-05-14T00:00:48.943723678Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:48.950715 containerd[1922]: 
time="2025-05-14T00:00:48.949182779Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:48.950715 containerd[1922]: time="2025-05-14T00:00:48.950384417Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 3.882090347s" May 14 00:00:48.950715 containerd[1922]: time="2025-05-14T00:00:48.950424382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 14 00:00:48.954679 containerd[1922]: time="2025-05-14T00:00:48.953497612Z" level=info msg="CreateContainer within sandbox \"7528467c61632f19f8127c62b35687acc9c552ca33e0d30f42ff62bf77a59209\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 14 00:00:48.973981 containerd[1922]: time="2025-05-14T00:00:48.962913626Z" level=info msg="Container 6ec8579cbbe464526f1d2d94f6c1a7943fe39346a0842bbc2659806d987229ad: CDI devices from CRI Config.CDIDevices: []" May 14 00:00:48.985406 containerd[1922]: time="2025-05-14T00:00:48.984796404Z" level=info msg="CreateContainer within sandbox \"7528467c61632f19f8127c62b35687acc9c552ca33e0d30f42ff62bf77a59209\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6ec8579cbbe464526f1d2d94f6c1a7943fe39346a0842bbc2659806d987229ad\"" May 14 00:00:48.987553 containerd[1922]: time="2025-05-14T00:00:48.987509490Z" level=info msg="StartContainer for \"6ec8579cbbe464526f1d2d94f6c1a7943fe39346a0842bbc2659806d987229ad\"" May 14 00:00:48.990045 containerd[1922]: time="2025-05-14T00:00:48.990009976Z" 
level=info msg="connecting to shim 6ec8579cbbe464526f1d2d94f6c1a7943fe39346a0842bbc2659806d987229ad" address="unix:///run/containerd/s/7c8d14da7ed0aa2c3167686446255127bda8397fb5ceb01c996e42ef9c30073c" protocol=ttrpc version=3 May 14 00:00:49.071204 systemd[1]: Started cri-containerd-6ec8579cbbe464526f1d2d94f6c1a7943fe39346a0842bbc2659806d987229ad.scope - libcontainer container 6ec8579cbbe464526f1d2d94f6c1a7943fe39346a0842bbc2659806d987229ad. May 14 00:00:49.187523 containerd[1922]: time="2025-05-14T00:00:49.186215096Z" level=info msg="StartContainer for \"6ec8579cbbe464526f1d2d94f6c1a7943fe39346a0842bbc2659806d987229ad\" returns successfully" May 14 00:00:50.215071 systemd[1]: cri-containerd-6ec8579cbbe464526f1d2d94f6c1a7943fe39346a0842bbc2659806d987229ad.scope: Deactivated successfully. May 14 00:00:50.215424 systemd[1]: cri-containerd-6ec8579cbbe464526f1d2d94f6c1a7943fe39346a0842bbc2659806d987229ad.scope: Consumed 576ms CPU time, 150.5M memory peak, 2.6M read from disk, 154M written to disk. May 14 00:00:50.219548 containerd[1922]: time="2025-05-14T00:00:50.219162608Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6ec8579cbbe464526f1d2d94f6c1a7943fe39346a0842bbc2659806d987229ad\" id:\"6ec8579cbbe464526f1d2d94f6c1a7943fe39346a0842bbc2659806d987229ad\" pid:4214 exited_at:{seconds:1747180850 nanos:218469496}" May 14 00:00:50.219548 containerd[1922]: time="2025-05-14T00:00:50.219326399Z" level=info msg="received exit event container_id:\"6ec8579cbbe464526f1d2d94f6c1a7943fe39346a0842bbc2659806d987229ad\" id:\"6ec8579cbbe464526f1d2d94f6c1a7943fe39346a0842bbc2659806d987229ad\" pid:4214 exited_at:{seconds:1747180850 nanos:218469496}" May 14 00:00:50.264218 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ec8579cbbe464526f1d2d94f6c1a7943fe39346a0842bbc2659806d987229ad-rootfs.mount: Deactivated successfully. 
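[Editor's note] The containerd TaskExit events report `exited_at` as a protobuf timestamp: whole seconds since the Unix epoch plus a nanoseconds field. As a sanity check, the seconds value from the install-cni exit event above converts back to the wall-clock time printed by the surrounding journal lines:

```python
from datetime import datetime, timezone

# exited_at from the 6ec8579c... TaskExit event: seconds:1747180850 nanos:218469496
exit_seconds = 1747180850

# Interpret the epoch seconds as a UTC wall-clock time.
dt = datetime.fromtimestamp(exit_seconds, tz=timezone.utc)
print(dt.strftime("%Y-%m-%dT%H:%M:%SZ"))  # → 2025-05-14T00:00:50Z
```

The nanos field (218469496) supplies the sub-second part, matching the `00:00:50.219...` journal timestamps around the event.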
May 14 00:00:50.336569 kubelet[3512]: I0514 00:00:50.336539 3512 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 14 00:00:50.422574 kubelet[3512]: I0514 00:00:50.421301 3512 topology_manager.go:215] "Topology Admit Handler" podUID="3b07b1c0-188b-40ea-92e9-aa86b8abc988" podNamespace="kube-system" podName="coredns-7db6d8ff4d-v7d9h" May 14 00:00:50.429288 kubelet[3512]: I0514 00:00:50.429258 3512 topology_manager.go:215] "Topology Admit Handler" podUID="71157674-7fe2-4e8f-bb49-022129d62a20" podNamespace="calico-system" podName="calico-kube-controllers-7f6c7cb986-p92jl" May 14 00:00:50.430699 systemd[1]: Created slice kubepods-burstable-pod3b07b1c0_188b_40ea_92e9_aa86b8abc988.slice - libcontainer container kubepods-burstable-pod3b07b1c0_188b_40ea_92e9_aa86b8abc988.slice. May 14 00:00:50.432486 kubelet[3512]: I0514 00:00:50.431936 3512 topology_manager.go:215] "Topology Admit Handler" podUID="da0fc96c-f65b-4c3d-8a2c-ddb48b838498" podNamespace="kube-system" podName="coredns-7db6d8ff4d-bh8zd" May 14 00:00:50.435747 kubelet[3512]: I0514 00:00:50.435715 3512 topology_manager.go:215] "Topology Admit Handler" podUID="647f705a-6d3d-4056-a3ed-afedaf4ccff2" podNamespace="calico-apiserver" podName="calico-apiserver-598998cb9f-blz4x" May 14 00:00:50.436154 kubelet[3512]: I0514 00:00:50.436130 3512 topology_manager.go:215] "Topology Admit Handler" podUID="428635f4-931f-4c1e-93c7-3e4ebe5febeb" podNamespace="calico-apiserver" podName="calico-apiserver-598998cb9f-9vhsk" May 14 00:00:50.445071 systemd[1]: Created slice kubepods-burstable-podda0fc96c_f65b_4c3d_8a2c_ddb48b838498.slice - libcontainer container kubepods-burstable-podda0fc96c_f65b_4c3d_8a2c_ddb48b838498.slice. May 14 00:00:50.455424 systemd[1]: Created slice kubepods-besteffort-pod71157674_7fe2_4e8f_bb49_022129d62a20.slice - libcontainer container kubepods-besteffort-pod71157674_7fe2_4e8f_bb49_022129d62a20.slice. 
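[Editor's note] The `kubepods-burstable-pod...slice` and `kubepods-besteffort-pod...slice` units created above follow the kubelet's systemd cgroup-driver naming convention: the pod's QoS class plus its UID with dashes replaced by underscores, because `-` is systemd's hierarchy separator inside slice names. A sketch of that mapping — the helper name is ours for illustration, not a kubelet API:

```python
def pod_slice_name(pod_uid: str, qos_class: str) -> str:
    """Derive the kubepods slice name for a pod, as the systemd cgroup driver does.

    Guaranteed pods omit the QoS segment; burstable and besteffort include it.
    """
    # '-' is reserved as the slice hierarchy separator, so UIDs use '_'.
    uid = pod_uid.replace("-", "_")
    if qos_class == "guaranteed":
        return f"kubepods-pod{uid}.slice"
    return f"kubepods-{qos_class}-pod{uid}.slice"

# Matches the coredns pod slice created in the log above:
print(pod_slice_name("3b07b1c0-188b-40ea-92e9-aa86b8abc988", "burstable"))
# → kubepods-burstable-pod3b07b1c0_188b_40ea_92e9_aa86b8abc988.slice
```

The besteffort slices for the calico pods below (e.g. pod UID 71157674-7fe2-4e8f-bb49-022129d62a20) follow the same pattern.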
May 14 00:00:50.467338 systemd[1]: Created slice kubepods-besteffort-pod647f705a_6d3d_4056_a3ed_afedaf4ccff2.slice - libcontainer container kubepods-besteffort-pod647f705a_6d3d_4056_a3ed_afedaf4ccff2.slice. May 14 00:00:50.477235 systemd[1]: Created slice kubepods-besteffort-pod428635f4_931f_4c1e_93c7_3e4ebe5febeb.slice - libcontainer container kubepods-besteffort-pod428635f4_931f_4c1e_93c7_3e4ebe5febeb.slice. May 14 00:00:50.492355 kubelet[3512]: I0514 00:00:50.492312 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b07b1c0-188b-40ea-92e9-aa86b8abc988-config-volume\") pod \"coredns-7db6d8ff4d-v7d9h\" (UID: \"3b07b1c0-188b-40ea-92e9-aa86b8abc988\") " pod="kube-system/coredns-7db6d8ff4d-v7d9h" May 14 00:00:50.492594 kubelet[3512]: I0514 00:00:50.492565 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da0fc96c-f65b-4c3d-8a2c-ddb48b838498-config-volume\") pod \"coredns-7db6d8ff4d-bh8zd\" (UID: \"da0fc96c-f65b-4c3d-8a2c-ddb48b838498\") " pod="kube-system/coredns-7db6d8ff4d-bh8zd" May 14 00:00:50.492694 kubelet[3512]: I0514 00:00:50.492682 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4kwm\" (UniqueName: \"kubernetes.io/projected/da0fc96c-f65b-4c3d-8a2c-ddb48b838498-kube-api-access-f4kwm\") pod \"coredns-7db6d8ff4d-bh8zd\" (UID: \"da0fc96c-f65b-4c3d-8a2c-ddb48b838498\") " pod="kube-system/coredns-7db6d8ff4d-bh8zd" May 14 00:00:50.492843 kubelet[3512]: I0514 00:00:50.492802 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsctd\" (UniqueName: \"kubernetes.io/projected/3b07b1c0-188b-40ea-92e9-aa86b8abc988-kube-api-access-zsctd\") pod \"coredns-7db6d8ff4d-v7d9h\" (UID: \"3b07b1c0-188b-40ea-92e9-aa86b8abc988\") " 
pod="kube-system/coredns-7db6d8ff4d-v7d9h" May 14 00:00:50.492955 kubelet[3512]: I0514 00:00:50.492928 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7gvw\" (UniqueName: \"kubernetes.io/projected/71157674-7fe2-4e8f-bb49-022129d62a20-kube-api-access-t7gvw\") pod \"calico-kube-controllers-7f6c7cb986-p92jl\" (UID: \"71157674-7fe2-4e8f-bb49-022129d62a20\") " pod="calico-system/calico-kube-controllers-7f6c7cb986-p92jl" May 14 00:00:50.493055 kubelet[3512]: I0514 00:00:50.493044 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/428635f4-931f-4c1e-93c7-3e4ebe5febeb-calico-apiserver-certs\") pod \"calico-apiserver-598998cb9f-9vhsk\" (UID: \"428635f4-931f-4c1e-93c7-3e4ebe5febeb\") " pod="calico-apiserver/calico-apiserver-598998cb9f-9vhsk" May 14 00:00:50.493608 kubelet[3512]: I0514 00:00:50.493139 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71157674-7fe2-4e8f-bb49-022129d62a20-tigera-ca-bundle\") pod \"calico-kube-controllers-7f6c7cb986-p92jl\" (UID: \"71157674-7fe2-4e8f-bb49-022129d62a20\") " pod="calico-system/calico-kube-controllers-7f6c7cb986-p92jl" May 14 00:00:50.493608 kubelet[3512]: I0514 00:00:50.493161 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcxdk\" (UniqueName: \"kubernetes.io/projected/428635f4-931f-4c1e-93c7-3e4ebe5febeb-kube-api-access-xcxdk\") pod \"calico-apiserver-598998cb9f-9vhsk\" (UID: \"428635f4-931f-4c1e-93c7-3e4ebe5febeb\") " pod="calico-apiserver/calico-apiserver-598998cb9f-9vhsk" May 14 00:00:50.493608 kubelet[3512]: I0514 00:00:50.493203 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4ll8\" (UniqueName: 
\"kubernetes.io/projected/647f705a-6d3d-4056-a3ed-afedaf4ccff2-kube-api-access-q4ll8\") pod \"calico-apiserver-598998cb9f-blz4x\" (UID: \"647f705a-6d3d-4056-a3ed-afedaf4ccff2\") " pod="calico-apiserver/calico-apiserver-598998cb9f-blz4x" May 14 00:00:50.493608 kubelet[3512]: I0514 00:00:50.493224 3512 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/647f705a-6d3d-4056-a3ed-afedaf4ccff2-calico-apiserver-certs\") pod \"calico-apiserver-598998cb9f-blz4x\" (UID: \"647f705a-6d3d-4056-a3ed-afedaf4ccff2\") " pod="calico-apiserver/calico-apiserver-598998cb9f-blz4x" May 14 00:00:50.742020 containerd[1922]: time="2025-05-14T00:00:50.741898925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-v7d9h,Uid:3b07b1c0-188b-40ea-92e9-aa86b8abc988,Namespace:kube-system,Attempt:0,}" May 14 00:00:50.752628 containerd[1922]: time="2025-05-14T00:00:50.752556605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bh8zd,Uid:da0fc96c-f65b-4c3d-8a2c-ddb48b838498,Namespace:kube-system,Attempt:0,}" May 14 00:00:50.791200 containerd[1922]: time="2025-05-14T00:00:50.790957286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-598998cb9f-9vhsk,Uid:428635f4-931f-4c1e-93c7-3e4ebe5febeb,Namespace:calico-apiserver,Attempt:0,}" May 14 00:00:50.792171 containerd[1922]: time="2025-05-14T00:00:50.792133292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f6c7cb986-p92jl,Uid:71157674-7fe2-4e8f-bb49-022129d62a20,Namespace:calico-system,Attempt:0,}" May 14 00:00:50.792570 containerd[1922]: time="2025-05-14T00:00:50.792426340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-598998cb9f-blz4x,Uid:647f705a-6d3d-4056-a3ed-afedaf4ccff2,Namespace:calico-apiserver,Attempt:0,}" May 14 00:00:50.933149 systemd[1]: Created slice 
kubepods-besteffort-pod2a30ea9b_c110_4f18_9efa_bf49f1486efc.slice - libcontainer container kubepods-besteffort-pod2a30ea9b_c110_4f18_9efa_bf49f1486efc.slice. May 14 00:00:50.937175 containerd[1922]: time="2025-05-14T00:00:50.937131305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kcf7w,Uid:2a30ea9b-c110-4f18-9efa-bf49f1486efc,Namespace:calico-system,Attempt:0,}" May 14 00:00:51.196825 containerd[1922]: time="2025-05-14T00:00:51.196709566Z" level=error msg="Failed to destroy network for sandbox \"d853edc65eaafed0b0d43f7bedf8179e64d051c08132f04864342fda9f7fdf9c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:00:51.203692 containerd[1922]: time="2025-05-14T00:00:51.202925639Z" level=error msg="Failed to destroy network for sandbox \"8fd6d176bc35f2fb3e1d5ae675b45875c38891524f29f93b59b5cfe24fa5d648\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:00:51.267607 containerd[1922]: time="2025-05-14T00:00:51.205495596Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f6c7cb986-p92jl,Uid:71157674-7fe2-4e8f-bb49-022129d62a20,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fd6d176bc35f2fb3e1d5ae675b45875c38891524f29f93b59b5cfe24fa5d648\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:00:51.268235 containerd[1922]: time="2025-05-14T00:00:51.221273310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 14 00:00:51.274100 containerd[1922]: time="2025-05-14T00:00:51.222960748Z" 
level=error msg="Failed to destroy network for sandbox \"e4aa37aac55697f13d8aae7433f2737725ce069a25d0f3a933723c2b8ae785be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:00:51.278250 containerd[1922]: time="2025-05-14T00:00:51.275119819Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-v7d9h,Uid:3b07b1c0-188b-40ea-92e9-aa86b8abc988,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4aa37aac55697f13d8aae7433f2737725ce069a25d0f3a933723c2b8ae785be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:00:51.278250 containerd[1922]: time="2025-05-14T00:00:51.228679806Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bh8zd,Uid:da0fc96c-f65b-4c3d-8a2c-ddb48b838498,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d853edc65eaafed0b0d43f7bedf8179e64d051c08132f04864342fda9f7fdf9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:00:51.278250 containerd[1922]: time="2025-05-14T00:00:51.246042805Z" level=error msg="Failed to destroy network for sandbox \"8e8fa02bb1f9c217e863ad096db68400be401bb0a7402b0154e225d944c9c58a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:00:51.277951 systemd[1]: run-netns-cni\x2d63153411\x2d1977\x2d960e\x2d71fa\x2d9043ce1004a4.mount: Deactivated successfully. 
May 14 00:00:51.278871 containerd[1922]: time="2025-05-14T00:00:51.278385756Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kcf7w,Uid:2a30ea9b-c110-4f18-9efa-bf49f1486efc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e8fa02bb1f9c217e863ad096db68400be401bb0a7402b0154e225d944c9c58a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:00:51.278871 containerd[1922]: time="2025-05-14T00:00:51.246858863Z" level=error msg="Failed to destroy network for sandbox \"490b9c6887d2af19ce5c87daab52a420c8cb9b761a8a23353bc1b1a23ac6e524\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:00:51.284367 containerd[1922]: time="2025-05-14T00:00:51.280517561Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-598998cb9f-blz4x,Uid:647f705a-6d3d-4056-a3ed-afedaf4ccff2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"490b9c6887d2af19ce5c87daab52a420c8cb9b761a8a23353bc1b1a23ac6e524\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:00:51.284367 containerd[1922]: time="2025-05-14T00:00:51.263031494Z" level=error msg="Failed to destroy network for sandbox \"5c6d7345b38dd16a0db8972fbd1af7deaf2db5ffb786ef32731693f3c5356f97\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:00:51.284367 containerd[1922]: 
time="2025-05-14T00:00:51.282399066Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-598998cb9f-9vhsk,Uid:428635f4-931f-4c1e-93c7-3e4ebe5febeb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c6d7345b38dd16a0db8972fbd1af7deaf2db5ffb786ef32731693f3c5356f97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:00:51.284659 kubelet[3512]: E0514 00:00:51.283871 3512 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fd6d176bc35f2fb3e1d5ae675b45875c38891524f29f93b59b5cfe24fa5d648\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:00:51.284659 kubelet[3512]: E0514 00:00:51.283966 3512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fd6d176bc35f2fb3e1d5ae675b45875c38891524f29f93b59b5cfe24fa5d648\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f6c7cb986-p92jl" May 14 00:00:51.284659 kubelet[3512]: E0514 00:00:51.283995 3512 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fd6d176bc35f2fb3e1d5ae675b45875c38891524f29f93b59b5cfe24fa5d648\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-7f6c7cb986-p92jl" May 14 00:00:51.284823 kubelet[3512]: E0514 00:00:51.284054 3512 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f6c7cb986-p92jl_calico-system(71157674-7fe2-4e8f-bb49-022129d62a20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f6c7cb986-p92jl_calico-system(71157674-7fe2-4e8f-bb49-022129d62a20)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8fd6d176bc35f2fb3e1d5ae675b45875c38891524f29f93b59b5cfe24fa5d648\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f6c7cb986-p92jl" podUID="71157674-7fe2-4e8f-bb49-022129d62a20" May 14 00:00:51.286914 kubelet[3512]: E0514 00:00:51.284890 3512 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c6d7345b38dd16a0db8972fbd1af7deaf2db5ffb786ef32731693f3c5356f97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:00:51.286914 kubelet[3512]: E0514 00:00:51.284947 3512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c6d7345b38dd16a0db8972fbd1af7deaf2db5ffb786ef32731693f3c5356f97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-598998cb9f-9vhsk" May 14 00:00:51.286914 kubelet[3512]: E0514 00:00:51.284973 3512 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c6d7345b38dd16a0db8972fbd1af7deaf2db5ffb786ef32731693f3c5356f97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-598998cb9f-9vhsk" May 14 00:00:51.286305 systemd[1]: run-netns-cni\x2d6fc09edf\x2daf75\x2d42f4\x2d0a77\x2d27dbdd03ef36.mount: Deactivated successfully. May 14 00:00:51.287241 kubelet[3512]: E0514 00:00:51.285021 3512 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-598998cb9f-9vhsk_calico-apiserver(428635f4-931f-4c1e-93c7-3e4ebe5febeb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-598998cb9f-9vhsk_calico-apiserver(428635f4-931f-4c1e-93c7-3e4ebe5febeb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5c6d7345b38dd16a0db8972fbd1af7deaf2db5ffb786ef32731693f3c5356f97\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-598998cb9f-9vhsk" podUID="428635f4-931f-4c1e-93c7-3e4ebe5febeb" May 14 00:00:51.287241 kubelet[3512]: E0514 00:00:51.285077 3512 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4aa37aac55697f13d8aae7433f2737725ce069a25d0f3a933723c2b8ae785be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:00:51.287241 kubelet[3512]: E0514 00:00:51.285107 3512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"e4aa37aac55697f13d8aae7433f2737725ce069a25d0f3a933723c2b8ae785be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-v7d9h" May 14 00:00:51.286451 systemd[1]: run-netns-cni\x2d5a703640\x2dde81\x2dd229\x2d2efc\x2d18b6a99232bf.mount: Deactivated successfully. May 14 00:00:51.287533 kubelet[3512]: E0514 00:00:51.285130 3512 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4aa37aac55697f13d8aae7433f2737725ce069a25d0f3a933723c2b8ae785be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-v7d9h" May 14 00:00:51.287533 kubelet[3512]: E0514 00:00:51.285163 3512 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-v7d9h_kube-system(3b07b1c0-188b-40ea-92e9-aa86b8abc988)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-v7d9h_kube-system(3b07b1c0-188b-40ea-92e9-aa86b8abc988)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4aa37aac55697f13d8aae7433f2737725ce069a25d0f3a933723c2b8ae785be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-v7d9h" podUID="3b07b1c0-188b-40ea-92e9-aa86b8abc988" May 14 00:00:51.287533 kubelet[3512]: E0514 00:00:51.285198 3512 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d853edc65eaafed0b0d43f7bedf8179e64d051c08132f04864342fda9f7fdf9c\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:00:51.286536 systemd[1]: run-netns-cni\x2d340b07f8\x2dbdd0\x2dd953\x2d536a\x2d3018b4cb0774.mount: Deactivated successfully. May 14 00:00:51.290882 kubelet[3512]: E0514 00:00:51.285223 3512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d853edc65eaafed0b0d43f7bedf8179e64d051c08132f04864342fda9f7fdf9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bh8zd" May 14 00:00:51.290882 kubelet[3512]: E0514 00:00:51.285241 3512 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d853edc65eaafed0b0d43f7bedf8179e64d051c08132f04864342fda9f7fdf9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bh8zd" May 14 00:00:51.290882 kubelet[3512]: E0514 00:00:51.285270 3512 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-bh8zd_kube-system(da0fc96c-f65b-4c3d-8a2c-ddb48b838498)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-bh8zd_kube-system(da0fc96c-f65b-4c3d-8a2c-ddb48b838498)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d853edc65eaafed0b0d43f7bedf8179e64d051c08132f04864342fda9f7fdf9c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bh8zd" podUID="da0fc96c-f65b-4c3d-8a2c-ddb48b838498" May 14 00:00:51.291135 kubelet[3512]: E0514 00:00:51.285304 3512 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e8fa02bb1f9c217e863ad096db68400be401bb0a7402b0154e225d944c9c58a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:00:51.291135 kubelet[3512]: E0514 00:00:51.285332 3512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e8fa02bb1f9c217e863ad096db68400be401bb0a7402b0154e225d944c9c58a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kcf7w" May 14 00:00:51.291135 kubelet[3512]: E0514 00:00:51.285353 3512 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e8fa02bb1f9c217e863ad096db68400be401bb0a7402b0154e225d944c9c58a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kcf7w" May 14 00:00:51.291395 kubelet[3512]: E0514 00:00:51.285387 3512 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kcf7w_calico-system(2a30ea9b-c110-4f18-9efa-bf49f1486efc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kcf7w_calico-system(2a30ea9b-c110-4f18-9efa-bf49f1486efc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"8e8fa02bb1f9c217e863ad096db68400be401bb0a7402b0154e225d944c9c58a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kcf7w" podUID="2a30ea9b-c110-4f18-9efa-bf49f1486efc" May 14 00:00:51.291395 kubelet[3512]: E0514 00:00:51.285425 3512 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"490b9c6887d2af19ce5c87daab52a420c8cb9b761a8a23353bc1b1a23ac6e524\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:00:51.291395 kubelet[3512]: E0514 00:00:51.285462 3512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"490b9c6887d2af19ce5c87daab52a420c8cb9b761a8a23353bc1b1a23ac6e524\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-598998cb9f-blz4x" May 14 00:00:51.293697 kubelet[3512]: E0514 00:00:51.285482 3512 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"490b9c6887d2af19ce5c87daab52a420c8cb9b761a8a23353bc1b1a23ac6e524\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-598998cb9f-blz4x" May 14 00:00:51.293697 kubelet[3512]: E0514 00:00:51.285514 3512 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-598998cb9f-blz4x_calico-apiserver(647f705a-6d3d-4056-a3ed-afedaf4ccff2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-598998cb9f-blz4x_calico-apiserver(647f705a-6d3d-4056-a3ed-afedaf4ccff2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"490b9c6887d2af19ce5c87daab52a420c8cb9b761a8a23353bc1b1a23ac6e524\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-598998cb9f-blz4x" podUID="647f705a-6d3d-4056-a3ed-afedaf4ccff2" May 14 00:00:54.713571 systemd[1]: Started sshd@9-172.31.19.86:22-147.75.109.163:48638.service - OpenSSH per-connection server daemon (147.75.109.163:48638). May 14 00:00:54.970171 sshd[4441]: Accepted publickey for core from 147.75.109.163 port 48638 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4 May 14 00:00:54.975302 sshd-session[4441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:00:55.003225 systemd-logind[1897]: New session 10 of user core. May 14 00:00:55.008131 systemd[1]: Started session-10.scope - Session 10 of User core. May 14 00:00:55.387461 sshd[4448]: Connection closed by 147.75.109.163 port 48638 May 14 00:00:55.386177 sshd-session[4441]: pam_unix(sshd:session): session closed for user core May 14 00:00:55.393388 systemd[1]: sshd@9-172.31.19.86:22-147.75.109.163:48638.service: Deactivated successfully. May 14 00:00:55.397980 systemd[1]: session-10.scope: Deactivated successfully. May 14 00:00:55.400437 systemd-logind[1897]: Session 10 logged out. Waiting for processes to exit. May 14 00:00:55.402410 systemd-logind[1897]: Removed session 10. May 14 00:00:57.501597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount159439105.mount: Deactivated successfully. 
May 14 00:00:57.764862 containerd[1922]: time="2025-05-14T00:00:57.727080277Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 14 00:00:57.764862 containerd[1922]: time="2025-05-14T00:00:57.763531178Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:57.778344 containerd[1922]: time="2025-05-14T00:00:57.776920593Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:57.778344 containerd[1922]: time="2025-05-14T00:00:57.777660365Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:57.779897 containerd[1922]: time="2025-05-14T00:00:57.779845923Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 6.509040643s" May 14 00:00:57.786930 containerd[1922]: time="2025-05-14T00:00:57.786875208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 14 00:00:57.947940 containerd[1922]: time="2025-05-14T00:00:57.947885310Z" level=info msg="CreateContainer within sandbox \"7528467c61632f19f8127c62b35687acc9c552ca33e0d30f42ff62bf77a59209\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 14 00:00:58.091135 containerd[1922]: time="2025-05-14T00:00:58.090958658Z" level=info msg="Container 
30ec80bfe2908ffa0af894674dc76ee7ad580fa47466629ab27552c8b8caaa68: CDI devices from CRI Config.CDIDevices: []" May 14 00:00:58.097670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2634223450.mount: Deactivated successfully. May 14 00:00:58.253319 containerd[1922]: time="2025-05-14T00:00:58.252990651Z" level=info msg="CreateContainer within sandbox \"7528467c61632f19f8127c62b35687acc9c552ca33e0d30f42ff62bf77a59209\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"30ec80bfe2908ffa0af894674dc76ee7ad580fa47466629ab27552c8b8caaa68\"" May 14 00:00:58.292721 containerd[1922]: time="2025-05-14T00:00:58.292673483Z" level=info msg="StartContainer for \"30ec80bfe2908ffa0af894674dc76ee7ad580fa47466629ab27552c8b8caaa68\"" May 14 00:00:58.307284 containerd[1922]: time="2025-05-14T00:00:58.307246502Z" level=info msg="connecting to shim 30ec80bfe2908ffa0af894674dc76ee7ad580fa47466629ab27552c8b8caaa68" address="unix:///run/containerd/s/7c8d14da7ed0aa2c3167686446255127bda8397fb5ceb01c996e42ef9c30073c" protocol=ttrpc version=3 May 14 00:00:58.430265 systemd[1]: Started cri-containerd-30ec80bfe2908ffa0af894674dc76ee7ad580fa47466629ab27552c8b8caaa68.scope - libcontainer container 30ec80bfe2908ffa0af894674dc76ee7ad580fa47466629ab27552c8b8caaa68. May 14 00:00:58.506757 containerd[1922]: time="2025-05-14T00:00:58.505367706Z" level=info msg="StartContainer for \"30ec80bfe2908ffa0af894674dc76ee7ad580fa47466629ab27552c8b8caaa68\" returns successfully" May 14 00:00:58.709710 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 14 00:00:58.711691 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 14 00:01:00.297627 kubelet[3512]: I0514 00:01:00.297312 3512 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 00:01:00.425314 systemd[1]: Started sshd@10-172.31.19.86:22-147.75.109.163:34120.service - OpenSSH per-connection server daemon (147.75.109.163:34120). 
May 14 00:01:00.700740 sshd[4577]: Accepted publickey for core from 147.75.109.163 port 34120 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4 May 14 00:01:00.704575 sshd-session[4577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:00.718842 systemd-logind[1897]: New session 11 of user core. May 14 00:01:00.724251 systemd[1]: Started session-11.scope - Session 11 of User core. May 14 00:01:00.913885 kernel: bpftool[4657]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 14 00:01:01.274405 sshd[4616]: Connection closed by 147.75.109.163 port 34120 May 14 00:01:01.278732 sshd-session[4577]: pam_unix(sshd:session): session closed for user core May 14 00:01:01.293591 systemd-logind[1897]: Session 11 logged out. Waiting for processes to exit. May 14 00:01:01.294285 systemd[1]: sshd@10-172.31.19.86:22-147.75.109.163:34120.service: Deactivated successfully. May 14 00:01:01.298591 kubelet[3512]: I0514 00:01:01.297181 3512 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 00:01:01.305648 systemd[1]: session-11.scope: Deactivated successfully. May 14 00:01:01.309605 systemd-logind[1897]: Removed session 11. May 14 00:01:01.417703 (udev-worker)[4498]: Network interface NamePolicy= disabled on kernel command line. May 14 00:01:01.423520 systemd-networkd[1826]: vxlan.calico: Link UP May 14 00:01:01.423531 systemd-networkd[1826]: vxlan.calico: Gained carrier May 14 00:01:01.470752 (udev-worker)[4717]: Network interface NamePolicy= disabled on kernel command line. 
May 14 00:01:01.602599 containerd[1922]: time="2025-05-14T00:01:01.600744370Z" level=info msg="TaskExit event in podsandbox handler container_id:\"30ec80bfe2908ffa0af894674dc76ee7ad580fa47466629ab27552c8b8caaa68\" id:\"df3f35dd4a5947ea081f0c8223d044146fd31333849831c817ba18800a473f5d\" pid:4733 exit_status:1 exited_at:{seconds:1747180861 nanos:599946356}" May 14 00:01:01.968638 containerd[1922]: time="2025-05-14T00:01:01.968584423Z" level=info msg="TaskExit event in podsandbox handler container_id:\"30ec80bfe2908ffa0af894674dc76ee7ad580fa47466629ab27552c8b8caaa68\" id:\"2c1bd5df4712f1f3653438a2a113d562109362498ad05013228698ded70047d0\" pid:4763 exit_status:1 exited_at:{seconds:1747180861 nanos:968031291}" May 14 00:01:02.823520 systemd-networkd[1826]: vxlan.calico: Gained IPv6LL May 14 00:01:02.929064 containerd[1922]: time="2025-05-14T00:01:02.927982833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-598998cb9f-blz4x,Uid:647f705a-6d3d-4056-a3ed-afedaf4ccff2,Namespace:calico-apiserver,Attempt:0,}" May 14 00:01:03.925844 containerd[1922]: time="2025-05-14T00:01:03.925672004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-v7d9h,Uid:3b07b1c0-188b-40ea-92e9-aa86b8abc988,Namespace:kube-system,Attempt:0,}" May 14 00:01:03.926515 containerd[1922]: time="2025-05-14T00:01:03.925672003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kcf7w,Uid:2a30ea9b-c110-4f18-9efa-bf49f1486efc,Namespace:calico-system,Attempt:0,}" May 14 00:01:04.115645 (udev-worker)[4497]: Network interface NamePolicy= disabled on kernel command line. 
May 14 00:01:04.121206 systemd-networkd[1826]: cali5a93202a97a: Link UP May 14 00:01:04.123003 systemd-networkd[1826]: cali5a93202a97a: Gained carrier May 14 00:01:04.162288 kubelet[3512]: I0514 00:01:04.158508 3512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tfwm7" podStartSLOduration=7.481736463 podStartE2EDuration="24.1569646s" podCreationTimestamp="2025-05-14 00:00:40 +0000 UTC" firstStartedPulling="2025-05-14 00:00:41.123425076 +0000 UTC m=+22.372202644" lastFinishedPulling="2025-05-14 00:00:57.798653222 +0000 UTC m=+39.047430781" observedRunningTime="2025-05-14 00:00:59.334686528 +0000 UTC m=+40.583464127" watchObservedRunningTime="2025-05-14 00:01:04.1569646 +0000 UTC m=+45.405742178" May 14 00:01:04.177554 containerd[1922]: 2025-05-14 00:01:03.231 [INFO][4807] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--86-k8s-calico--apiserver--598998cb9f--blz4x-eth0 calico-apiserver-598998cb9f- calico-apiserver 647f705a-6d3d-4056-a3ed-afedaf4ccff2 722 0 2025-05-14 00:00:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:598998cb9f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-86 calico-apiserver-598998cb9f-blz4x eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5a93202a97a [] []}} ContainerID="3f030f987dc8f3e52d0117875f7247ef80393bc6dc6dd9fc2db9ff1227ab1ee8" Namespace="calico-apiserver" Pod="calico-apiserver-598998cb9f-blz4x" WorkloadEndpoint="ip--172--31--19--86-k8s-calico--apiserver--598998cb9f--blz4x-" May 14 00:01:04.177554 containerd[1922]: 2025-05-14 00:01:03.236 [INFO][4807] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3f030f987dc8f3e52d0117875f7247ef80393bc6dc6dd9fc2db9ff1227ab1ee8" 
Namespace="calico-apiserver" Pod="calico-apiserver-598998cb9f-blz4x" WorkloadEndpoint="ip--172--31--19--86-k8s-calico--apiserver--598998cb9f--blz4x-eth0" May 14 00:01:04.177554 containerd[1922]: 2025-05-14 00:01:03.905 [INFO][4820] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3f030f987dc8f3e52d0117875f7247ef80393bc6dc6dd9fc2db9ff1227ab1ee8" HandleID="k8s-pod-network.3f030f987dc8f3e52d0117875f7247ef80393bc6dc6dd9fc2db9ff1227ab1ee8" Workload="ip--172--31--19--86-k8s-calico--apiserver--598998cb9f--blz4x-eth0" May 14 00:01:04.177554 containerd[1922]: 2025-05-14 00:01:04.016 [INFO][4820] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3f030f987dc8f3e52d0117875f7247ef80393bc6dc6dd9fc2db9ff1227ab1ee8" HandleID="k8s-pod-network.3f030f987dc8f3e52d0117875f7247ef80393bc6dc6dd9fc2db9ff1227ab1ee8" Workload="ip--172--31--19--86-k8s-calico--apiserver--598998cb9f--blz4x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024ecd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-19-86", "pod":"calico-apiserver-598998cb9f-blz4x", "timestamp":"2025-05-14 00:01:03.905679097 +0000 UTC"}, Hostname:"ip-172-31-19-86", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 00:01:04.177554 containerd[1922]: 2025-05-14 00:01:04.016 [INFO][4820] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 00:01:04.177554 containerd[1922]: 2025-05-14 00:01:04.017 [INFO][4820] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 00:01:04.177554 containerd[1922]: 2025-05-14 00:01:04.017 [INFO][4820] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-86' May 14 00:01:04.177554 containerd[1922]: 2025-05-14 00:01:04.024 [INFO][4820] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3f030f987dc8f3e52d0117875f7247ef80393bc6dc6dd9fc2db9ff1227ab1ee8" host="ip-172-31-19-86" May 14 00:01:04.177554 containerd[1922]: 2025-05-14 00:01:04.047 [INFO][4820] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-86" May 14 00:01:04.177554 containerd[1922]: 2025-05-14 00:01:04.063 [INFO][4820] ipam/ipam.go 489: Trying affinity for 192.168.21.0/26 host="ip-172-31-19-86" May 14 00:01:04.177554 containerd[1922]: 2025-05-14 00:01:04.067 [INFO][4820] ipam/ipam.go 155: Attempting to load block cidr=192.168.21.0/26 host="ip-172-31-19-86" May 14 00:01:04.177554 containerd[1922]: 2025-05-14 00:01:04.074 [INFO][4820] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.21.0/26 host="ip-172-31-19-86" May 14 00:01:04.177554 containerd[1922]: 2025-05-14 00:01:04.074 [INFO][4820] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.21.0/26 handle="k8s-pod-network.3f030f987dc8f3e52d0117875f7247ef80393bc6dc6dd9fc2db9ff1227ab1ee8" host="ip-172-31-19-86" May 14 00:01:04.177554 containerd[1922]: 2025-05-14 00:01:04.077 [INFO][4820] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3f030f987dc8f3e52d0117875f7247ef80393bc6dc6dd9fc2db9ff1227ab1ee8 May 14 00:01:04.177554 containerd[1922]: 2025-05-14 00:01:04.087 [INFO][4820] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.21.0/26 handle="k8s-pod-network.3f030f987dc8f3e52d0117875f7247ef80393bc6dc6dd9fc2db9ff1227ab1ee8" host="ip-172-31-19-86" May 14 00:01:04.177554 containerd[1922]: 2025-05-14 00:01:04.099 [INFO][4820] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.21.1/26] block=192.168.21.0/26 
handle="k8s-pod-network.3f030f987dc8f3e52d0117875f7247ef80393bc6dc6dd9fc2db9ff1227ab1ee8" host="ip-172-31-19-86" May 14 00:01:04.177554 containerd[1922]: 2025-05-14 00:01:04.099 [INFO][4820] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.21.1/26] handle="k8s-pod-network.3f030f987dc8f3e52d0117875f7247ef80393bc6dc6dd9fc2db9ff1227ab1ee8" host="ip-172-31-19-86" May 14 00:01:04.177554 containerd[1922]: 2025-05-14 00:01:04.100 [INFO][4820] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 00:01:04.177554 containerd[1922]: 2025-05-14 00:01:04.100 [INFO][4820] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.1/26] IPv6=[] ContainerID="3f030f987dc8f3e52d0117875f7247ef80393bc6dc6dd9fc2db9ff1227ab1ee8" HandleID="k8s-pod-network.3f030f987dc8f3e52d0117875f7247ef80393bc6dc6dd9fc2db9ff1227ab1ee8" Workload="ip--172--31--19--86-k8s-calico--apiserver--598998cb9f--blz4x-eth0" May 14 00:01:04.182945 containerd[1922]: 2025-05-14 00:01:04.104 [INFO][4807] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3f030f987dc8f3e52d0117875f7247ef80393bc6dc6dd9fc2db9ff1227ab1ee8" Namespace="calico-apiserver" Pod="calico-apiserver-598998cb9f-blz4x" WorkloadEndpoint="ip--172--31--19--86-k8s-calico--apiserver--598998cb9f--blz4x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--86-k8s-calico--apiserver--598998cb9f--blz4x-eth0", GenerateName:"calico-apiserver-598998cb9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"647f705a-6d3d-4056-a3ed-afedaf4ccff2", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 0, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"598998cb9f", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-86", ContainerID:"", Pod:"calico-apiserver-598998cb9f-blz4x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5a93202a97a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:01:04.182945 containerd[1922]: 2025-05-14 00:01:04.104 [INFO][4807] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.21.1/32] ContainerID="3f030f987dc8f3e52d0117875f7247ef80393bc6dc6dd9fc2db9ff1227ab1ee8" Namespace="calico-apiserver" Pod="calico-apiserver-598998cb9f-blz4x" WorkloadEndpoint="ip--172--31--19--86-k8s-calico--apiserver--598998cb9f--blz4x-eth0" May 14 00:01:04.182945 containerd[1922]: 2025-05-14 00:01:04.105 [INFO][4807] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5a93202a97a ContainerID="3f030f987dc8f3e52d0117875f7247ef80393bc6dc6dd9fc2db9ff1227ab1ee8" Namespace="calico-apiserver" Pod="calico-apiserver-598998cb9f-blz4x" WorkloadEndpoint="ip--172--31--19--86-k8s-calico--apiserver--598998cb9f--blz4x-eth0" May 14 00:01:04.182945 containerd[1922]: 2025-05-14 00:01:04.126 [INFO][4807] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3f030f987dc8f3e52d0117875f7247ef80393bc6dc6dd9fc2db9ff1227ab1ee8" Namespace="calico-apiserver" Pod="calico-apiserver-598998cb9f-blz4x" WorkloadEndpoint="ip--172--31--19--86-k8s-calico--apiserver--598998cb9f--blz4x-eth0" May 14 00:01:04.182945 containerd[1922]: 2025-05-14 00:01:04.131 [INFO][4807] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="3f030f987dc8f3e52d0117875f7247ef80393bc6dc6dd9fc2db9ff1227ab1ee8" Namespace="calico-apiserver" Pod="calico-apiserver-598998cb9f-blz4x" WorkloadEndpoint="ip--172--31--19--86-k8s-calico--apiserver--598998cb9f--blz4x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--86-k8s-calico--apiserver--598998cb9f--blz4x-eth0", GenerateName:"calico-apiserver-598998cb9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"647f705a-6d3d-4056-a3ed-afedaf4ccff2", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 0, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"598998cb9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-86", ContainerID:"3f030f987dc8f3e52d0117875f7247ef80393bc6dc6dd9fc2db9ff1227ab1ee8", Pod:"calico-apiserver-598998cb9f-blz4x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5a93202a97a", MAC:"12:bf:23:92:57:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:01:04.182945 containerd[1922]: 2025-05-14 00:01:04.160 [INFO][4807] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="3f030f987dc8f3e52d0117875f7247ef80393bc6dc6dd9fc2db9ff1227ab1ee8" Namespace="calico-apiserver" Pod="calico-apiserver-598998cb9f-blz4x" WorkloadEndpoint="ip--172--31--19--86-k8s-calico--apiserver--598998cb9f--blz4x-eth0" May 14 00:01:04.309626 containerd[1922]: time="2025-05-14T00:01:04.309418431Z" level=info msg="connecting to shim 3f030f987dc8f3e52d0117875f7247ef80393bc6dc6dd9fc2db9ff1227ab1ee8" address="unix:///run/containerd/s/6fbf3d3132e4924342082e8beb137b5bf222d5a32c810c5869d2019bb459d6a1" namespace=k8s.io protocol=ttrpc version=3 May 14 00:01:04.369086 systemd-networkd[1826]: calie4c24dc2e91: Link UP May 14 00:01:04.369905 systemd-networkd[1826]: calie4c24dc2e91: Gained carrier May 14 00:01:04.394237 systemd[1]: Started cri-containerd-3f030f987dc8f3e52d0117875f7247ef80393bc6dc6dd9fc2db9ff1227ab1ee8.scope - libcontainer container 3f030f987dc8f3e52d0117875f7247ef80393bc6dc6dd9fc2db9ff1227ab1ee8. May 14 00:01:04.427918 containerd[1922]: 2025-05-14 00:01:04.083 [INFO][4827] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--86-k8s-csi--node--driver--kcf7w-eth0 csi-node-driver- calico-system 2a30ea9b-c110-4f18-9efa-bf49f1486efc 634 0 2025-05-14 00:00:40 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-19-86 csi-node-driver-kcf7w eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie4c24dc2e91 [] []}} ContainerID="56a95712796fb5a7b959d0f52844f09e0e89c27a8ec2236e77c7cf87b1bc5bcd" Namespace="calico-system" Pod="csi-node-driver-kcf7w" WorkloadEndpoint="ip--172--31--19--86-k8s-csi--node--driver--kcf7w-" May 14 00:01:04.427918 containerd[1922]: 2025-05-14 00:01:04.083 [INFO][4827] 
cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="56a95712796fb5a7b959d0f52844f09e0e89c27a8ec2236e77c7cf87b1bc5bcd" Namespace="calico-system" Pod="csi-node-driver-kcf7w" WorkloadEndpoint="ip--172--31--19--86-k8s-csi--node--driver--kcf7w-eth0" May 14 00:01:04.427918 containerd[1922]: 2025-05-14 00:01:04.217 [INFO][4851] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="56a95712796fb5a7b959d0f52844f09e0e89c27a8ec2236e77c7cf87b1bc5bcd" HandleID="k8s-pod-network.56a95712796fb5a7b959d0f52844f09e0e89c27a8ec2236e77c7cf87b1bc5bcd" Workload="ip--172--31--19--86-k8s-csi--node--driver--kcf7w-eth0" May 14 00:01:04.427918 containerd[1922]: 2025-05-14 00:01:04.247 [INFO][4851] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="56a95712796fb5a7b959d0f52844f09e0e89c27a8ec2236e77c7cf87b1bc5bcd" HandleID="k8s-pod-network.56a95712796fb5a7b959d0f52844f09e0e89c27a8ec2236e77c7cf87b1bc5bcd" Workload="ip--172--31--19--86-k8s-csi--node--driver--kcf7w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000331080), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-86", "pod":"csi-node-driver-kcf7w", "timestamp":"2025-05-14 00:01:04.21774146 +0000 UTC"}, Hostname:"ip-172-31-19-86", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 00:01:04.427918 containerd[1922]: 2025-05-14 00:01:04.247 [INFO][4851] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 00:01:04.427918 containerd[1922]: 2025-05-14 00:01:04.247 [INFO][4851] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 00:01:04.427918 containerd[1922]: 2025-05-14 00:01:04.247 [INFO][4851] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-86' May 14 00:01:04.427918 containerd[1922]: 2025-05-14 00:01:04.260 [INFO][4851] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.56a95712796fb5a7b959d0f52844f09e0e89c27a8ec2236e77c7cf87b1bc5bcd" host="ip-172-31-19-86" May 14 00:01:04.427918 containerd[1922]: 2025-05-14 00:01:04.275 [INFO][4851] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-86" May 14 00:01:04.427918 containerd[1922]: 2025-05-14 00:01:04.289 [INFO][4851] ipam/ipam.go 489: Trying affinity for 192.168.21.0/26 host="ip-172-31-19-86" May 14 00:01:04.427918 containerd[1922]: 2025-05-14 00:01:04.292 [INFO][4851] ipam/ipam.go 155: Attempting to load block cidr=192.168.21.0/26 host="ip-172-31-19-86" May 14 00:01:04.427918 containerd[1922]: 2025-05-14 00:01:04.300 [INFO][4851] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.21.0/26 host="ip-172-31-19-86" May 14 00:01:04.427918 containerd[1922]: 2025-05-14 00:01:04.300 [INFO][4851] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.21.0/26 handle="k8s-pod-network.56a95712796fb5a7b959d0f52844f09e0e89c27a8ec2236e77c7cf87b1bc5bcd" host="ip-172-31-19-86" May 14 00:01:04.427918 containerd[1922]: 2025-05-14 00:01:04.305 [INFO][4851] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.56a95712796fb5a7b959d0f52844f09e0e89c27a8ec2236e77c7cf87b1bc5bcd May 14 00:01:04.427918 containerd[1922]: 2025-05-14 00:01:04.331 [INFO][4851] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.21.0/26 handle="k8s-pod-network.56a95712796fb5a7b959d0f52844f09e0e89c27a8ec2236e77c7cf87b1bc5bcd" host="ip-172-31-19-86" May 14 00:01:04.427918 containerd[1922]: 2025-05-14 00:01:04.346 [INFO][4851] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.21.2/26] block=192.168.21.0/26 
handle="k8s-pod-network.56a95712796fb5a7b959d0f52844f09e0e89c27a8ec2236e77c7cf87b1bc5bcd" host="ip-172-31-19-86" May 14 00:01:04.427918 containerd[1922]: 2025-05-14 00:01:04.346 [INFO][4851] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.21.2/26] handle="k8s-pod-network.56a95712796fb5a7b959d0f52844f09e0e89c27a8ec2236e77c7cf87b1bc5bcd" host="ip-172-31-19-86" May 14 00:01:04.427918 containerd[1922]: 2025-05-14 00:01:04.347 [INFO][4851] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 00:01:04.427918 containerd[1922]: 2025-05-14 00:01:04.348 [INFO][4851] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.2/26] IPv6=[] ContainerID="56a95712796fb5a7b959d0f52844f09e0e89c27a8ec2236e77c7cf87b1bc5bcd" HandleID="k8s-pod-network.56a95712796fb5a7b959d0f52844f09e0e89c27a8ec2236e77c7cf87b1bc5bcd" Workload="ip--172--31--19--86-k8s-csi--node--driver--kcf7w-eth0" May 14 00:01:04.432412 containerd[1922]: 2025-05-14 00:01:04.356 [INFO][4827] cni-plugin/k8s.go 386: Populated endpoint ContainerID="56a95712796fb5a7b959d0f52844f09e0e89c27a8ec2236e77c7cf87b1bc5bcd" Namespace="calico-system" Pod="csi-node-driver-kcf7w" WorkloadEndpoint="ip--172--31--19--86-k8s-csi--node--driver--kcf7w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--86-k8s-csi--node--driver--kcf7w-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2a30ea9b-c110-4f18-9efa-bf49f1486efc", ResourceVersion:"634", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 0, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-86", ContainerID:"", Pod:"csi-node-driver-kcf7w", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.21.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie4c24dc2e91", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:01:04.432412 containerd[1922]: 2025-05-14 00:01:04.356 [INFO][4827] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.21.2/32] ContainerID="56a95712796fb5a7b959d0f52844f09e0e89c27a8ec2236e77c7cf87b1bc5bcd" Namespace="calico-system" Pod="csi-node-driver-kcf7w" WorkloadEndpoint="ip--172--31--19--86-k8s-csi--node--driver--kcf7w-eth0" May 14 00:01:04.432412 containerd[1922]: 2025-05-14 00:01:04.356 [INFO][4827] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie4c24dc2e91 ContainerID="56a95712796fb5a7b959d0f52844f09e0e89c27a8ec2236e77c7cf87b1bc5bcd" Namespace="calico-system" Pod="csi-node-driver-kcf7w" WorkloadEndpoint="ip--172--31--19--86-k8s-csi--node--driver--kcf7w-eth0" May 14 00:01:04.432412 containerd[1922]: 2025-05-14 00:01:04.370 [INFO][4827] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="56a95712796fb5a7b959d0f52844f09e0e89c27a8ec2236e77c7cf87b1bc5bcd" Namespace="calico-system" Pod="csi-node-driver-kcf7w" WorkloadEndpoint="ip--172--31--19--86-k8s-csi--node--driver--kcf7w-eth0" May 14 00:01:04.432412 containerd[1922]: 2025-05-14 00:01:04.376 [INFO][4827] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="56a95712796fb5a7b959d0f52844f09e0e89c27a8ec2236e77c7cf87b1bc5bcd" Namespace="calico-system" Pod="csi-node-driver-kcf7w" WorkloadEndpoint="ip--172--31--19--86-k8s-csi--node--driver--kcf7w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--86-k8s-csi--node--driver--kcf7w-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2a30ea9b-c110-4f18-9efa-bf49f1486efc", ResourceVersion:"634", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 0, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-86", ContainerID:"56a95712796fb5a7b959d0f52844f09e0e89c27a8ec2236e77c7cf87b1bc5bcd", Pod:"csi-node-driver-kcf7w", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.21.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie4c24dc2e91", MAC:"4a:16:2b:c1:3c:36", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:01:04.432412 containerd[1922]: 2025-05-14 00:01:04.415 [INFO][4827] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="56a95712796fb5a7b959d0f52844f09e0e89c27a8ec2236e77c7cf87b1bc5bcd" Namespace="calico-system" 
Pod="csi-node-driver-kcf7w" WorkloadEndpoint="ip--172--31--19--86-k8s-csi--node--driver--kcf7w-eth0" May 14 00:01:04.508022 containerd[1922]: time="2025-05-14T00:01:04.507736267Z" level=info msg="connecting to shim 56a95712796fb5a7b959d0f52844f09e0e89c27a8ec2236e77c7cf87b1bc5bcd" address="unix:///run/containerd/s/aa95021e27b40f412216a0f3c6499ae918eac2e0ea82bef706b5ae7f65af4376" namespace=k8s.io protocol=ttrpc version=3 May 14 00:01:04.550195 systemd-networkd[1826]: cali8026da67695: Link UP May 14 00:01:04.554631 systemd-networkd[1826]: cali8026da67695: Gained carrier May 14 00:01:04.597227 systemd[1]: Started cri-containerd-56a95712796fb5a7b959d0f52844f09e0e89c27a8ec2236e77c7cf87b1bc5bcd.scope - libcontainer container 56a95712796fb5a7b959d0f52844f09e0e89c27a8ec2236e77c7cf87b1bc5bcd. May 14 00:01:04.610067 containerd[1922]: 2025-05-14 00:01:04.131 [INFO][4834] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--86-k8s-coredns--7db6d8ff4d--v7d9h-eth0 coredns-7db6d8ff4d- kube-system 3b07b1c0-188b-40ea-92e9-aa86b8abc988 718 0 2025-05-14 00:00:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-19-86 coredns-7db6d8ff4d-v7d9h eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8026da67695 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="353b04f363a5ce060c5a2ec8b54837a66012e6dfa1fabebc33c826f0841d2e79" Namespace="kube-system" Pod="coredns-7db6d8ff4d-v7d9h" WorkloadEndpoint="ip--172--31--19--86-k8s-coredns--7db6d8ff4d--v7d9h-" May 14 00:01:04.610067 containerd[1922]: 2025-05-14 00:01:04.131 [INFO][4834] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="353b04f363a5ce060c5a2ec8b54837a66012e6dfa1fabebc33c826f0841d2e79" Namespace="kube-system" Pod="coredns-7db6d8ff4d-v7d9h" 
WorkloadEndpoint="ip--172--31--19--86-k8s-coredns--7db6d8ff4d--v7d9h-eth0" May 14 00:01:04.610067 containerd[1922]: 2025-05-14 00:01:04.254 [INFO][4864] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="353b04f363a5ce060c5a2ec8b54837a66012e6dfa1fabebc33c826f0841d2e79" HandleID="k8s-pod-network.353b04f363a5ce060c5a2ec8b54837a66012e6dfa1fabebc33c826f0841d2e79" Workload="ip--172--31--19--86-k8s-coredns--7db6d8ff4d--v7d9h-eth0" May 14 00:01:04.610067 containerd[1922]: 2025-05-14 00:01:04.298 [INFO][4864] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="353b04f363a5ce060c5a2ec8b54837a66012e6dfa1fabebc33c826f0841d2e79" HandleID="k8s-pod-network.353b04f363a5ce060c5a2ec8b54837a66012e6dfa1fabebc33c826f0841d2e79" Workload="ip--172--31--19--86-k8s-coredns--7db6d8ff4d--v7d9h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000311890), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-19-86", "pod":"coredns-7db6d8ff4d-v7d9h", "timestamp":"2025-05-14 00:01:04.254319736 +0000 UTC"}, Hostname:"ip-172-31-19-86", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 00:01:04.610067 containerd[1922]: 2025-05-14 00:01:04.298 [INFO][4864] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 00:01:04.610067 containerd[1922]: 2025-05-14 00:01:04.346 [INFO][4864] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 00:01:04.610067 containerd[1922]: 2025-05-14 00:01:04.346 [INFO][4864] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-86' May 14 00:01:04.610067 containerd[1922]: 2025-05-14 00:01:04.352 [INFO][4864] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.353b04f363a5ce060c5a2ec8b54837a66012e6dfa1fabebc33c826f0841d2e79" host="ip-172-31-19-86" May 14 00:01:04.610067 containerd[1922]: 2025-05-14 00:01:04.373 [INFO][4864] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-86" May 14 00:01:04.610067 containerd[1922]: 2025-05-14 00:01:04.416 [INFO][4864] ipam/ipam.go 489: Trying affinity for 192.168.21.0/26 host="ip-172-31-19-86" May 14 00:01:04.610067 containerd[1922]: 2025-05-14 00:01:04.432 [INFO][4864] ipam/ipam.go 155: Attempting to load block cidr=192.168.21.0/26 host="ip-172-31-19-86" May 14 00:01:04.610067 containerd[1922]: 2025-05-14 00:01:04.439 [INFO][4864] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.21.0/26 host="ip-172-31-19-86" May 14 00:01:04.610067 containerd[1922]: 2025-05-14 00:01:04.440 [INFO][4864] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.21.0/26 handle="k8s-pod-network.353b04f363a5ce060c5a2ec8b54837a66012e6dfa1fabebc33c826f0841d2e79" host="ip-172-31-19-86" May 14 00:01:04.610067 containerd[1922]: 2025-05-14 00:01:04.445 [INFO][4864] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.353b04f363a5ce060c5a2ec8b54837a66012e6dfa1fabebc33c826f0841d2e79 May 14 00:01:04.610067 containerd[1922]: 2025-05-14 00:01:04.462 [INFO][4864] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.21.0/26 handle="k8s-pod-network.353b04f363a5ce060c5a2ec8b54837a66012e6dfa1fabebc33c826f0841d2e79" host="ip-172-31-19-86" May 14 00:01:04.610067 containerd[1922]: 2025-05-14 00:01:04.514 [INFO][4864] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.21.3/26] block=192.168.21.0/26 
handle="k8s-pod-network.353b04f363a5ce060c5a2ec8b54837a66012e6dfa1fabebc33c826f0841d2e79" host="ip-172-31-19-86" May 14 00:01:04.610067 containerd[1922]: 2025-05-14 00:01:04.514 [INFO][4864] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.21.3/26] handle="k8s-pod-network.353b04f363a5ce060c5a2ec8b54837a66012e6dfa1fabebc33c826f0841d2e79" host="ip-172-31-19-86" May 14 00:01:04.610067 containerd[1922]: 2025-05-14 00:01:04.514 [INFO][4864] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 00:01:04.610067 containerd[1922]: 2025-05-14 00:01:04.514 [INFO][4864] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.3/26] IPv6=[] ContainerID="353b04f363a5ce060c5a2ec8b54837a66012e6dfa1fabebc33c826f0841d2e79" HandleID="k8s-pod-network.353b04f363a5ce060c5a2ec8b54837a66012e6dfa1fabebc33c826f0841d2e79" Workload="ip--172--31--19--86-k8s-coredns--7db6d8ff4d--v7d9h-eth0" May 14 00:01:04.614908 containerd[1922]: 2025-05-14 00:01:04.524 [INFO][4834] cni-plugin/k8s.go 386: Populated endpoint ContainerID="353b04f363a5ce060c5a2ec8b54837a66012e6dfa1fabebc33c826f0841d2e79" Namespace="kube-system" Pod="coredns-7db6d8ff4d-v7d9h" WorkloadEndpoint="ip--172--31--19--86-k8s-coredns--7db6d8ff4d--v7d9h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--86-k8s-coredns--7db6d8ff4d--v7d9h-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3b07b1c0-188b-40ea-92e9-aa86b8abc988", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 0, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-86", ContainerID:"", Pod:"coredns-7db6d8ff4d-v7d9h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8026da67695", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:01:04.614908 containerd[1922]: 2025-05-14 00:01:04.525 [INFO][4834] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.21.3/32] ContainerID="353b04f363a5ce060c5a2ec8b54837a66012e6dfa1fabebc33c826f0841d2e79" Namespace="kube-system" Pod="coredns-7db6d8ff4d-v7d9h" WorkloadEndpoint="ip--172--31--19--86-k8s-coredns--7db6d8ff4d--v7d9h-eth0" May 14 00:01:04.614908 containerd[1922]: 2025-05-14 00:01:04.527 [INFO][4834] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8026da67695 ContainerID="353b04f363a5ce060c5a2ec8b54837a66012e6dfa1fabebc33c826f0841d2e79" Namespace="kube-system" Pod="coredns-7db6d8ff4d-v7d9h" WorkloadEndpoint="ip--172--31--19--86-k8s-coredns--7db6d8ff4d--v7d9h-eth0" May 14 00:01:04.614908 containerd[1922]: 2025-05-14 00:01:04.555 [INFO][4834] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="353b04f363a5ce060c5a2ec8b54837a66012e6dfa1fabebc33c826f0841d2e79" Namespace="kube-system" Pod="coredns-7db6d8ff4d-v7d9h" 
WorkloadEndpoint="ip--172--31--19--86-k8s-coredns--7db6d8ff4d--v7d9h-eth0" May 14 00:01:04.614908 containerd[1922]: 2025-05-14 00:01:04.557 [INFO][4834] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="353b04f363a5ce060c5a2ec8b54837a66012e6dfa1fabebc33c826f0841d2e79" Namespace="kube-system" Pod="coredns-7db6d8ff4d-v7d9h" WorkloadEndpoint="ip--172--31--19--86-k8s-coredns--7db6d8ff4d--v7d9h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--86-k8s-coredns--7db6d8ff4d--v7d9h-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3b07b1c0-188b-40ea-92e9-aa86b8abc988", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 0, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-86", ContainerID:"353b04f363a5ce060c5a2ec8b54837a66012e6dfa1fabebc33c826f0841d2e79", Pod:"coredns-7db6d8ff4d-v7d9h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8026da67695", MAC:"22:7d:1d:66:76:b6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:01:04.617999 containerd[1922]: 2025-05-14 00:01:04.603 [INFO][4834] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="353b04f363a5ce060c5a2ec8b54837a66012e6dfa1fabebc33c826f0841d2e79" Namespace="kube-system" Pod="coredns-7db6d8ff4d-v7d9h" WorkloadEndpoint="ip--172--31--19--86-k8s-coredns--7db6d8ff4d--v7d9h-eth0" May 14 00:01:04.669578 containerd[1922]: time="2025-05-14T00:01:04.668940427Z" level=info msg="connecting to shim 353b04f363a5ce060c5a2ec8b54837a66012e6dfa1fabebc33c826f0841d2e79" address="unix:///run/containerd/s/09b0e3cc7dc1f129bdb0a64f501cb1bb239cc681b77b7e47afb8884e4795eabd" namespace=k8s.io protocol=ttrpc version=3 May 14 00:01:04.726829 systemd[1]: Started cri-containerd-353b04f363a5ce060c5a2ec8b54837a66012e6dfa1fabebc33c826f0841d2e79.scope - libcontainer container 353b04f363a5ce060c5a2ec8b54837a66012e6dfa1fabebc33c826f0841d2e79. 
May 14 00:01:04.833682 containerd[1922]: time="2025-05-14T00:01:04.833073289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-v7d9h,Uid:3b07b1c0-188b-40ea-92e9-aa86b8abc988,Namespace:kube-system,Attempt:0,} returns sandbox id \"353b04f363a5ce060c5a2ec8b54837a66012e6dfa1fabebc33c826f0841d2e79\"" May 14 00:01:04.928124 containerd[1922]: time="2025-05-14T00:01:04.928071551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kcf7w,Uid:2a30ea9b-c110-4f18-9efa-bf49f1486efc,Namespace:calico-system,Attempt:0,} returns sandbox id \"56a95712796fb5a7b959d0f52844f09e0e89c27a8ec2236e77c7cf87b1bc5bcd\"" May 14 00:01:04.932011 containerd[1922]: time="2025-05-14T00:01:04.931916030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-598998cb9f-blz4x,Uid:647f705a-6d3d-4056-a3ed-afedaf4ccff2,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3f030f987dc8f3e52d0117875f7247ef80393bc6dc6dd9fc2db9ff1227ab1ee8\"" May 14 00:01:04.964402 containerd[1922]: time="2025-05-14T00:01:04.963548798Z" level=info msg="CreateContainer within sandbox \"353b04f363a5ce060c5a2ec8b54837a66012e6dfa1fabebc33c826f0841d2e79\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 00:01:04.986918 containerd[1922]: time="2025-05-14T00:01:04.986281399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-598998cb9f-9vhsk,Uid:428635f4-931f-4c1e-93c7-3e4ebe5febeb,Namespace:calico-apiserver,Attempt:0,}" May 14 00:01:04.997995 containerd[1922]: time="2025-05-14T00:01:04.997947635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 14 00:01:05.097760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3108622985.mount: Deactivated successfully. 
May 14 00:01:05.122441 containerd[1922]: time="2025-05-14T00:01:05.120333176Z" level=info msg="Container e699dc331bd11528d90cfbf60d3b416434ee4702b3b7a63cb1d5da4a47e442d7: CDI devices from CRI Config.CDIDevices: []" May 14 00:01:05.149357 containerd[1922]: time="2025-05-14T00:01:05.149071707Z" level=info msg="CreateContainer within sandbox \"353b04f363a5ce060c5a2ec8b54837a66012e6dfa1fabebc33c826f0841d2e79\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e699dc331bd11528d90cfbf60d3b416434ee4702b3b7a63cb1d5da4a47e442d7\"" May 14 00:01:05.153194 containerd[1922]: time="2025-05-14T00:01:05.151584813Z" level=info msg="StartContainer for \"e699dc331bd11528d90cfbf60d3b416434ee4702b3b7a63cb1d5da4a47e442d7\"" May 14 00:01:05.156979 containerd[1922]: time="2025-05-14T00:01:05.156931736Z" level=info msg="connecting to shim e699dc331bd11528d90cfbf60d3b416434ee4702b3b7a63cb1d5da4a47e442d7" address="unix:///run/containerd/s/09b0e3cc7dc1f129bdb0a64f501cb1bb239cc681b77b7e47afb8884e4795eabd" protocol=ttrpc version=3 May 14 00:01:05.200860 systemd[1]: Started cri-containerd-e699dc331bd11528d90cfbf60d3b416434ee4702b3b7a63cb1d5da4a47e442d7.scope - libcontainer container e699dc331bd11528d90cfbf60d3b416434ee4702b3b7a63cb1d5da4a47e442d7. 
May 14 00:01:05.358619 containerd[1922]: time="2025-05-14T00:01:05.358308617Z" level=info msg="StartContainer for \"e699dc331bd11528d90cfbf60d3b416434ee4702b3b7a63cb1d5da4a47e442d7\" returns successfully" May 14 00:01:05.450579 systemd-networkd[1826]: cali0f46649e442: Link UP May 14 00:01:05.450875 systemd-networkd[1826]: cali0f46649e442: Gained carrier May 14 00:01:05.475893 containerd[1922]: 2025-05-14 00:01:05.190 [INFO][5039] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--86-k8s-calico--apiserver--598998cb9f--9vhsk-eth0 calico-apiserver-598998cb9f- calico-apiserver 428635f4-931f-4c1e-93c7-3e4ebe5febeb 724 0 2025-05-14 00:00:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:598998cb9f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-86 calico-apiserver-598998cb9f-9vhsk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0f46649e442 [] []}} ContainerID="a629d24ca1694a713b894f276db24186aac1222c7c6053395c4a9597511a6503" Namespace="calico-apiserver" Pod="calico-apiserver-598998cb9f-9vhsk" WorkloadEndpoint="ip--172--31--19--86-k8s-calico--apiserver--598998cb9f--9vhsk-" May 14 00:01:05.475893 containerd[1922]: 2025-05-14 00:01:05.191 [INFO][5039] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a629d24ca1694a713b894f276db24186aac1222c7c6053395c4a9597511a6503" Namespace="calico-apiserver" Pod="calico-apiserver-598998cb9f-9vhsk" WorkloadEndpoint="ip--172--31--19--86-k8s-calico--apiserver--598998cb9f--9vhsk-eth0" May 14 00:01:05.475893 containerd[1922]: 2025-05-14 00:01:05.339 [INFO][5072] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a629d24ca1694a713b894f276db24186aac1222c7c6053395c4a9597511a6503" 
HandleID="k8s-pod-network.a629d24ca1694a713b894f276db24186aac1222c7c6053395c4a9597511a6503" Workload="ip--172--31--19--86-k8s-calico--apiserver--598998cb9f--9vhsk-eth0" May 14 00:01:05.475893 containerd[1922]: 2025-05-14 00:01:05.371 [INFO][5072] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a629d24ca1694a713b894f276db24186aac1222c7c6053395c4a9597511a6503" HandleID="k8s-pod-network.a629d24ca1694a713b894f276db24186aac1222c7c6053395c4a9597511a6503" Workload="ip--172--31--19--86-k8s-calico--apiserver--598998cb9f--9vhsk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332b90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-19-86", "pod":"calico-apiserver-598998cb9f-9vhsk", "timestamp":"2025-05-14 00:01:05.339873707 +0000 UTC"}, Hostname:"ip-172-31-19-86", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 00:01:05.475893 containerd[1922]: 2025-05-14 00:01:05.371 [INFO][5072] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 00:01:05.475893 containerd[1922]: 2025-05-14 00:01:05.371 [INFO][5072] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 00:01:05.475893 containerd[1922]: 2025-05-14 00:01:05.371 [INFO][5072] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-86' May 14 00:01:05.475893 containerd[1922]: 2025-05-14 00:01:05.374 [INFO][5072] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a629d24ca1694a713b894f276db24186aac1222c7c6053395c4a9597511a6503" host="ip-172-31-19-86" May 14 00:01:05.475893 containerd[1922]: 2025-05-14 00:01:05.381 [INFO][5072] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-86" May 14 00:01:05.475893 containerd[1922]: 2025-05-14 00:01:05.392 [INFO][5072] ipam/ipam.go 489: Trying affinity for 192.168.21.0/26 host="ip-172-31-19-86" May 14 00:01:05.475893 containerd[1922]: 2025-05-14 00:01:05.397 [INFO][5072] ipam/ipam.go 155: Attempting to load block cidr=192.168.21.0/26 host="ip-172-31-19-86" May 14 00:01:05.475893 containerd[1922]: 2025-05-14 00:01:05.410 [INFO][5072] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.21.0/26 host="ip-172-31-19-86" May 14 00:01:05.475893 containerd[1922]: 2025-05-14 00:01:05.410 [INFO][5072] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.21.0/26 handle="k8s-pod-network.a629d24ca1694a713b894f276db24186aac1222c7c6053395c4a9597511a6503" host="ip-172-31-19-86" May 14 00:01:05.475893 containerd[1922]: 2025-05-14 00:01:05.413 [INFO][5072] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a629d24ca1694a713b894f276db24186aac1222c7c6053395c4a9597511a6503 May 14 00:01:05.475893 containerd[1922]: 2025-05-14 00:01:05.419 [INFO][5072] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.21.0/26 handle="k8s-pod-network.a629d24ca1694a713b894f276db24186aac1222c7c6053395c4a9597511a6503" host="ip-172-31-19-86" May 14 00:01:05.475893 containerd[1922]: 2025-05-14 00:01:05.440 [INFO][5072] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.21.4/26] block=192.168.21.0/26 
handle="k8s-pod-network.a629d24ca1694a713b894f276db24186aac1222c7c6053395c4a9597511a6503" host="ip-172-31-19-86" May 14 00:01:05.475893 containerd[1922]: 2025-05-14 00:01:05.440 [INFO][5072] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.21.4/26] handle="k8s-pod-network.a629d24ca1694a713b894f276db24186aac1222c7c6053395c4a9597511a6503" host="ip-172-31-19-86" May 14 00:01:05.475893 containerd[1922]: 2025-05-14 00:01:05.440 [INFO][5072] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 00:01:05.475893 containerd[1922]: 2025-05-14 00:01:05.441 [INFO][5072] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.4/26] IPv6=[] ContainerID="a629d24ca1694a713b894f276db24186aac1222c7c6053395c4a9597511a6503" HandleID="k8s-pod-network.a629d24ca1694a713b894f276db24186aac1222c7c6053395c4a9597511a6503" Workload="ip--172--31--19--86-k8s-calico--apiserver--598998cb9f--9vhsk-eth0" May 14 00:01:05.480142 containerd[1922]: 2025-05-14 00:01:05.444 [INFO][5039] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a629d24ca1694a713b894f276db24186aac1222c7c6053395c4a9597511a6503" Namespace="calico-apiserver" Pod="calico-apiserver-598998cb9f-9vhsk" WorkloadEndpoint="ip--172--31--19--86-k8s-calico--apiserver--598998cb9f--9vhsk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--86-k8s-calico--apiserver--598998cb9f--9vhsk-eth0", GenerateName:"calico-apiserver-598998cb9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"428635f4-931f-4c1e-93c7-3e4ebe5febeb", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 0, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"598998cb9f", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-86", ContainerID:"", Pod:"calico-apiserver-598998cb9f-9vhsk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f46649e442", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:01:05.480142 containerd[1922]: 2025-05-14 00:01:05.445 [INFO][5039] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.21.4/32] ContainerID="a629d24ca1694a713b894f276db24186aac1222c7c6053395c4a9597511a6503" Namespace="calico-apiserver" Pod="calico-apiserver-598998cb9f-9vhsk" WorkloadEndpoint="ip--172--31--19--86-k8s-calico--apiserver--598998cb9f--9vhsk-eth0" May 14 00:01:05.480142 containerd[1922]: 2025-05-14 00:01:05.445 [INFO][5039] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0f46649e442 ContainerID="a629d24ca1694a713b894f276db24186aac1222c7c6053395c4a9597511a6503" Namespace="calico-apiserver" Pod="calico-apiserver-598998cb9f-9vhsk" WorkloadEndpoint="ip--172--31--19--86-k8s-calico--apiserver--598998cb9f--9vhsk-eth0" May 14 00:01:05.480142 containerd[1922]: 2025-05-14 00:01:05.450 [INFO][5039] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a629d24ca1694a713b894f276db24186aac1222c7c6053395c4a9597511a6503" Namespace="calico-apiserver" Pod="calico-apiserver-598998cb9f-9vhsk" WorkloadEndpoint="ip--172--31--19--86-k8s-calico--apiserver--598998cb9f--9vhsk-eth0" May 14 00:01:05.480142 containerd[1922]: 2025-05-14 00:01:05.451 [INFO][5039] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="a629d24ca1694a713b894f276db24186aac1222c7c6053395c4a9597511a6503" Namespace="calico-apiserver" Pod="calico-apiserver-598998cb9f-9vhsk" WorkloadEndpoint="ip--172--31--19--86-k8s-calico--apiserver--598998cb9f--9vhsk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--86-k8s-calico--apiserver--598998cb9f--9vhsk-eth0", GenerateName:"calico-apiserver-598998cb9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"428635f4-931f-4c1e-93c7-3e4ebe5febeb", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 0, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"598998cb9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-86", ContainerID:"a629d24ca1694a713b894f276db24186aac1222c7c6053395c4a9597511a6503", Pod:"calico-apiserver-598998cb9f-9vhsk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f46649e442", MAC:"52:a3:16:34:ac:de", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:01:05.480142 containerd[1922]: 2025-05-14 00:01:05.467 [INFO][5039] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="a629d24ca1694a713b894f276db24186aac1222c7c6053395c4a9597511a6503" Namespace="calico-apiserver" Pod="calico-apiserver-598998cb9f-9vhsk" WorkloadEndpoint="ip--172--31--19--86-k8s-calico--apiserver--598998cb9f--9vhsk-eth0" May 14 00:01:05.528921 containerd[1922]: time="2025-05-14T00:01:05.528871700Z" level=info msg="connecting to shim a629d24ca1694a713b894f276db24186aac1222c7c6053395c4a9597511a6503" address="unix:///run/containerd/s/3ba5c846f9eba605cd3f6a1b85df846b8ef00125614a4442f73154e88e06a22a" namespace=k8s.io protocol=ttrpc version=3 May 14 00:01:05.563067 systemd[1]: Started cri-containerd-a629d24ca1694a713b894f276db24186aac1222c7c6053395c4a9597511a6503.scope - libcontainer container a629d24ca1694a713b894f276db24186aac1222c7c6053395c4a9597511a6503. May 14 00:01:05.574983 systemd-networkd[1826]: cali5a93202a97a: Gained IPv6LL May 14 00:01:05.637225 containerd[1922]: time="2025-05-14T00:01:05.637167259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-598998cb9f-9vhsk,Uid:428635f4-931f-4c1e-93c7-3e4ebe5febeb,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a629d24ca1694a713b894f276db24186aac1222c7c6053395c4a9597511a6503\"" May 14 00:01:05.926708 containerd[1922]: time="2025-05-14T00:01:05.926303975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f6c7cb986-p92jl,Uid:71157674-7fe2-4e8f-bb49-022129d62a20,Namespace:calico-system,Attempt:0,}" May 14 00:01:05.927030 containerd[1922]: time="2025-05-14T00:01:05.926873417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bh8zd,Uid:da0fc96c-f65b-4c3d-8a2c-ddb48b838498,Namespace:kube-system,Attempt:0,}" May 14 00:01:05.959881 systemd-networkd[1826]: calie4c24dc2e91: Gained IPv6LL May 14 00:01:06.151355 systemd-networkd[1826]: cali8026da67695: Gained IPv6LL May 14 00:01:06.193685 systemd-networkd[1826]: calicebd614d33e: Link UP May 14 00:01:06.196424 systemd-networkd[1826]: calicebd614d33e: Gained carrier 
May 14 00:01:06.233936 containerd[1922]: 2025-05-14 00:01:06.020 [INFO][5149] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--86-k8s-calico--kube--controllers--7f6c7cb986--p92jl-eth0 calico-kube-controllers-7f6c7cb986- calico-system 71157674-7fe2-4e8f-bb49-022129d62a20 721 0 2025-05-14 00:00:40 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7f6c7cb986 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-19-86 calico-kube-controllers-7f6c7cb986-p92jl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicebd614d33e [] []}} ContainerID="b21d47a64ba4bd0174baa9cacae6efc68291665fb6f23804e7be990f4a3884ae" Namespace="calico-system" Pod="calico-kube-controllers-7f6c7cb986-p92jl" WorkloadEndpoint="ip--172--31--19--86-k8s-calico--kube--controllers--7f6c7cb986--p92jl-" May 14 00:01:06.233936 containerd[1922]: 2025-05-14 00:01:06.020 [INFO][5149] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b21d47a64ba4bd0174baa9cacae6efc68291665fb6f23804e7be990f4a3884ae" Namespace="calico-system" Pod="calico-kube-controllers-7f6c7cb986-p92jl" WorkloadEndpoint="ip--172--31--19--86-k8s-calico--kube--controllers--7f6c7cb986--p92jl-eth0" May 14 00:01:06.233936 containerd[1922]: 2025-05-14 00:01:06.101 [INFO][5172] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b21d47a64ba4bd0174baa9cacae6efc68291665fb6f23804e7be990f4a3884ae" HandleID="k8s-pod-network.b21d47a64ba4bd0174baa9cacae6efc68291665fb6f23804e7be990f4a3884ae" Workload="ip--172--31--19--86-k8s-calico--kube--controllers--7f6c7cb986--p92jl-eth0" May 14 00:01:06.233936 containerd[1922]: 2025-05-14 00:01:06.119 [INFO][5172] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="b21d47a64ba4bd0174baa9cacae6efc68291665fb6f23804e7be990f4a3884ae" HandleID="k8s-pod-network.b21d47a64ba4bd0174baa9cacae6efc68291665fb6f23804e7be990f4a3884ae" Workload="ip--172--31--19--86-k8s-calico--kube--controllers--7f6c7cb986--p92jl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031d710), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-86", "pod":"calico-kube-controllers-7f6c7cb986-p92jl", "timestamp":"2025-05-14 00:01:06.101732703 +0000 UTC"}, Hostname:"ip-172-31-19-86", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 00:01:06.233936 containerd[1922]: 2025-05-14 00:01:06.119 [INFO][5172] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 00:01:06.233936 containerd[1922]: 2025-05-14 00:01:06.119 [INFO][5172] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 00:01:06.233936 containerd[1922]: 2025-05-14 00:01:06.119 [INFO][5172] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-86' May 14 00:01:06.233936 containerd[1922]: 2025-05-14 00:01:06.126 [INFO][5172] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b21d47a64ba4bd0174baa9cacae6efc68291665fb6f23804e7be990f4a3884ae" host="ip-172-31-19-86" May 14 00:01:06.233936 containerd[1922]: 2025-05-14 00:01:06.133 [INFO][5172] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-86" May 14 00:01:06.233936 containerd[1922]: 2025-05-14 00:01:06.142 [INFO][5172] ipam/ipam.go 489: Trying affinity for 192.168.21.0/26 host="ip-172-31-19-86" May 14 00:01:06.233936 containerd[1922]: 2025-05-14 00:01:06.145 [INFO][5172] ipam/ipam.go 155: Attempting to load block cidr=192.168.21.0/26 host="ip-172-31-19-86" May 14 00:01:06.233936 containerd[1922]: 2025-05-14 00:01:06.149 [INFO][5172] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.21.0/26 host="ip-172-31-19-86" May 14 00:01:06.233936 containerd[1922]: 2025-05-14 00:01:06.149 [INFO][5172] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.21.0/26 handle="k8s-pod-network.b21d47a64ba4bd0174baa9cacae6efc68291665fb6f23804e7be990f4a3884ae" host="ip-172-31-19-86" May 14 00:01:06.233936 containerd[1922]: 2025-05-14 00:01:06.154 [INFO][5172] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b21d47a64ba4bd0174baa9cacae6efc68291665fb6f23804e7be990f4a3884ae May 14 00:01:06.233936 containerd[1922]: 2025-05-14 00:01:06.161 [INFO][5172] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.21.0/26 handle="k8s-pod-network.b21d47a64ba4bd0174baa9cacae6efc68291665fb6f23804e7be990f4a3884ae" host="ip-172-31-19-86" May 14 00:01:06.233936 containerd[1922]: 2025-05-14 00:01:06.174 [INFO][5172] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.21.5/26] block=192.168.21.0/26 
handle="k8s-pod-network.b21d47a64ba4bd0174baa9cacae6efc68291665fb6f23804e7be990f4a3884ae" host="ip-172-31-19-86" May 14 00:01:06.233936 containerd[1922]: 2025-05-14 00:01:06.174 [INFO][5172] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.21.5/26] handle="k8s-pod-network.b21d47a64ba4bd0174baa9cacae6efc68291665fb6f23804e7be990f4a3884ae" host="ip-172-31-19-86" May 14 00:01:06.233936 containerd[1922]: 2025-05-14 00:01:06.174 [INFO][5172] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 00:01:06.233936 containerd[1922]: 2025-05-14 00:01:06.174 [INFO][5172] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.5/26] IPv6=[] ContainerID="b21d47a64ba4bd0174baa9cacae6efc68291665fb6f23804e7be990f4a3884ae" HandleID="k8s-pod-network.b21d47a64ba4bd0174baa9cacae6efc68291665fb6f23804e7be990f4a3884ae" Workload="ip--172--31--19--86-k8s-calico--kube--controllers--7f6c7cb986--p92jl-eth0" May 14 00:01:06.235447 containerd[1922]: 2025-05-14 00:01:06.182 [INFO][5149] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b21d47a64ba4bd0174baa9cacae6efc68291665fb6f23804e7be990f4a3884ae" Namespace="calico-system" Pod="calico-kube-controllers-7f6c7cb986-p92jl" WorkloadEndpoint="ip--172--31--19--86-k8s-calico--kube--controllers--7f6c7cb986--p92jl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--86-k8s-calico--kube--controllers--7f6c7cb986--p92jl-eth0", GenerateName:"calico-kube-controllers-7f6c7cb986-", Namespace:"calico-system", SelfLink:"", UID:"71157674-7fe2-4e8f-bb49-022129d62a20", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 0, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f6c7cb986", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-86", ContainerID:"", Pod:"calico-kube-controllers-7f6c7cb986-p92jl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.21.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicebd614d33e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:01:06.235447 containerd[1922]: 2025-05-14 00:01:06.183 [INFO][5149] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.21.5/32] ContainerID="b21d47a64ba4bd0174baa9cacae6efc68291665fb6f23804e7be990f4a3884ae" Namespace="calico-system" Pod="calico-kube-controllers-7f6c7cb986-p92jl" WorkloadEndpoint="ip--172--31--19--86-k8s-calico--kube--controllers--7f6c7cb986--p92jl-eth0" May 14 00:01:06.235447 containerd[1922]: 2025-05-14 00:01:06.184 [INFO][5149] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicebd614d33e ContainerID="b21d47a64ba4bd0174baa9cacae6efc68291665fb6f23804e7be990f4a3884ae" Namespace="calico-system" Pod="calico-kube-controllers-7f6c7cb986-p92jl" WorkloadEndpoint="ip--172--31--19--86-k8s-calico--kube--controllers--7f6c7cb986--p92jl-eth0" May 14 00:01:06.235447 containerd[1922]: 2025-05-14 00:01:06.198 [INFO][5149] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b21d47a64ba4bd0174baa9cacae6efc68291665fb6f23804e7be990f4a3884ae" Namespace="calico-system" Pod="calico-kube-controllers-7f6c7cb986-p92jl" WorkloadEndpoint="ip--172--31--19--86-k8s-calico--kube--controllers--7f6c7cb986--p92jl-eth0" May 14 
00:01:06.235447 containerd[1922]: 2025-05-14 00:01:06.202 [INFO][5149] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b21d47a64ba4bd0174baa9cacae6efc68291665fb6f23804e7be990f4a3884ae" Namespace="calico-system" Pod="calico-kube-controllers-7f6c7cb986-p92jl" WorkloadEndpoint="ip--172--31--19--86-k8s-calico--kube--controllers--7f6c7cb986--p92jl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--86-k8s-calico--kube--controllers--7f6c7cb986--p92jl-eth0", GenerateName:"calico-kube-controllers-7f6c7cb986-", Namespace:"calico-system", SelfLink:"", UID:"71157674-7fe2-4e8f-bb49-022129d62a20", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 0, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f6c7cb986", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-86", ContainerID:"b21d47a64ba4bd0174baa9cacae6efc68291665fb6f23804e7be990f4a3884ae", Pod:"calico-kube-controllers-7f6c7cb986-p92jl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.21.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicebd614d33e", MAC:"46:3c:c1:34:64:fb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:01:06.235792 
containerd[1922]: 2025-05-14 00:01:06.228 [INFO][5149] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b21d47a64ba4bd0174baa9cacae6efc68291665fb6f23804e7be990f4a3884ae" Namespace="calico-system" Pod="calico-kube-controllers-7f6c7cb986-p92jl" WorkloadEndpoint="ip--172--31--19--86-k8s-calico--kube--controllers--7f6c7cb986--p92jl-eth0" May 14 00:01:06.281059 systemd-networkd[1826]: cali67c3b51e9a8: Link UP May 14 00:01:06.281320 systemd-networkd[1826]: cali67c3b51e9a8: Gained carrier May 14 00:01:06.310839 containerd[1922]: 2025-05-14 00:01:06.038 [INFO][5153] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--86-k8s-coredns--7db6d8ff4d--bh8zd-eth0 coredns-7db6d8ff4d- kube-system da0fc96c-f65b-4c3d-8a2c-ddb48b838498 723 0 2025-05-14 00:00:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-19-86 coredns-7db6d8ff4d-bh8zd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali67c3b51e9a8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="39976b9a4c70d254033d94b1be7620c87614a6b960158ef1e935d96f97a09392" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bh8zd" WorkloadEndpoint="ip--172--31--19--86-k8s-coredns--7db6d8ff4d--bh8zd-" May 14 00:01:06.310839 containerd[1922]: 2025-05-14 00:01:06.038 [INFO][5153] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="39976b9a4c70d254033d94b1be7620c87614a6b960158ef1e935d96f97a09392" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bh8zd" WorkloadEndpoint="ip--172--31--19--86-k8s-coredns--7db6d8ff4d--bh8zd-eth0" May 14 00:01:06.310839 containerd[1922]: 2025-05-14 00:01:06.119 [INFO][5177] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="39976b9a4c70d254033d94b1be7620c87614a6b960158ef1e935d96f97a09392" HandleID="k8s-pod-network.39976b9a4c70d254033d94b1be7620c87614a6b960158ef1e935d96f97a09392" Workload="ip--172--31--19--86-k8s-coredns--7db6d8ff4d--bh8zd-eth0" May 14 00:01:06.310839 containerd[1922]: 2025-05-14 00:01:06.132 [INFO][5177] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="39976b9a4c70d254033d94b1be7620c87614a6b960158ef1e935d96f97a09392" HandleID="k8s-pod-network.39976b9a4c70d254033d94b1be7620c87614a6b960158ef1e935d96f97a09392" Workload="ip--172--31--19--86-k8s-coredns--7db6d8ff4d--bh8zd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000311360), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-19-86", "pod":"coredns-7db6d8ff4d-bh8zd", "timestamp":"2025-05-14 00:01:06.119432885 +0000 UTC"}, Hostname:"ip-172-31-19-86", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 00:01:06.310839 containerd[1922]: 2025-05-14 00:01:06.133 [INFO][5177] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 00:01:06.310839 containerd[1922]: 2025-05-14 00:01:06.174 [INFO][5177] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 00:01:06.310839 containerd[1922]: 2025-05-14 00:01:06.175 [INFO][5177] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-86' May 14 00:01:06.310839 containerd[1922]: 2025-05-14 00:01:06.181 [INFO][5177] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.39976b9a4c70d254033d94b1be7620c87614a6b960158ef1e935d96f97a09392" host="ip-172-31-19-86" May 14 00:01:06.310839 containerd[1922]: 2025-05-14 00:01:06.191 [INFO][5177] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-86" May 14 00:01:06.310839 containerd[1922]: 2025-05-14 00:01:06.215 [INFO][5177] ipam/ipam.go 489: Trying affinity for 192.168.21.0/26 host="ip-172-31-19-86" May 14 00:01:06.310839 containerd[1922]: 2025-05-14 00:01:06.218 [INFO][5177] ipam/ipam.go 155: Attempting to load block cidr=192.168.21.0/26 host="ip-172-31-19-86" May 14 00:01:06.310839 containerd[1922]: 2025-05-14 00:01:06.230 [INFO][5177] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.21.0/26 host="ip-172-31-19-86" May 14 00:01:06.310839 containerd[1922]: 2025-05-14 00:01:06.230 [INFO][5177] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.21.0/26 handle="k8s-pod-network.39976b9a4c70d254033d94b1be7620c87614a6b960158ef1e935d96f97a09392" host="ip-172-31-19-86" May 14 00:01:06.310839 containerd[1922]: 2025-05-14 00:01:06.236 [INFO][5177] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.39976b9a4c70d254033d94b1be7620c87614a6b960158ef1e935d96f97a09392 May 14 00:01:06.310839 containerd[1922]: 2025-05-14 00:01:06.249 [INFO][5177] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.21.0/26 handle="k8s-pod-network.39976b9a4c70d254033d94b1be7620c87614a6b960158ef1e935d96f97a09392" host="ip-172-31-19-86" May 14 00:01:06.310839 containerd[1922]: 2025-05-14 00:01:06.265 [INFO][5177] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.21.6/26] block=192.168.21.0/26 
handle="k8s-pod-network.39976b9a4c70d254033d94b1be7620c87614a6b960158ef1e935d96f97a09392" host="ip-172-31-19-86" May 14 00:01:06.310839 containerd[1922]: 2025-05-14 00:01:06.265 [INFO][5177] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.21.6/26] handle="k8s-pod-network.39976b9a4c70d254033d94b1be7620c87614a6b960158ef1e935d96f97a09392" host="ip-172-31-19-86" May 14 00:01:06.310839 containerd[1922]: 2025-05-14 00:01:06.265 [INFO][5177] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 00:01:06.310839 containerd[1922]: 2025-05-14 00:01:06.265 [INFO][5177] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.6/26] IPv6=[] ContainerID="39976b9a4c70d254033d94b1be7620c87614a6b960158ef1e935d96f97a09392" HandleID="k8s-pod-network.39976b9a4c70d254033d94b1be7620c87614a6b960158ef1e935d96f97a09392" Workload="ip--172--31--19--86-k8s-coredns--7db6d8ff4d--bh8zd-eth0" May 14 00:01:06.315549 containerd[1922]: 2025-05-14 00:01:06.275 [INFO][5153] cni-plugin/k8s.go 386: Populated endpoint ContainerID="39976b9a4c70d254033d94b1be7620c87614a6b960158ef1e935d96f97a09392" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bh8zd" WorkloadEndpoint="ip--172--31--19--86-k8s-coredns--7db6d8ff4d--bh8zd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--86-k8s-coredns--7db6d8ff4d--bh8zd-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"da0fc96c-f65b-4c3d-8a2c-ddb48b838498", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 0, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-86", ContainerID:"", Pod:"coredns-7db6d8ff4d-bh8zd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali67c3b51e9a8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:01:06.315549 containerd[1922]: 2025-05-14 00:01:06.275 [INFO][5153] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.21.6/32] ContainerID="39976b9a4c70d254033d94b1be7620c87614a6b960158ef1e935d96f97a09392" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bh8zd" WorkloadEndpoint="ip--172--31--19--86-k8s-coredns--7db6d8ff4d--bh8zd-eth0" May 14 00:01:06.315549 containerd[1922]: 2025-05-14 00:01:06.275 [INFO][5153] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali67c3b51e9a8 ContainerID="39976b9a4c70d254033d94b1be7620c87614a6b960158ef1e935d96f97a09392" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bh8zd" WorkloadEndpoint="ip--172--31--19--86-k8s-coredns--7db6d8ff4d--bh8zd-eth0" May 14 00:01:06.315549 containerd[1922]: 2025-05-14 00:01:06.278 [INFO][5153] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="39976b9a4c70d254033d94b1be7620c87614a6b960158ef1e935d96f97a09392" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bh8zd" 
WorkloadEndpoint="ip--172--31--19--86-k8s-coredns--7db6d8ff4d--bh8zd-eth0" May 14 00:01:06.315549 containerd[1922]: 2025-05-14 00:01:06.278 [INFO][5153] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="39976b9a4c70d254033d94b1be7620c87614a6b960158ef1e935d96f97a09392" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bh8zd" WorkloadEndpoint="ip--172--31--19--86-k8s-coredns--7db6d8ff4d--bh8zd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--86-k8s-coredns--7db6d8ff4d--bh8zd-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"da0fc96c-f65b-4c3d-8a2c-ddb48b838498", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 0, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-86", ContainerID:"39976b9a4c70d254033d94b1be7620c87614a6b960158ef1e935d96f97a09392", Pod:"coredns-7db6d8ff4d-bh8zd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali67c3b51e9a8", MAC:"66:40:35:47:95:b4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:01:06.315887 containerd[1922]: 2025-05-14 00:01:06.299 [INFO][5153] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="39976b9a4c70d254033d94b1be7620c87614a6b960158ef1e935d96f97a09392" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bh8zd" WorkloadEndpoint="ip--172--31--19--86-k8s-coredns--7db6d8ff4d--bh8zd-eth0" May 14 00:01:06.338069 systemd[1]: Started sshd@11-172.31.19.86:22-147.75.109.163:34136.service - OpenSSH per-connection server daemon (147.75.109.163:34136). May 14 00:01:06.391741 containerd[1922]: time="2025-05-14T00:01:06.391702924Z" level=info msg="connecting to shim b21d47a64ba4bd0174baa9cacae6efc68291665fb6f23804e7be990f4a3884ae" address="unix:///run/containerd/s/ce16f64886f38b72940460251c6b32d53ff149283d286cf3096190114de376d9" namespace=k8s.io protocol=ttrpc version=3 May 14 00:01:06.468490 kubelet[3512]: I0514 00:01:06.465738 3512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-v7d9h" podStartSLOduration=33.46569282 podStartE2EDuration="33.46569282s" podCreationTimestamp="2025-05-14 00:00:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:01:06.463815834 +0000 UTC m=+47.712593412" watchObservedRunningTime="2025-05-14 00:01:06.46569282 +0000 UTC m=+47.714470399" May 14 00:01:06.515183 systemd[1]: Started cri-containerd-b21d47a64ba4bd0174baa9cacae6efc68291665fb6f23804e7be990f4a3884ae.scope - libcontainer container b21d47a64ba4bd0174baa9cacae6efc68291665fb6f23804e7be990f4a3884ae. 
May 14 00:01:06.564175 containerd[1922]: time="2025-05-14T00:01:06.564126398Z" level=info msg="connecting to shim 39976b9a4c70d254033d94b1be7620c87614a6b960158ef1e935d96f97a09392" address="unix:///run/containerd/s/147421c8d194a138250b33dd44ee180c8ecd78d3fc6439ab755035f55b944712" namespace=k8s.io protocol=ttrpc version=3 May 14 00:01:06.653322 systemd[1]: Started cri-containerd-39976b9a4c70d254033d94b1be7620c87614a6b960158ef1e935d96f97a09392.scope - libcontainer container 39976b9a4c70d254033d94b1be7620c87614a6b960158ef1e935d96f97a09392. May 14 00:01:06.654170 sshd[5216]: Accepted publickey for core from 147.75.109.163 port 34136 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4 May 14 00:01:06.660527 sshd-session[5216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:06.687551 systemd-logind[1897]: New session 12 of user core. May 14 00:01:06.692304 systemd[1]: Started session-12.scope - Session 12 of User core. May 14 00:01:06.736349 containerd[1922]: time="2025-05-14T00:01:06.736079650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f6c7cb986-p92jl,Uid:71157674-7fe2-4e8f-bb49-022129d62a20,Namespace:calico-system,Attempt:0,} returns sandbox id \"b21d47a64ba4bd0174baa9cacae6efc68291665fb6f23804e7be990f4a3884ae\"" May 14 00:01:06.782436 containerd[1922]: time="2025-05-14T00:01:06.781741580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bh8zd,Uid:da0fc96c-f65b-4c3d-8a2c-ddb48b838498,Namespace:kube-system,Attempt:0,} returns sandbox id \"39976b9a4c70d254033d94b1be7620c87614a6b960158ef1e935d96f97a09392\"" May 14 00:01:06.787185 containerd[1922]: time="2025-05-14T00:01:06.787055586Z" level=info msg="CreateContainer within sandbox \"39976b9a4c70d254033d94b1be7620c87614a6b960158ef1e935d96f97a09392\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 00:01:06.806101 containerd[1922]: time="2025-05-14T00:01:06.806042453Z" level=info 
msg="Container 61f70f46ab2e86304097cde938bccea27e60fcf524f14291684bc48c5dac8593: CDI devices from CRI Config.CDIDevices: []" May 14 00:01:06.825432 containerd[1922]: time="2025-05-14T00:01:06.825240087Z" level=info msg="CreateContainer within sandbox \"39976b9a4c70d254033d94b1be7620c87614a6b960158ef1e935d96f97a09392\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"61f70f46ab2e86304097cde938bccea27e60fcf524f14291684bc48c5dac8593\"" May 14 00:01:06.829614 containerd[1922]: time="2025-05-14T00:01:06.828291142Z" level=info msg="StartContainer for \"61f70f46ab2e86304097cde938bccea27e60fcf524f14291684bc48c5dac8593\"" May 14 00:01:06.833837 containerd[1922]: time="2025-05-14T00:01:06.833738687Z" level=info msg="connecting to shim 61f70f46ab2e86304097cde938bccea27e60fcf524f14291684bc48c5dac8593" address="unix:///run/containerd/s/147421c8d194a138250b33dd44ee180c8ecd78d3fc6439ab755035f55b944712" protocol=ttrpc version=3 May 14 00:01:06.884042 systemd[1]: Started cri-containerd-61f70f46ab2e86304097cde938bccea27e60fcf524f14291684bc48c5dac8593.scope - libcontainer container 61f70f46ab2e86304097cde938bccea27e60fcf524f14291684bc48c5dac8593. 
May 14 00:01:07.005911 containerd[1922]: time="2025-05-14T00:01:07.005647899Z" level=info msg="StartContainer for \"61f70f46ab2e86304097cde938bccea27e60fcf524f14291684bc48c5dac8593\" returns successfully" May 14 00:01:07.010958 containerd[1922]: time="2025-05-14T00:01:07.010815904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:07.015353 containerd[1922]: time="2025-05-14T00:01:07.014610293Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 14 00:01:07.018287 containerd[1922]: time="2025-05-14T00:01:07.018013651Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:07.029059 containerd[1922]: time="2025-05-14T00:01:07.027871578Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 2.02986593s" May 14 00:01:07.029059 containerd[1922]: time="2025-05-14T00:01:07.027920345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 14 00:01:07.038593 containerd[1922]: time="2025-05-14T00:01:07.036947136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 14 00:01:07.043909 containerd[1922]: time="2025-05-14T00:01:07.040505590Z" level=info msg="CreateContainer within sandbox \"56a95712796fb5a7b959d0f52844f09e0e89c27a8ec2236e77c7cf87b1bc5bcd\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 14 00:01:07.043909 containerd[1922]: 
time="2025-05-14T00:01:07.043233812Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:07.087757 containerd[1922]: time="2025-05-14T00:01:07.087712746Z" level=info msg="Container 1f145dfa5d5488e783cbaec1b737033b20cab8a2e89595f75e31b23d31f55b0e: CDI devices from CRI Config.CDIDevices: []" May 14 00:01:07.108887 containerd[1922]: time="2025-05-14T00:01:07.108832620Z" level=info msg="CreateContainer within sandbox \"56a95712796fb5a7b959d0f52844f09e0e89c27a8ec2236e77c7cf87b1bc5bcd\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1f145dfa5d5488e783cbaec1b737033b20cab8a2e89595f75e31b23d31f55b0e\"" May 14 00:01:07.112566 containerd[1922]: time="2025-05-14T00:01:07.112524042Z" level=info msg="StartContainer for \"1f145dfa5d5488e783cbaec1b737033b20cab8a2e89595f75e31b23d31f55b0e\"" May 14 00:01:07.115557 containerd[1922]: time="2025-05-14T00:01:07.114950482Z" level=info msg="connecting to shim 1f145dfa5d5488e783cbaec1b737033b20cab8a2e89595f75e31b23d31f55b0e" address="unix:///run/containerd/s/aa95021e27b40f412216a0f3c6499ae918eac2e0ea82bef706b5ae7f65af4376" protocol=ttrpc version=3 May 14 00:01:07.166068 systemd[1]: Started cri-containerd-1f145dfa5d5488e783cbaec1b737033b20cab8a2e89595f75e31b23d31f55b0e.scope - libcontainer container 1f145dfa5d5488e783cbaec1b737033b20cab8a2e89595f75e31b23d31f55b0e. 
May 14 00:01:07.243077 containerd[1922]: time="2025-05-14T00:01:07.243033014Z" level=info msg="StartContainer for \"1f145dfa5d5488e783cbaec1b737033b20cab8a2e89595f75e31b23d31f55b0e\" returns successfully" May 14 00:01:07.303969 systemd-networkd[1826]: cali0f46649e442: Gained IPv6LL May 14 00:01:07.382678 sshd[5308]: Connection closed by 147.75.109.163 port 34136 May 14 00:01:07.384362 sshd-session[5216]: pam_unix(sshd:session): session closed for user core May 14 00:01:07.388706 systemd[1]: sshd@11-172.31.19.86:22-147.75.109.163:34136.service: Deactivated successfully. May 14 00:01:07.391496 systemd[1]: session-12.scope: Deactivated successfully. May 14 00:01:07.393715 systemd-logind[1897]: Session 12 logged out. Waiting for processes to exit. May 14 00:01:07.395615 systemd-logind[1897]: Removed session 12. May 14 00:01:07.415457 systemd[1]: Started sshd@12-172.31.19.86:22-147.75.109.163:34138.service - OpenSSH per-connection server daemon (147.75.109.163:34138). May 14 00:01:07.479626 kubelet[3512]: I0514 00:01:07.477799 3512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-bh8zd" podStartSLOduration=34.477761622 podStartE2EDuration="34.477761622s" podCreationTimestamp="2025-05-14 00:00:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:01:07.449907544 +0000 UTC m=+48.698685127" watchObservedRunningTime="2025-05-14 00:01:07.477761622 +0000 UTC m=+48.726539199" May 14 00:01:07.610956 sshd[5392]: Accepted publickey for core from 147.75.109.163 port 34138 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4 May 14 00:01:07.615001 sshd-session[5392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:07.621966 systemd-logind[1897]: New session 13 of user core. May 14 00:01:07.626149 systemd[1]: Started session-13.scope - Session 13 of User core. 
May 14 00:01:07.918917 sshd[5400]: Connection closed by 147.75.109.163 port 34138 May 14 00:01:07.919289 sshd-session[5392]: pam_unix(sshd:session): session closed for user core May 14 00:01:07.926390 systemd[1]: sshd@12-172.31.19.86:22-147.75.109.163:34138.service: Deactivated successfully. May 14 00:01:07.930100 systemd[1]: session-13.scope: Deactivated successfully. May 14 00:01:07.936993 systemd-logind[1897]: Session 13 logged out. Waiting for processes to exit. May 14 00:01:07.953662 systemd[1]: Started sshd@13-172.31.19.86:22-147.75.109.163:34146.service - OpenSSH per-connection server daemon (147.75.109.163:34146). May 14 00:01:07.956909 systemd-logind[1897]: Removed session 13. May 14 00:01:08.070964 systemd-networkd[1826]: calicebd614d33e: Gained IPv6LL May 14 00:01:08.162071 sshd[5415]: Accepted publickey for core from 147.75.109.163 port 34146 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4 May 14 00:01:08.163029 sshd-session[5415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:08.169913 systemd-logind[1897]: New session 14 of user core. May 14 00:01:08.174609 systemd[1]: Started session-14.scope - Session 14 of User core. May 14 00:01:08.329064 systemd-networkd[1826]: cali67c3b51e9a8: Gained IPv6LL May 14 00:01:08.517124 sshd[5418]: Connection closed by 147.75.109.163 port 34146 May 14 00:01:08.519049 sshd-session[5415]: pam_unix(sshd:session): session closed for user core May 14 00:01:08.528427 systemd[1]: sshd@13-172.31.19.86:22-147.75.109.163:34146.service: Deactivated successfully. May 14 00:01:08.531346 systemd[1]: session-14.scope: Deactivated successfully. May 14 00:01:08.534151 systemd-logind[1897]: Session 14 logged out. Waiting for processes to exit. May 14 00:01:08.536274 systemd-logind[1897]: Removed session 14. 
May 14 00:01:09.591681 containerd[1922]: time="2025-05-14T00:01:09.591515093Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:09.593015 containerd[1922]: time="2025-05-14T00:01:09.592798708Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" May 14 00:01:09.594306 containerd[1922]: time="2025-05-14T00:01:09.594100073Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:09.596582 containerd[1922]: time="2025-05-14T00:01:09.596545680Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:09.597297 containerd[1922]: time="2025-05-14T00:01:09.597269481Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 2.560277051s" May 14 00:01:09.597478 containerd[1922]: time="2025-05-14T00:01:09.597394187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 14 00:01:09.598830 containerd[1922]: time="2025-05-14T00:01:09.598561850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 14 00:01:09.600656 containerd[1922]: time="2025-05-14T00:01:09.600603557Z" level=info msg="CreateContainer within sandbox 
\"3f030f987dc8f3e52d0117875f7247ef80393bc6dc6dd9fc2db9ff1227ab1ee8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 14 00:01:09.612448 containerd[1922]: time="2025-05-14T00:01:09.611959028Z" level=info msg="Container 53793cc7428eae1a14581f4aca8abc6b449f9826543b3c6433a697706db4bf5b: CDI devices from CRI Config.CDIDevices: []" May 14 00:01:09.623959 containerd[1922]: time="2025-05-14T00:01:09.623919452Z" level=info msg="CreateContainer within sandbox \"3f030f987dc8f3e52d0117875f7247ef80393bc6dc6dd9fc2db9ff1227ab1ee8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"53793cc7428eae1a14581f4aca8abc6b449f9826543b3c6433a697706db4bf5b\"" May 14 00:01:09.624706 containerd[1922]: time="2025-05-14T00:01:09.624676201Z" level=info msg="StartContainer for \"53793cc7428eae1a14581f4aca8abc6b449f9826543b3c6433a697706db4bf5b\"" May 14 00:01:09.625783 containerd[1922]: time="2025-05-14T00:01:09.625729086Z" level=info msg="connecting to shim 53793cc7428eae1a14581f4aca8abc6b449f9826543b3c6433a697706db4bf5b" address="unix:///run/containerd/s/6fbf3d3132e4924342082e8beb137b5bf222d5a32c810c5869d2019bb459d6a1" protocol=ttrpc version=3 May 14 00:01:09.651974 systemd[1]: Started cri-containerd-53793cc7428eae1a14581f4aca8abc6b449f9826543b3c6433a697706db4bf5b.scope - libcontainer container 53793cc7428eae1a14581f4aca8abc6b449f9826543b3c6433a697706db4bf5b. 
May 14 00:01:09.762825 containerd[1922]: time="2025-05-14T00:01:09.762679821Z" level=info msg="StartContainer for \"53793cc7428eae1a14581f4aca8abc6b449f9826543b3c6433a697706db4bf5b\" returns successfully" May 14 00:01:09.973909 containerd[1922]: time="2025-05-14T00:01:09.973853393Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:09.975742 containerd[1922]: time="2025-05-14T00:01:09.975674694Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 14 00:01:09.985049 containerd[1922]: time="2025-05-14T00:01:09.984983144Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 386.382269ms" May 14 00:01:09.985049 containerd[1922]: time="2025-05-14T00:01:09.985050653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 14 00:01:09.986588 containerd[1922]: time="2025-05-14T00:01:09.986537215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 14 00:01:09.992397 containerd[1922]: time="2025-05-14T00:01:09.992351849Z" level=info msg="CreateContainer within sandbox \"a629d24ca1694a713b894f276db24186aac1222c7c6053395c4a9597511a6503\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 14 00:01:10.007315 containerd[1922]: time="2025-05-14T00:01:10.007264291Z" level=info msg="Container 5b44405747b9f8fca94e33c31f9ebb666ada0ce84fe2734b83994fd6e5bd0f4a: CDI devices from CRI Config.CDIDevices: []" May 14 00:01:10.027713 
containerd[1922]: time="2025-05-14T00:01:10.027608798Z" level=info msg="CreateContainer within sandbox \"a629d24ca1694a713b894f276db24186aac1222c7c6053395c4a9597511a6503\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5b44405747b9f8fca94e33c31f9ebb666ada0ce84fe2734b83994fd6e5bd0f4a\"" May 14 00:01:10.030071 containerd[1922]: time="2025-05-14T00:01:10.030029137Z" level=info msg="StartContainer for \"5b44405747b9f8fca94e33c31f9ebb666ada0ce84fe2734b83994fd6e5bd0f4a\"" May 14 00:01:10.035309 containerd[1922]: time="2025-05-14T00:01:10.035258877Z" level=info msg="connecting to shim 5b44405747b9f8fca94e33c31f9ebb666ada0ce84fe2734b83994fd6e5bd0f4a" address="unix:///run/containerd/s/3ba5c846f9eba605cd3f6a1b85df846b8ef00125614a4442f73154e88e06a22a" protocol=ttrpc version=3 May 14 00:01:10.070057 systemd[1]: Started cri-containerd-5b44405747b9f8fca94e33c31f9ebb666ada0ce84fe2734b83994fd6e5bd0f4a.scope - libcontainer container 5b44405747b9f8fca94e33c31f9ebb666ada0ce84fe2734b83994fd6e5bd0f4a. 
May 14 00:01:10.164658 containerd[1922]: time="2025-05-14T00:01:10.164240613Z" level=info msg="StartContainer for \"5b44405747b9f8fca94e33c31f9ebb666ada0ce84fe2734b83994fd6e5bd0f4a\" returns successfully" May 14 00:01:10.481535 kubelet[3512]: I0514 00:01:10.481455 3512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-598998cb9f-blz4x" podStartSLOduration=25.87214095 podStartE2EDuration="30.481316536s" podCreationTimestamp="2025-05-14 00:00:40 +0000 UTC" firstStartedPulling="2025-05-14 00:01:04.989246613 +0000 UTC m=+46.238024182" lastFinishedPulling="2025-05-14 00:01:09.598422197 +0000 UTC m=+50.847199768" observedRunningTime="2025-05-14 00:01:10.480444767 +0000 UTC m=+51.729222345" watchObservedRunningTime="2025-05-14 00:01:10.481316536 +0000 UTC m=+51.730094122" May 14 00:01:10.507420 kubelet[3512]: I0514 00:01:10.507091 3512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-598998cb9f-9vhsk" podStartSLOduration=26.160527297 podStartE2EDuration="30.50706027s" podCreationTimestamp="2025-05-14 00:00:40 +0000 UTC" firstStartedPulling="2025-05-14 00:01:05.639664964 +0000 UTC m=+46.888442535" lastFinishedPulling="2025-05-14 00:01:09.98619794 +0000 UTC m=+51.234975508" observedRunningTime="2025-05-14 00:01:10.50689864 +0000 UTC m=+51.755676217" watchObservedRunningTime="2025-05-14 00:01:10.50706027 +0000 UTC m=+51.755837848" May 14 00:01:10.950883 ntpd[1889]: Listen normally on 7 vxlan.calico 192.168.21.0:123 May 14 00:01:10.954870 ntpd[1889]: 14 May 00:01:10 ntpd[1889]: Listen normally on 7 vxlan.calico 192.168.21.0:123 May 14 00:01:10.954870 ntpd[1889]: 14 May 00:01:10 ntpd[1889]: Listen normally on 8 vxlan.calico [fe80::6409:68ff:fe12:4a76%4]:123 May 14 00:01:10.954870 ntpd[1889]: 14 May 00:01:10 ntpd[1889]: Listen normally on 9 cali5a93202a97a [fe80::ecee:eeff:feee:eeee%7]:123 May 14 00:01:10.954870 ntpd[1889]: 14 May 00:01:10 ntpd[1889]: Listen 
normally on 10 calie4c24dc2e91 [fe80::ecee:eeff:feee:eeee%8]:123 May 14 00:01:10.954870 ntpd[1889]: 14 May 00:01:10 ntpd[1889]: Listen normally on 11 cali8026da67695 [fe80::ecee:eeff:feee:eeee%9]:123 May 14 00:01:10.954870 ntpd[1889]: 14 May 00:01:10 ntpd[1889]: Listen normally on 12 cali0f46649e442 [fe80::ecee:eeff:feee:eeee%10]:123 May 14 00:01:10.954870 ntpd[1889]: 14 May 00:01:10 ntpd[1889]: Listen normally on 13 calicebd614d33e [fe80::ecee:eeff:feee:eeee%11]:123 May 14 00:01:10.954870 ntpd[1889]: 14 May 00:01:10 ntpd[1889]: Listen normally on 14 cali67c3b51e9a8 [fe80::ecee:eeff:feee:eeee%12]:123 May 14 00:01:10.950971 ntpd[1889]: Listen normally on 8 vxlan.calico [fe80::6409:68ff:fe12:4a76%4]:123 May 14 00:01:10.951024 ntpd[1889]: Listen normally on 9 cali5a93202a97a [fe80::ecee:eeff:feee:eeee%7]:123 May 14 00:01:10.951060 ntpd[1889]: Listen normally on 10 calie4c24dc2e91 [fe80::ecee:eeff:feee:eeee%8]:123 May 14 00:01:10.951097 ntpd[1889]: Listen normally on 11 cali8026da67695 [fe80::ecee:eeff:feee:eeee%9]:123 May 14 00:01:10.951136 ntpd[1889]: Listen normally on 12 cali0f46649e442 [fe80::ecee:eeff:feee:eeee%10]:123 May 14 00:01:10.951173 ntpd[1889]: Listen normally on 13 calicebd614d33e [fe80::ecee:eeff:feee:eeee%11]:123 May 14 00:01:10.951208 ntpd[1889]: Listen normally on 14 cali67c3b51e9a8 [fe80::ecee:eeff:feee:eeee%12]:123 May 14 00:01:11.456443 kubelet[3512]: I0514 00:01:11.456411 3512 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 00:01:11.459078 kubelet[3512]: I0514 00:01:11.458047 3512 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 00:01:12.844443 containerd[1922]: time="2025-05-14T00:01:12.843567106Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:12.846669 containerd[1922]: time="2025-05-14T00:01:12.846587833Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" May 14 00:01:12.848039 containerd[1922]: time="2025-05-14T00:01:12.847945893Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:12.857951 containerd[1922]: time="2025-05-14T00:01:12.857870127Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:12.858878 containerd[1922]: time="2025-05-14T00:01:12.858837113Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 2.872234151s" May 14 00:01:12.859431 containerd[1922]: time="2025-05-14T00:01:12.858885277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" May 14 00:01:12.860645 containerd[1922]: time="2025-05-14T00:01:12.860619128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 14 00:01:12.909792 containerd[1922]: time="2025-05-14T00:01:12.907474121Z" level=info msg="CreateContainer within sandbox \"b21d47a64ba4bd0174baa9cacae6efc68291665fb6f23804e7be990f4a3884ae\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 14 00:01:12.935971 containerd[1922]: time="2025-05-14T00:01:12.934929134Z" level=info msg="Container b88b4681adc5dd0d12a5eab951a3a60375622e7d2ea21cd50d26b48404554f29: CDI devices from CRI 
Config.CDIDevices: []" May 14 00:01:12.959788 containerd[1922]: time="2025-05-14T00:01:12.958035217Z" level=info msg="CreateContainer within sandbox \"b21d47a64ba4bd0174baa9cacae6efc68291665fb6f23804e7be990f4a3884ae\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b88b4681adc5dd0d12a5eab951a3a60375622e7d2ea21cd50d26b48404554f29\"" May 14 00:01:12.963008 containerd[1922]: time="2025-05-14T00:01:12.962964876Z" level=info msg="StartContainer for \"b88b4681adc5dd0d12a5eab951a3a60375622e7d2ea21cd50d26b48404554f29\"" May 14 00:01:12.970284 containerd[1922]: time="2025-05-14T00:01:12.970232624Z" level=info msg="connecting to shim b88b4681adc5dd0d12a5eab951a3a60375622e7d2ea21cd50d26b48404554f29" address="unix:///run/containerd/s/ce16f64886f38b72940460251c6b32d53ff149283d286cf3096190114de376d9" protocol=ttrpc version=3 May 14 00:01:13.015753 systemd[1]: Started cri-containerd-b88b4681adc5dd0d12a5eab951a3a60375622e7d2ea21cd50d26b48404554f29.scope - libcontainer container b88b4681adc5dd0d12a5eab951a3a60375622e7d2ea21cd50d26b48404554f29. 
May 14 00:01:13.149475 containerd[1922]: time="2025-05-14T00:01:13.149428131Z" level=info msg="StartContainer for \"b88b4681adc5dd0d12a5eab951a3a60375622e7d2ea21cd50d26b48404554f29\" returns successfully" May 14 00:01:13.513532 kubelet[3512]: I0514 00:01:13.513373 3512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7f6c7cb986-p92jl" podStartSLOduration=27.399189432 podStartE2EDuration="33.51334898s" podCreationTimestamp="2025-05-14 00:00:40 +0000 UTC" firstStartedPulling="2025-05-14 00:01:06.745991529 +0000 UTC m=+47.994769088" lastFinishedPulling="2025-05-14 00:01:12.860151077 +0000 UTC m=+54.108928636" observedRunningTime="2025-05-14 00:01:13.512851753 +0000 UTC m=+54.761629334" watchObservedRunningTime="2025-05-14 00:01:13.51334898 +0000 UTC m=+54.762126559" May 14 00:01:13.554795 systemd[1]: Started sshd@14-172.31.19.86:22-147.75.109.163:40256.service - OpenSSH per-connection server daemon (147.75.109.163:40256). May 14 00:01:13.577138 containerd[1922]: time="2025-05-14T00:01:13.577077332Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b88b4681adc5dd0d12a5eab951a3a60375622e7d2ea21cd50d26b48404554f29\" id:\"ae1f1b0c331f3606cdff6ba0fc4798046d9c8bb27bcb5d62425fafcb05e87270\" pid:5575 exited_at:{seconds:1747180873 nanos:574426812}" May 14 00:01:13.806646 sshd[5582]: Accepted publickey for core from 147.75.109.163 port 40256 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4 May 14 00:01:13.809545 sshd-session[5582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:13.817037 systemd-logind[1897]: New session 15 of user core. May 14 00:01:13.822436 systemd[1]: Started session-15.scope - Session 15 of User core. 
May 14 00:01:14.661603 sshd[5587]: Connection closed by 147.75.109.163 port 40256 May 14 00:01:14.673195 sshd-session[5582]: pam_unix(sshd:session): session closed for user core May 14 00:01:14.684742 systemd[1]: sshd@14-172.31.19.86:22-147.75.109.163:40256.service: Deactivated successfully. May 14 00:01:14.688234 systemd[1]: session-15.scope: Deactivated successfully. May 14 00:01:14.690538 systemd-logind[1897]: Session 15 logged out. Waiting for processes to exit. May 14 00:01:14.692240 systemd-logind[1897]: Removed session 15. May 14 00:01:15.463172 containerd[1922]: time="2025-05-14T00:01:15.463112195Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:15.466841 containerd[1922]: time="2025-05-14T00:01:15.466524995Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" May 14 00:01:15.469376 containerd[1922]: time="2025-05-14T00:01:15.469320027Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:15.472743 containerd[1922]: time="2025-05-14T00:01:15.472683741Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:15.473354 containerd[1922]: time="2025-05-14T00:01:15.473323753Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 2.612480228s" May 14 00:01:15.473354 containerd[1922]: time="2025-05-14T00:01:15.473354861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 14 00:01:15.480163 containerd[1922]: time="2025-05-14T00:01:15.480122515Z" level=info msg="CreateContainer within sandbox \"56a95712796fb5a7b959d0f52844f09e0e89c27a8ec2236e77c7cf87b1bc5bcd\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 14 00:01:15.500383 containerd[1922]: time="2025-05-14T00:01:15.499407518Z" level=info msg="Container bbbaaa0f265b5b0ec4319bd266a0ec67128c41d78f175858ac488567a5e9ab70: CDI devices from CRI Config.CDIDevices: []" May 14 00:01:15.535805 containerd[1922]: time="2025-05-14T00:01:15.535718828Z" level=info msg="CreateContainer within sandbox \"56a95712796fb5a7b959d0f52844f09e0e89c27a8ec2236e77c7cf87b1bc5bcd\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"bbbaaa0f265b5b0ec4319bd266a0ec67128c41d78f175858ac488567a5e9ab70\"" May 14 00:01:15.536567 containerd[1922]: time="2025-05-14T00:01:15.536525323Z" level=info msg="StartContainer for \"bbbaaa0f265b5b0ec4319bd266a0ec67128c41d78f175858ac488567a5e9ab70\"" May 14 00:01:15.538089 containerd[1922]: time="2025-05-14T00:01:15.537916071Z" level=info msg="connecting to shim bbbaaa0f265b5b0ec4319bd266a0ec67128c41d78f175858ac488567a5e9ab70" address="unix:///run/containerd/s/aa95021e27b40f412216a0f3c6499ae918eac2e0ea82bef706b5ae7f65af4376" protocol=ttrpc version=3 May 14 00:01:15.568271 systemd[1]: Started cri-containerd-bbbaaa0f265b5b0ec4319bd266a0ec67128c41d78f175858ac488567a5e9ab70.scope - libcontainer container bbbaaa0f265b5b0ec4319bd266a0ec67128c41d78f175858ac488567a5e9ab70.
May 14 00:01:15.631514 containerd[1922]: time="2025-05-14T00:01:15.631468788Z" level=info msg="StartContainer for \"bbbaaa0f265b5b0ec4319bd266a0ec67128c41d78f175858ac488567a5e9ab70\" returns successfully" May 14 00:01:16.366121 kubelet[3512]: I0514 00:01:16.366063 3512 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 14 00:01:16.376123 kubelet[3512]: I0514 00:01:16.376076 3512 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 14 00:01:16.515096 kubelet[3512]: I0514 00:01:16.515024 3512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-kcf7w" podStartSLOduration=26.028974445 podStartE2EDuration="36.515000022s" podCreationTimestamp="2025-05-14 00:00:40 +0000 UTC" firstStartedPulling="2025-05-14 00:01:04.988488184 +0000 UTC m=+46.237265749" lastFinishedPulling="2025-05-14 00:01:15.474513767 +0000 UTC m=+56.723291326" observedRunningTime="2025-05-14 00:01:16.514389046 +0000 UTC m=+57.763166626" watchObservedRunningTime="2025-05-14 00:01:16.515000022 +0000 UTC m=+57.763777597" May 14 00:01:19.699319 systemd[1]: Started sshd@15-172.31.19.86:22-147.75.109.163:45522.service - OpenSSH per-connection server daemon (147.75.109.163:45522). May 14 00:01:19.923738 sshd[5639]: Accepted publickey for core from 147.75.109.163 port 45522 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4 May 14 00:01:19.925071 sshd-session[5639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:19.932369 systemd-logind[1897]: New session 16 of user core. May 14 00:01:19.942050 systemd[1]: Started session-16.scope - Session 16 of User core. 
May 14 00:01:20.432016 sshd[5641]: Connection closed by 147.75.109.163 port 45522 May 14 00:01:20.432479 sshd-session[5639]: pam_unix(sshd:session): session closed for user core May 14 00:01:20.450664 systemd-logind[1897]: Session 16 logged out. Waiting for processes to exit. May 14 00:01:20.451113 systemd[1]: sshd@15-172.31.19.86:22-147.75.109.163:45522.service: Deactivated successfully. May 14 00:01:20.453912 systemd[1]: session-16.scope: Deactivated successfully. May 14 00:01:20.455046 systemd-logind[1897]: Removed session 16. May 14 00:01:20.830579 containerd[1922]: time="2025-05-14T00:01:20.830146308Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b88b4681adc5dd0d12a5eab951a3a60375622e7d2ea21cd50d26b48404554f29\" id:\"66a1bafac5e77d0443671fe82ccd8130adc95b7059431f18272ec72ae23b2812\" pid:5664 exited_at:{seconds:1747180880 nanos:829591082}" May 14 00:01:22.093845 kubelet[3512]: I0514 00:01:22.093183 3512 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 00:01:25.460553 systemd[1]: Started sshd@16-172.31.19.86:22-147.75.109.163:45526.service - OpenSSH per-connection server daemon (147.75.109.163:45526). May 14 00:01:25.683442 sshd[5682]: Accepted publickey for core from 147.75.109.163 port 45526 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4 May 14 00:01:25.685493 sshd-session[5682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:25.691601 systemd-logind[1897]: New session 17 of user core. May 14 00:01:25.695949 systemd[1]: Started session-17.scope - Session 17 of User core. May 14 00:01:26.197373 sshd[5684]: Connection closed by 147.75.109.163 port 45526 May 14 00:01:26.198004 sshd-session[5682]: pam_unix(sshd:session): session closed for user core May 14 00:01:26.201113 systemd[1]: sshd@16-172.31.19.86:22-147.75.109.163:45526.service: Deactivated successfully. May 14 00:01:26.203342 systemd[1]: session-17.scope: Deactivated successfully. 
May 14 00:01:26.205512 systemd-logind[1897]: Session 17 logged out. Waiting for processes to exit. May 14 00:01:26.206621 systemd-logind[1897]: Removed session 17. May 14 00:01:31.232938 systemd[1]: Started sshd@17-172.31.19.86:22-147.75.109.163:40596.service - OpenSSH per-connection server daemon (147.75.109.163:40596). May 14 00:01:31.386617 containerd[1922]: time="2025-05-14T00:01:31.386575514Z" level=info msg="TaskExit event in podsandbox handler container_id:\"30ec80bfe2908ffa0af894674dc76ee7ad580fa47466629ab27552c8b8caaa68\" id:\"26a05aab045c7326d7e7af6d9ecc5b58377813d61ae9624e2bcb5bf22721bc5c\" pid:5712 exited_at:{seconds:1747180891 nanos:386192137}" May 14 00:01:31.410031 sshd[5697]: Accepted publickey for core from 147.75.109.163 port 40596 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4 May 14 00:01:31.411420 sshd-session[5697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:31.416852 systemd-logind[1897]: New session 18 of user core. May 14 00:01:31.422127 systemd[1]: Started session-18.scope - Session 18 of User core. May 14 00:01:31.648141 sshd[5724]: Connection closed by 147.75.109.163 port 40596 May 14 00:01:31.648594 sshd-session[5697]: pam_unix(sshd:session): session closed for user core May 14 00:01:31.652363 systemd[1]: sshd@17-172.31.19.86:22-147.75.109.163:40596.service: Deactivated successfully. May 14 00:01:31.655877 systemd[1]: session-18.scope: Deactivated successfully. May 14 00:01:31.657215 systemd-logind[1897]: Session 18 logged out. Waiting for processes to exit. May 14 00:01:31.658529 systemd-logind[1897]: Removed session 18. May 14 00:01:31.678504 systemd[1]: Started sshd@18-172.31.19.86:22-147.75.109.163:40612.service - OpenSSH per-connection server daemon (147.75.109.163:40612). 
May 14 00:01:31.852878 sshd[5736]: Accepted publickey for core from 147.75.109.163 port 40612 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4 May 14 00:01:31.854609 sshd-session[5736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:31.860494 systemd-logind[1897]: New session 19 of user core. May 14 00:01:31.871043 systemd[1]: Started session-19.scope - Session 19 of User core. May 14 00:01:32.550845 sshd[5738]: Connection closed by 147.75.109.163 port 40612 May 14 00:01:32.552253 sshd-session[5736]: pam_unix(sshd:session): session closed for user core May 14 00:01:32.556385 systemd[1]: sshd@18-172.31.19.86:22-147.75.109.163:40612.service: Deactivated successfully. May 14 00:01:32.559493 systemd[1]: session-19.scope: Deactivated successfully. May 14 00:01:32.560554 systemd-logind[1897]: Session 19 logged out. Waiting for processes to exit. May 14 00:01:32.562583 systemd-logind[1897]: Removed session 19. May 14 00:01:32.579394 systemd[1]: Started sshd@19-172.31.19.86:22-147.75.109.163:40626.service - OpenSSH per-connection server daemon (147.75.109.163:40626). May 14 00:01:32.774728 sshd[5748]: Accepted publickey for core from 147.75.109.163 port 40626 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4 May 14 00:01:32.776215 sshd-session[5748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:32.782510 systemd-logind[1897]: New session 20 of user core. May 14 00:01:32.792026 systemd[1]: Started session-20.scope - Session 20 of User core. 
May 14 00:01:33.918548 containerd[1922]: time="2025-05-14T00:01:33.918498388Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b88b4681adc5dd0d12a5eab951a3a60375622e7d2ea21cd50d26b48404554f29\" id:\"56bc4c1c4f93e438f36228be6e14574401ff59a605a650d796c897e758cd7565\" pid:5771 exited_at:{seconds:1747180893 nanos:917917301}" May 14 00:01:35.193064 sshd[5750]: Connection closed by 147.75.109.163 port 40626 May 14 00:01:35.194540 sshd-session[5748]: pam_unix(sshd:session): session closed for user core May 14 00:01:35.203509 systemd[1]: sshd@19-172.31.19.86:22-147.75.109.163:40626.service: Deactivated successfully. May 14 00:01:35.205668 systemd[1]: session-20.scope: Deactivated successfully. May 14 00:01:35.207495 systemd[1]: session-20.scope: Consumed 643ms CPU time, 66.4M memory peak. May 14 00:01:35.210471 systemd-logind[1897]: Session 20 logged out. Waiting for processes to exit. May 14 00:01:35.212779 systemd-logind[1897]: Removed session 20. May 14 00:01:35.230778 systemd[1]: Started sshd@20-172.31.19.86:22-147.75.109.163:40632.service - OpenSSH per-connection server daemon (147.75.109.163:40632). May 14 00:01:35.439827 sshd[5791]: Accepted publickey for core from 147.75.109.163 port 40632 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4 May 14 00:01:35.444235 sshd-session[5791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:35.452417 systemd-logind[1897]: New session 21 of user core. May 14 00:01:35.455946 systemd[1]: Started session-21.scope - Session 21 of User core. May 14 00:01:36.490082 sshd[5793]: Connection closed by 147.75.109.163 port 40632 May 14 00:01:36.491003 sshd-session[5791]: pam_unix(sshd:session): session closed for user core May 14 00:01:36.495242 systemd[1]: sshd@20-172.31.19.86:22-147.75.109.163:40632.service: Deactivated successfully. May 14 00:01:36.499011 systemd[1]: session-21.scope: Deactivated successfully. 
May 14 00:01:36.500747 systemd-logind[1897]: Session 21 logged out. Waiting for processes to exit. May 14 00:01:36.503108 systemd-logind[1897]: Removed session 21. May 14 00:01:36.522507 systemd[1]: Started sshd@21-172.31.19.86:22-147.75.109.163:40640.service - OpenSSH per-connection server daemon (147.75.109.163:40640). May 14 00:01:36.724814 sshd[5803]: Accepted publickey for core from 147.75.109.163 port 40640 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4 May 14 00:01:36.726551 sshd-session[5803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:36.733018 systemd-logind[1897]: New session 22 of user core. May 14 00:01:36.736980 systemd[1]: Started session-22.scope - Session 22 of User core. May 14 00:01:37.000579 sshd[5805]: Connection closed by 147.75.109.163 port 40640 May 14 00:01:37.001036 sshd-session[5803]: pam_unix(sshd:session): session closed for user core May 14 00:01:37.008169 systemd[1]: sshd@21-172.31.19.86:22-147.75.109.163:40640.service: Deactivated successfully. May 14 00:01:37.013567 systemd[1]: session-22.scope: Deactivated successfully. May 14 00:01:37.015015 systemd-logind[1897]: Session 22 logged out. Waiting for processes to exit. May 14 00:01:37.016666 systemd-logind[1897]: Removed session 22. May 14 00:01:42.039243 systemd[1]: Started sshd@22-172.31.19.86:22-147.75.109.163:40574.service - OpenSSH per-connection server daemon (147.75.109.163:40574). May 14 00:01:42.250485 sshd[5822]: Accepted publickey for core from 147.75.109.163 port 40574 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4 May 14 00:01:42.253818 sshd-session[5822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:42.262841 systemd-logind[1897]: New session 23 of user core. May 14 00:01:42.270003 systemd[1]: Started session-23.scope - Session 23 of User core. 
May 14 00:01:42.722908 sshd[5824]: Connection closed by 147.75.109.163 port 40574 May 14 00:01:42.724020 sshd-session[5822]: pam_unix(sshd:session): session closed for user core May 14 00:01:42.728372 systemd[1]: sshd@22-172.31.19.86:22-147.75.109.163:40574.service: Deactivated successfully. May 14 00:01:42.731373 systemd[1]: session-23.scope: Deactivated successfully. May 14 00:01:42.732711 systemd-logind[1897]: Session 23 logged out. Waiting for processes to exit. May 14 00:01:42.733718 systemd-logind[1897]: Removed session 23. May 14 00:01:46.908547 kubelet[3512]: I0514 00:01:46.904631 3512 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 00:01:47.760588 systemd[1]: Started sshd@23-172.31.19.86:22-147.75.109.163:40578.service - OpenSSH per-connection server daemon (147.75.109.163:40578). May 14 00:01:47.953849 sshd[5848]: Accepted publickey for core from 147.75.109.163 port 40578 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4 May 14 00:01:47.955917 sshd-session[5848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:47.980658 systemd-logind[1897]: New session 24 of user core. May 14 00:01:47.983995 systemd[1]: Started session-24.scope - Session 24 of User core. May 14 00:01:48.528241 sshd[5850]: Connection closed by 147.75.109.163 port 40578 May 14 00:01:48.529305 sshd-session[5848]: pam_unix(sshd:session): session closed for user core May 14 00:01:48.533293 systemd[1]: sshd@23-172.31.19.86:22-147.75.109.163:40578.service: Deactivated successfully. May 14 00:01:48.536419 systemd[1]: session-24.scope: Deactivated successfully. May 14 00:01:48.537301 systemd-logind[1897]: Session 24 logged out. Waiting for processes to exit. May 14 00:01:48.538620 systemd-logind[1897]: Removed session 24. 
May 14 00:01:50.947606 containerd[1922]: time="2025-05-14T00:01:50.947513985Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b88b4681adc5dd0d12a5eab951a3a60375622e7d2ea21cd50d26b48404554f29\" id:\"235045a47a9982bdb984be35b4c46ae123bb03b5f18a9f060863714756bfef58\" pid:5876 exited_at:{seconds:1747180910 nanos:947192651}" May 14 00:01:53.565582 systemd[1]: Started sshd@24-172.31.19.86:22-147.75.109.163:50870.service - OpenSSH per-connection server daemon (147.75.109.163:50870). May 14 00:01:53.856344 sshd[5886]: Accepted publickey for core from 147.75.109.163 port 50870 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4 May 14 00:01:53.858442 sshd-session[5886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:53.866785 systemd-logind[1897]: New session 25 of user core. May 14 00:01:53.872147 systemd[1]: Started session-25.scope - Session 25 of User core. May 14 00:01:54.185953 sshd[5888]: Connection closed by 147.75.109.163 port 50870 May 14 00:01:54.187745 sshd-session[5886]: pam_unix(sshd:session): session closed for user core May 14 00:01:54.193318 systemd-logind[1897]: Session 25 logged out. Waiting for processes to exit. May 14 00:01:54.193658 systemd[1]: sshd@24-172.31.19.86:22-147.75.109.163:50870.service: Deactivated successfully. May 14 00:01:54.196513 systemd[1]: session-25.scope: Deactivated successfully. May 14 00:01:54.198053 systemd-logind[1897]: Removed session 25. May 14 00:01:59.220495 systemd[1]: Started sshd@25-172.31.19.86:22-147.75.109.163:40138.service - OpenSSH per-connection server daemon (147.75.109.163:40138). May 14 00:01:59.396714 sshd[5900]: Accepted publickey for core from 147.75.109.163 port 40138 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4 May 14 00:01:59.398625 sshd-session[5900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:59.403856 systemd-logind[1897]: New session 26 of user core. 
May 14 00:01:59.412038 systemd[1]: Started session-26.scope - Session 26 of User core. May 14 00:01:59.620875 sshd[5902]: Connection closed by 147.75.109.163 port 40138 May 14 00:01:59.622015 sshd-session[5900]: pam_unix(sshd:session): session closed for user core May 14 00:01:59.626591 systemd-logind[1897]: Session 26 logged out. Waiting for processes to exit. May 14 00:01:59.627404 systemd[1]: sshd@25-172.31.19.86:22-147.75.109.163:40138.service: Deactivated successfully. May 14 00:01:59.630411 systemd[1]: session-26.scope: Deactivated successfully. May 14 00:01:59.631742 systemd-logind[1897]: Removed session 26. May 14 00:02:01.614394 containerd[1922]: time="2025-05-14T00:02:01.614342992Z" level=info msg="TaskExit event in podsandbox handler container_id:\"30ec80bfe2908ffa0af894674dc76ee7ad580fa47466629ab27552c8b8caaa68\" id:\"1156799d430d6685f6bab6b3b6a15bf7041f778265fb9e8542de35abdf3b8d4b\" pid:5926 exited_at:{seconds:1747180921 nanos:613301483}" May 14 00:02:04.654361 systemd[1]: Started sshd@26-172.31.19.86:22-147.75.109.163:40150.service - OpenSSH per-connection server daemon (147.75.109.163:40150). May 14 00:02:04.872695 sshd[5941]: Accepted publickey for core from 147.75.109.163 port 40150 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4 May 14 00:02:04.905723 sshd-session[5941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:02:04.911151 systemd-logind[1897]: New session 27 of user core. May 14 00:02:04.915989 systemd[1]: Started session-27.scope - Session 27 of User core. May 14 00:02:05.511519 sshd[5945]: Connection closed by 147.75.109.163 port 40150 May 14 00:02:05.512106 sshd-session[5941]: pam_unix(sshd:session): session closed for user core May 14 00:02:05.516873 systemd[1]: sshd@26-172.31.19.86:22-147.75.109.163:40150.service: Deactivated successfully. May 14 00:02:05.520275 systemd[1]: session-27.scope: Deactivated successfully. 
May 14 00:02:05.522096 systemd-logind[1897]: Session 27 logged out. Waiting for processes to exit. May 14 00:02:05.523617 systemd-logind[1897]: Removed session 27. May 14 00:02:10.545505 systemd[1]: Started sshd@27-172.31.19.86:22-147.75.109.163:33296.service - OpenSSH per-connection server daemon (147.75.109.163:33296). May 14 00:02:10.759867 sshd[5958]: Accepted publickey for core from 147.75.109.163 port 33296 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4 May 14 00:02:10.760973 sshd-session[5958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:02:10.766843 systemd-logind[1897]: New session 28 of user core. May 14 00:02:10.772016 systemd[1]: Started session-28.scope - Session 28 of User core. May 14 00:02:11.050979 sshd[5960]: Connection closed by 147.75.109.163 port 33296 May 14 00:02:11.051505 sshd-session[5958]: pam_unix(sshd:session): session closed for user core May 14 00:02:11.057744 systemd[1]: sshd@27-172.31.19.86:22-147.75.109.163:33296.service: Deactivated successfully. May 14 00:02:11.060502 systemd[1]: session-28.scope: Deactivated successfully. May 14 00:02:11.064593 systemd-logind[1897]: Session 28 logged out. Waiting for processes to exit. May 14 00:02:11.067526 systemd-logind[1897]: Removed session 28. May 14 00:02:20.850164 containerd[1922]: time="2025-05-14T00:02:20.824169845Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b88b4681adc5dd0d12a5eab951a3a60375622e7d2ea21cd50d26b48404554f29\" id:\"5b1d5757c17c2963fd3f3ddd008e09b5411dd386f131ffbcb890353878978456\" pid:5986 exited_at:{seconds:1747180940 nanos:823457947}" May 14 00:02:28.084745 systemd[1]: cri-containerd-950e90499f745f80669772bbe917d49d5f58a749b29b171552f950ecca6233f5.scope: Deactivated successfully. May 14 00:02:28.085172 systemd[1]: cri-containerd-950e90499f745f80669772bbe917d49d5f58a749b29b171552f950ecca6233f5.scope: Consumed 3.759s CPU time, 56.5M memory peak, 32.1M read from disk. 
May 14 00:02:28.098705 containerd[1922]: time="2025-05-14T00:02:28.098647172Z" level=info msg="TaskExit event in podsandbox handler container_id:\"950e90499f745f80669772bbe917d49d5f58a749b29b171552f950ecca6233f5\" id:\"950e90499f745f80669772bbe917d49d5f58a749b29b171552f950ecca6233f5\" pid:3854 exit_status:1 exited_at:{seconds:1747180948 nanos:98106105}" May 14 00:02:28.100395 containerd[1922]: time="2025-05-14T00:02:28.100336673Z" level=info msg="received exit event container_id:\"950e90499f745f80669772bbe917d49d5f58a749b29b171552f950ecca6233f5\" id:\"950e90499f745f80669772bbe917d49d5f58a749b29b171552f950ecca6233f5\" pid:3854 exit_status:1 exited_at:{seconds:1747180948 nanos:98106105}" May 14 00:02:28.208553 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-950e90499f745f80669772bbe917d49d5f58a749b29b171552f950ecca6233f5-rootfs.mount: Deactivated successfully. May 14 00:02:28.653720 systemd[1]: cri-containerd-ac9fd17b9d003ef13350c0d8ecf013d32ce4fc7eb3a25389b08a1333face6c22.scope: Deactivated successfully. May 14 00:02:28.654138 systemd[1]: cri-containerd-ac9fd17b9d003ef13350c0d8ecf013d32ce4fc7eb3a25389b08a1333face6c22.scope: Consumed 3.518s CPU time, 85.4M memory peak, 66.4M read from disk. 
May 14 00:02:28.659420 containerd[1922]: time="2025-05-14T00:02:28.658852640Z" level=info msg="received exit event container_id:\"ac9fd17b9d003ef13350c0d8ecf013d32ce4fc7eb3a25389b08a1333face6c22\" id:\"ac9fd17b9d003ef13350c0d8ecf013d32ce4fc7eb3a25389b08a1333face6c22\" pid:3176 exit_status:1 exited_at:{seconds:1747180948 nanos:653450946}" May 14 00:02:28.662703 containerd[1922]: time="2025-05-14T00:02:28.661534257Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ac9fd17b9d003ef13350c0d8ecf013d32ce4fc7eb3a25389b08a1333face6c22\" id:\"ac9fd17b9d003ef13350c0d8ecf013d32ce4fc7eb3a25389b08a1333face6c22\" pid:3176 exit_status:1 exited_at:{seconds:1747180948 nanos:653450946}" May 14 00:02:28.701976 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac9fd17b9d003ef13350c0d8ecf013d32ce4fc7eb3a25389b08a1333face6c22-rootfs.mount: Deactivated successfully. May 14 00:02:29.036151 kubelet[3512]: I0514 00:02:29.035070 3512 scope.go:117] "RemoveContainer" containerID="ac9fd17b9d003ef13350c0d8ecf013d32ce4fc7eb3a25389b08a1333face6c22" May 14 00:02:29.036151 kubelet[3512]: I0514 00:02:29.035379 3512 scope.go:117] "RemoveContainer" containerID="950e90499f745f80669772bbe917d49d5f58a749b29b171552f950ecca6233f5" May 14 00:02:29.061816 containerd[1922]: time="2025-05-14T00:02:29.060674608Z" level=info msg="CreateContainer within sandbox \"7e02b4deaaf9b29a3f9bc0b213cab7a8a6250114a96ffc609e8e76c4c1b0e868\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" May 14 00:02:29.061816 containerd[1922]: time="2025-05-14T00:02:29.061632519Z" level=info msg="CreateContainer within sandbox \"fb0168c2850c01036e0a1f1c5980ad28455f7ab89b069ed929781d52022a23ab\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" May 14 00:02:29.149938 containerd[1922]: time="2025-05-14T00:02:29.149891466Z" level=info msg="Container 45e9cd22e4707333e788e4cdb867400a8037f9242f80ea1fa76f7442edb010bd: CDI devices from CRI Config.CDIDevices: []" May 14 00:02:29.155656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1299618013.mount: Deactivated successfully. May 14 00:02:29.169157 containerd[1922]: time="2025-05-14T00:02:29.168396678Z" level=info msg="Container fcfefcbc3e5604a4d5c18575e34c71938566281fea65123a83fb98459b183576: CDI devices from CRI Config.CDIDevices: []" May 14 00:02:29.207200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2975203366.mount: Deactivated successfully. May 14 00:02:29.229419 containerd[1922]: time="2025-05-14T00:02:29.229371302Z" level=info msg="CreateContainer within sandbox \"fb0168c2850c01036e0a1f1c5980ad28455f7ab89b069ed929781d52022a23ab\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"fcfefcbc3e5604a4d5c18575e34c71938566281fea65123a83fb98459b183576\"" May 14 00:02:29.232753 containerd[1922]: time="2025-05-14T00:02:29.232711361Z" level=info msg="StartContainer for \"fcfefcbc3e5604a4d5c18575e34c71938566281fea65123a83fb98459b183576\"" May 14 00:02:29.245520 containerd[1922]: time="2025-05-14T00:02:29.241503256Z" level=info msg="connecting to shim fcfefcbc3e5604a4d5c18575e34c71938566281fea65123a83fb98459b183576" address="unix:///run/containerd/s/6e08a748bb191fa4973989f46cef1ad9d62c9770777e80ffab0146fa939f2bdf" protocol=ttrpc version=3 May 14 00:02:29.283474 systemd[1]: Started cri-containerd-fcfefcbc3e5604a4d5c18575e34c71938566281fea65123a83fb98459b183576.scope - libcontainer container fcfefcbc3e5604a4d5c18575e34c71938566281fea65123a83fb98459b183576.
May 14 00:02:29.285065 containerd[1922]: time="2025-05-14T00:02:29.285023694Z" level=info msg="CreateContainer within sandbox \"7e02b4deaaf9b29a3f9bc0b213cab7a8a6250114a96ffc609e8e76c4c1b0e868\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"45e9cd22e4707333e788e4cdb867400a8037f9242f80ea1fa76f7442edb010bd\"" May 14 00:02:29.287533 containerd[1922]: time="2025-05-14T00:02:29.286266219Z" level=info msg="StartContainer for \"45e9cd22e4707333e788e4cdb867400a8037f9242f80ea1fa76f7442edb010bd\"" May 14 00:02:29.289046 containerd[1922]: time="2025-05-14T00:02:29.288433893Z" level=info msg="connecting to shim 45e9cd22e4707333e788e4cdb867400a8037f9242f80ea1fa76f7442edb010bd" address="unix:///run/containerd/s/c178f6f79d019699ed338ed44cf0ba8f06a8f4c103edeb865855ca407b62b128" protocol=ttrpc version=3 May 14 00:02:29.347827 systemd[1]: Started cri-containerd-45e9cd22e4707333e788e4cdb867400a8037f9242f80ea1fa76f7442edb010bd.scope - libcontainer container 45e9cd22e4707333e788e4cdb867400a8037f9242f80ea1fa76f7442edb010bd. 
May 14 00:02:29.401998 containerd[1922]: time="2025-05-14T00:02:29.401858733Z" level=info msg="StartContainer for \"fcfefcbc3e5604a4d5c18575e34c71938566281fea65123a83fb98459b183576\" returns successfully" May 14 00:02:29.445812 containerd[1922]: time="2025-05-14T00:02:29.445739169Z" level=info msg="StartContainer for \"45e9cd22e4707333e788e4cdb867400a8037f9242f80ea1fa76f7442edb010bd\" returns successfully" May 14 00:02:31.376600 containerd[1922]: time="2025-05-14T00:02:31.376558769Z" level=info msg="TaskExit event in podsandbox handler container_id:\"30ec80bfe2908ffa0af894674dc76ee7ad580fa47466629ab27552c8b8caaa68\" id:\"78f6514bdbd7d0b5a3b0b45ac34889e544060c133cce1672a5791e192f9e373c\" pid:6099 exited_at:{seconds:1747180951 nanos:376026974}" May 14 00:02:32.158177 kubelet[3512]: E0514 00:02:32.156149 3512 request.go:1116] Unexpected error when reading response body: net/http: request canceled (Client.Timeout or context cancellation while reading body) May 14 00:02:32.163201 kubelet[3512]: E0514 00:02:32.163141 3512 controller.go:195] "Failed to update lease" err="unexpected error when reading response body. Please retry. Original error: net/http: request canceled (Client.Timeout or context cancellation while reading body)" May 14 00:02:33.187221 systemd[1]: cri-containerd-8773eebddd18aee63e30c2458da244b3a91e181915f3b4177e5ab016a81f4609.scope: Deactivated successfully. May 14 00:02:33.188823 systemd[1]: cri-containerd-8773eebddd18aee63e30c2458da244b3a91e181915f3b4177e5ab016a81f4609.scope: Consumed 1.694s CPU time, 39.1M memory peak, 36.1M read from disk. 
May 14 00:02:33.191512 containerd[1922]: time="2025-05-14T00:02:33.191457635Z" level=info msg="received exit event container_id:\"8773eebddd18aee63e30c2458da244b3a91e181915f3b4177e5ab016a81f4609\" id:\"8773eebddd18aee63e30c2458da244b3a91e181915f3b4177e5ab016a81f4609\" pid:3169 exit_status:1 exited_at:{seconds:1747180953 nanos:191021615}" May 14 00:02:33.193303 containerd[1922]: time="2025-05-14T00:02:33.192608990Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8773eebddd18aee63e30c2458da244b3a91e181915f3b4177e5ab016a81f4609\" id:\"8773eebddd18aee63e30c2458da244b3a91e181915f3b4177e5ab016a81f4609\" pid:3169 exit_status:1 exited_at:{seconds:1747180953 nanos:191021615}" May 14 00:02:33.222790 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8773eebddd18aee63e30c2458da244b3a91e181915f3b4177e5ab016a81f4609-rootfs.mount: Deactivated successfully. May 14 00:02:33.655282 containerd[1922]: time="2025-05-14T00:02:33.655187937Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b88b4681adc5dd0d12a5eab951a3a60375622e7d2ea21cd50d26b48404554f29\" id:\"c283581fe57f71557369b5509da043c08e5eadde0220edd526aa9ead6b998156\" pid:6135 exit_status:1 exited_at:{seconds:1747180953 nanos:654921338}" May 14 00:02:34.092059 kubelet[3512]: I0514 00:02:34.091914 3512 scope.go:117] "RemoveContainer" containerID="8773eebddd18aee63e30c2458da244b3a91e181915f3b4177e5ab016a81f4609" May 14 00:02:34.096517 containerd[1922]: time="2025-05-14T00:02:34.096471579Z" level=info msg="CreateContainer within sandbox \"f9488da0b849d01cab1e89cc3d8ccaecf2c292a9097ecd8779c5e3606e6f6329\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" May 14 00:02:34.116874 containerd[1922]: time="2025-05-14T00:02:34.113081804Z" level=info msg="Container f22c345bfd6e7a0325e5d3ae326db72cfafc42fa2f66a73d87388b7ff86153f6: CDI devices from CRI Config.CDIDevices: []" May 14 00:02:34.130561 containerd[1922]: time="2025-05-14T00:02:34.130506666Z" level=info msg="CreateContainer within sandbox \"f9488da0b849d01cab1e89cc3d8ccaecf2c292a9097ecd8779c5e3606e6f6329\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"f22c345bfd6e7a0325e5d3ae326db72cfafc42fa2f66a73d87388b7ff86153f6\"" May 14 00:02:34.131245 containerd[1922]: time="2025-05-14T00:02:34.131212867Z" level=info msg="StartContainer for \"f22c345bfd6e7a0325e5d3ae326db72cfafc42fa2f66a73d87388b7ff86153f6\"" May 14 00:02:34.132580 containerd[1922]: time="2025-05-14T00:02:34.132535759Z" level=info msg="connecting to shim f22c345bfd6e7a0325e5d3ae326db72cfafc42fa2f66a73d87388b7ff86153f6" address="unix:///run/containerd/s/afb128c8cba8e9a0e5afb4594b9ecd7e567646b244365f8966b3e6c2edf5504e" protocol=ttrpc version=3 May 14 00:02:34.167027 systemd[1]: Started cri-containerd-f22c345bfd6e7a0325e5d3ae326db72cfafc42fa2f66a73d87388b7ff86153f6.scope - libcontainer container f22c345bfd6e7a0325e5d3ae326db72cfafc42fa2f66a73d87388b7ff86153f6. May 14 00:02:34.246950 containerd[1922]: time="2025-05-14T00:02:34.246826577Z" level=info msg="StartContainer for \"f22c345bfd6e7a0325e5d3ae326db72cfafc42fa2f66a73d87388b7ff86153f6\" returns successfully" May 14 00:02:42.168836 kubelet[3512]: E0514 00:02:42.168724 3512 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-86?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"