May 8 00:12:21.915858 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed May 7 22:19:27 -00 2025 May 8 00:12:21.915897 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979 May 8 00:12:21.915917 kernel: BIOS-provided physical RAM map: May 8 00:12:21.915930 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable May 8 00:12:21.915942 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable May 8 00:12:21.915955 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved May 8 00:12:21.915971 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data May 8 00:12:21.915985 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS May 8 00:12:21.915998 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable May 8 00:12:21.916010 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved May 8 00:12:21.916026 kernel: NX (Execute Disable) protection: active May 8 00:12:21.916049 kernel: APIC: Static calls initialized May 8 00:12:21.916063 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable May 8 00:12:21.916077 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable May 8 00:12:21.916094 kernel: extended physical RAM map: May 8 00:12:21.916108 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable May 8 00:12:21.916126 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable May 8 00:12:21.916141 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable May 8 00:12:21.916156 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable May 8 00:12:21.916171 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved May 8 00:12:21.916185 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data May 8 00:12:21.916200 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS May 8 00:12:21.916215 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable May 8 00:12:21.916229 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved May 8 00:12:21.916244 kernel: efi: EFI v2.7 by EDK II May 8 00:12:21.916258 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518 May 8 00:12:21.916276 kernel: secureboot: Secure boot disabled May 8 00:12:21.916290 kernel: SMBIOS 2.7 present. 
May 8 00:12:21.916304 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 May 8 00:12:21.916319 kernel: Hypervisor detected: KVM May 8 00:12:21.916333 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 8 00:12:21.916348 kernel: kvm-clock: using sched offset of 4170618381 cycles May 8 00:12:21.916363 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 8 00:12:21.916378 kernel: tsc: Detected 2499.998 MHz processor May 8 00:12:21.916393 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 8 00:12:21.916408 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 8 00:12:21.916423 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 May 8 00:12:21.916441 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs May 8 00:12:21.916453 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 8 00:12:21.916467 kernel: Using GB pages for direct mapping May 8 00:12:21.916487 kernel: ACPI: Early table checksum verification disabled May 8 00:12:21.916502 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) May 8 00:12:21.916514 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) May 8 00:12:21.916538 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) May 8 00:12:21.916557 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) May 8 00:12:21.916577 kernel: ACPI: FACS 0x00000000789D0000 000040 May 8 00:12:21.916595 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) May 8 00:12:21.916614 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) May 8 00:12:21.916627 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) May 8 00:12:21.916642 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) May 8 00:12:21.916684 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) May 8 00:12:21.916701 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) May 8 00:12:21.916716 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) May 8 00:12:21.916732 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) May 8 00:12:21.916747 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] May 8 00:12:21.916762 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] May 8 00:12:21.916777 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] May 8 00:12:21.916792 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] May 8 00:12:21.916807 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] May 8 00:12:21.916826 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] May 8 00:12:21.916841 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] May 8 00:12:21.916856 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] May 8 00:12:21.916871 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] May 8 00:12:21.916887 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] May 8 00:12:21.916902 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] May 8 00:12:21.916917 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 May 
8 00:12:21.916932 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 May 8 00:12:21.916947 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] May 8 00:12:21.916965 kernel: NUMA: Initialized distance table, cnt=1 May 8 00:12:21.916980 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff] May 8 00:12:21.916995 kernel: Zone ranges: May 8 00:12:21.917010 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 8 00:12:21.917025 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] May 8 00:12:21.917040 kernel: Normal empty May 8 00:12:21.917055 kernel: Movable zone start for each node May 8 00:12:21.917070 kernel: Early memory node ranges May 8 00:12:21.917085 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] May 8 00:12:21.917100 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] May 8 00:12:21.917118 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] May 8 00:12:21.917133 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] May 8 00:12:21.917148 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 8 00:12:21.917163 kernel: On node 0, zone DMA: 96 pages in unavailable ranges May 8 00:12:21.917178 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges May 8 00:12:21.917193 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges May 8 00:12:21.917208 kernel: ACPI: PM-Timer IO Port: 0xb008 May 8 00:12:21.917223 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 8 00:12:21.917238 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 May 8 00:12:21.917257 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 8 00:12:21.917272 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 8 00:12:21.917287 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 8 00:12:21.917302 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 8 00:12:21.917318 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 8 00:12:21.917333 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 8 00:12:21.917348 kernel: TSC deadline timer available May 8 00:12:21.917363 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 8 00:12:21.917378 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 8 00:12:21.917396 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices May 8 00:12:21.917411 kernel: Booting paravirtualized kernel on KVM May 8 00:12:21.917426 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 8 00:12:21.917442 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 May 8 00:12:21.917457 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576 May 8 00:12:21.917473 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152 May 8 00:12:21.917488 kernel: pcpu-alloc: [0] 0 1 May 8 00:12:21.917503 kernel: kvm-guest: PV spinlocks enabled May 8 00:12:21.917518 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 8 00:12:21.917540 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 
verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979 May 8 00:12:21.917556 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 8 00:12:21.917570 kernel: random: crng init done May 8 00:12:21.917586 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 8 00:12:21.917601 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 8 00:12:21.917616 kernel: Fallback order for Node 0: 0 May 8 00:12:21.917631 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318 May 8 00:12:21.917646 kernel: Policy zone: DMA32 May 8 00:12:21.917688 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 8 00:12:21.917704 kernel: Memory: 1872536K/2037804K available (14336K kernel code, 2295K rwdata, 22864K rodata, 43484K init, 1592K bss, 165012K reserved, 0K cma-reserved) May 8 00:12:21.917719 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 8 00:12:21.917735 kernel: Kernel/User page tables isolation: enabled May 8 00:12:21.917750 kernel: ftrace: allocating 37918 entries in 149 pages May 8 00:12:21.917778 kernel: ftrace: allocated 149 pages with 4 groups May 8 00:12:21.917797 kernel: Dynamic Preempt: voluntary May 8 00:12:21.917813 kernel: rcu: Preemptible hierarchical RCU implementation. May 8 00:12:21.917830 kernel: rcu: RCU event tracing is enabled. May 8 00:12:21.917847 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 8 00:12:21.917862 kernel: Trampoline variant of Tasks RCU enabled. May 8 00:12:21.917879 kernel: Rude variant of Tasks RCU enabled. May 8 00:12:21.917898 kernel: Tracing variant of Tasks RCU enabled. May 8 00:12:21.917914 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 8 00:12:21.917930 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 8 00:12:21.917946 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 8 00:12:21.917962 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 8 00:12:21.917982 kernel: Console: colour dummy device 80x25 May 8 00:12:21.917998 kernel: printk: console [tty0] enabled May 8 00:12:21.918014 kernel: printk: console [ttyS0] enabled May 8 00:12:21.918029 kernel: ACPI: Core revision 20230628 May 8 00:12:21.918046 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns May 8 00:12:21.918062 kernel: APIC: Switch to symmetric I/O mode setup May 8 00:12:21.918078 kernel: x2apic enabled May 8 00:12:21.918094 kernel: APIC: Switched APIC routing to: physical x2apic May 8 00:12:21.918110 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns May 8 00:12:21.918129 kernel: Calibrating delay loop (skipped) preset value.. 
4999.99 BogoMIPS (lpj=2499998) May 8 00:12:21.918146 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 May 8 00:12:21.918161 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 May 8 00:12:21.918177 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 8 00:12:21.918193 kernel: Spectre V2 : Mitigation: Retpolines May 8 00:12:21.918209 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch May 8 00:12:21.918224 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT May 8 00:12:21.918241 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! May 8 00:12:21.918256 kernel: RETBleed: Vulnerable May 8 00:12:21.918272 kernel: Speculative Store Bypass: Vulnerable May 8 00:12:21.918291 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode May 8 00:12:21.918306 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode May 8 00:12:21.918322 kernel: GDS: Unknown: Dependent on hypervisor status May 8 00:12:21.918338 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 8 00:12:21.918354 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 8 00:12:21.918370 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 8 00:12:21.918386 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' May 8 00:12:21.918401 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' May 8 00:12:21.918417 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' May 8 00:12:21.918433 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' May 8 00:12:21.918449 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' May 8 00:12:21.918468 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' May 8 00:12:21.918484 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 8 00:12:21.918500 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 May 8 00:12:21.918516 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 May 8 00:12:21.918532 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 May 8 00:12:21.918548 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 May 8 00:12:21.918563 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 May 8 00:12:21.918579 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 May 8 00:12:21.918595 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. May 8 00:12:21.918611 kernel: Freeing SMP alternatives memory: 32K May 8 00:12:21.918626 kernel: pid_max: default: 32768 minimum: 301 May 8 00:12:21.918645 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 8 00:12:21.918678 kernel: landlock: Up and running. May 8 00:12:21.918695 kernel: SELinux: Initializing. May 8 00:12:21.918711 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 8 00:12:21.918727 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 8 00:12:21.918743 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) May 8 00:12:21.918759 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 8 00:12:21.918775 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
May 8 00:12:21.918792 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 8 00:12:21.918808 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. May 8 00:12:21.918828 kernel: signal: max sigframe size: 3632 May 8 00:12:21.918844 kernel: rcu: Hierarchical SRCU implementation. May 8 00:12:21.918860 kernel: rcu: Max phase no-delay instances is 400. May 8 00:12:21.918876 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 8 00:12:21.918892 kernel: smp: Bringing up secondary CPUs ... May 8 00:12:21.918908 kernel: smpboot: x86: Booting SMP configuration: May 8 00:12:21.918924 kernel: .... node #0, CPUs: #1 May 8 00:12:21.918940 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. May 8 00:12:21.918958 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. May 8 00:12:21.918977 kernel: smp: Brought up 1 node, 2 CPUs May 8 00:12:21.918993 kernel: smpboot: Max logical packages: 1 May 8 00:12:21.919009 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) May 8 00:12:21.919025 kernel: devtmpfs: initialized May 8 00:12:21.919041 kernel: x86/mm: Memory block size: 128MB May 8 00:12:21.919057 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) May 8 00:12:21.919073 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 8 00:12:21.919090 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 8 00:12:21.919106 kernel: pinctrl core: initialized pinctrl subsystem May 8 00:12:21.919125 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 8 00:12:21.919141 kernel: audit: initializing netlink subsys (disabled) May 8 00:12:21.919158 kernel: audit: type=2000 audit(1746663141.954:1): state=initialized audit_enabled=0 res=1 May 8 00:12:21.919173 kernel: thermal_sys: Registered thermal governor 'step_wise' May 8 00:12:21.919190 kernel: thermal_sys: Registered thermal governor 'user_space' May 8 00:12:21.919206 kernel: cpuidle: using governor menu May 8 00:12:21.919222 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 8 00:12:21.919238 kernel: dca service started, version 1.12.1 May 8 00:12:21.919254 kernel: PCI: Using configuration type 1 for base access May 8 00:12:21.919273 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 8 00:12:21.919289 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 8 00:12:21.919305 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 8 00:12:21.919322 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 8 00:12:21.919338 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 8 00:12:21.919354 kernel: ACPI: Added _OSI(Module Device) May 8 00:12:21.919370 kernel: ACPI: Added _OSI(Processor Device) May 8 00:12:21.919386 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 8 00:12:21.919402 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 8 00:12:21.919422 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded May 8 00:12:21.919438 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 8 00:12:21.919454 kernel: ACPI: Interpreter enabled May 8 00:12:21.919470 kernel: ACPI: PM: (supports S0 S5) May 8 00:12:21.919486 kernel: ACPI: Using IOAPIC for interrupt routing May 8 00:12:21.919502 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 8 00:12:21.919518 kernel: PCI: Using E820 reservations for host bridge windows May 8 00:12:21.919534 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F May 8 00:12:21.919550 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 8 00:12:21.920942 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] May 8 00:12:21.921116 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] May 8 00:12:21.921258 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge May 8 00:12:21.921277 kernel: acpiphp: Slot [3] registered May 8 00:12:21.921292 kernel: acpiphp: Slot [4] registered May 8 00:12:21.921306 kernel: acpiphp: Slot [5] registered May 8 00:12:21.921321 kernel: acpiphp: Slot [6] registered May 8 00:12:21.921341 kernel: acpiphp: Slot [7] registered May 8 00:12:21.921356 kernel: acpiphp: Slot [8] registered May 8 00:12:21.921371 kernel: acpiphp: Slot [9] registered May 8 00:12:21.921386 kernel: acpiphp: Slot [10] registered May 8 00:12:21.921401 kernel: acpiphp: Slot [11] registered May 8 00:12:21.921415 kernel: acpiphp: Slot [12] registered May 8 00:12:21.921430 kernel: acpiphp: Slot [13] registered May 8 00:12:21.921444 kernel: acpiphp: Slot [14] registered May 8 00:12:21.921459 kernel: acpiphp: Slot [15] registered May 8 00:12:21.921474 kernel: acpiphp: Slot [16] registered May 8 00:12:21.921491 kernel: acpiphp: Slot [17] registered May 8 00:12:21.921506 kernel: acpiphp: Slot [18] registered May 8 00:12:21.921520 kernel: acpiphp: Slot [19] registered May 8 00:12:21.921535 kernel: acpiphp: Slot [20] registered May 8 00:12:21.921549 kernel: acpiphp: Slot [21] registered May 8 00:12:21.921564 kernel: acpiphp: Slot [22] registered May 8 00:12:21.921578 kernel: acpiphp: Slot [23] registered May 8 00:12:21.921593 kernel: acpiphp: Slot [24] registered May 8 00:12:21.921607 kernel: acpiphp: Slot [25] registered May 8 00:12:21.921624 kernel: acpiphp: Slot [26] registered May 8 00:12:21.921639 kernel: acpiphp: Slot [27] registered May 8 00:12:21.921653 kernel: acpiphp: Slot [28] registered May 8 00:12:21.921681 kernel: acpiphp: Slot [29] registered May 8 00:12:21.921696 kernel: acpiphp: Slot [30] registered May 8 00:12:21.921710 kernel: acpiphp: Slot [31] registered May 8 00:12:21.921725 kernel: PCI host bridge to bus 0000:00 May 8 00:12:21.921863 kernel: pci_bus 0000:00: 
root bus resource [io 0x0000-0x0cf7 window] May 8 00:12:21.921985 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 8 00:12:21.922106 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 8 00:12:21.922235 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] May 8 00:12:21.922351 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] May 8 00:12:21.922468 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 8 00:12:21.922624 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 May 8 00:12:21.925715 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 May 8 00:12:21.925887 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 May 8 00:12:21.926024 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI May 8 00:12:21.926157 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff May 8 00:12:21.926286 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff May 8 00:12:21.926428 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff May 8 00:12:21.926560 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff May 8 00:12:21.927846 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff May 8 00:12:21.928018 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff May 8 00:12:21.928169 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 May 8 00:12:21.928300 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref] May 8 00:12:21.928428 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] May 8 00:12:21.928555 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb May 8 00:12:21.929761 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 8 00:12:21.929977 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 May 8 00:12:21.930129 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff] May 8 00:12:21.930274 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 May 8 00:12:21.930413 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff] May 8 00:12:21.930435 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 8 00:12:21.930453 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 8 00:12:21.930469 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 8 00:12:21.930486 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 8 00:12:21.930507 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 May 8 00:12:21.930523 kernel: iommu: Default domain type: Translated May 8 00:12:21.930540 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 8 00:12:21.930557 kernel: efivars: Registered efivars operations May 8 00:12:21.930573 kernel: PCI: Using ACPI for IRQ routing May 8 00:12:21.930590 kernel: PCI: pci_cache_line_size set to 64 bytes May 8 00:12:21.930607 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff] May 8 00:12:21.930623 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] May 8 00:12:21.930640 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] May 8 00:12:21.930797 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device May 8 00:12:21.930936 kernel: pci 0000:00:03.0: vgaarb: bridge control possible May 8 00:12:21.931074 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 8 00:12:21.931095 kernel: vgaarb: loaded May 8 00:12:21.931113 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 
0, 0, 0, 0, 0 May 8 00:12:21.931130 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter May 8 00:12:21.931146 kernel: clocksource: Switched to clocksource kvm-clock May 8 00:12:21.931163 kernel: VFS: Disk quotas dquot_6.6.0 May 8 00:12:21.931180 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 8 00:12:21.931201 kernel: pnp: PnP ACPI init May 8 00:12:21.931217 kernel: pnp: PnP ACPI: found 5 devices May 8 00:12:21.931234 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 8 00:12:21.931251 kernel: NET: Registered PF_INET protocol family May 8 00:12:21.931268 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 8 00:12:21.931285 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) May 8 00:12:21.931302 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 8 00:12:21.931319 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 8 00:12:21.931339 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) May 8 00:12:21.931356 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 8 00:12:21.931373 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 8 00:12:21.931390 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) May 8 00:12:21.931407 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 8 00:12:21.931423 kernel: NET: Registered PF_XDP protocol family May 8 00:12:21.931550 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 8 00:12:21.933816 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 8 00:12:21.933979 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 8 00:12:21.934115 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] May 8 00:12:21.934255 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] May 8 00:12:21.934423 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 8 00:12:21.934449 kernel: PCI: CLS 0 bytes, default 64 May 8 00:12:21.934468 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 8 00:12:21.934486 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns May 8 00:12:21.934503 kernel: clocksource: Switched to clocksource tsc May 8 00:12:21.934521 kernel: Initialise system trusted keyrings May 8 00:12:21.934546 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 8 00:12:21.934564 kernel: Key type asymmetric registered May 8 00:12:21.934581 kernel: Asymmetric key parser 'x509' registered May 8 00:12:21.934596 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 8 00:12:21.934612 kernel: io scheduler mq-deadline registered May 8 00:12:21.934628 kernel: io scheduler kyber registered May 8 00:12:21.934643 kernel: io scheduler bfq registered May 8 00:12:21.934680 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 8 00:12:21.935726 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 8 00:12:21.935749 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 8 00:12:21.935766 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 8 00:12:21.935782 kernel: i8042: Warning: Keylock active May 8 00:12:21.935797 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 8 
00:12:21.935813 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 8 00:12:21.935990 kernel: rtc_cmos 00:00: RTC can wake from S4 May 8 00:12:21.936214 kernel: rtc_cmos 00:00: registered as rtc0 May 8 00:12:21.936342 kernel: rtc_cmos 00:00: setting system clock to 2025-05-08T00:12:21 UTC (1746663141) May 8 00:12:21.936472 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram May 8 00:12:21.936492 kernel: intel_pstate: CPU model not supported May 8 00:12:21.936509 kernel: efifb: probing for efifb May 8 00:12:21.936525 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k May 8 00:12:21.936566 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 May 8 00:12:21.936586 kernel: efifb: scrolling: redraw May 8 00:12:21.936603 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 8 00:12:21.936620 kernel: Console: switching to colour frame buffer device 100x37 May 8 00:12:21.936637 kernel: fb0: EFI VGA frame buffer device May 8 00:12:21.937708 kernel: pstore: Using crash dump compression: deflate May 8 00:12:21.937733 kernel: pstore: Registered efi_pstore as persistent store backend May 8 00:12:21.937749 kernel: NET: Registered PF_INET6 protocol family May 8 00:12:21.937764 kernel: Segment Routing with IPv6 May 8 00:12:21.937778 kernel: In-situ OAM (IOAM) with IPv6 May 8 00:12:21.937794 kernel: NET: Registered PF_PACKET protocol family May 8 00:12:21.937810 kernel: Key type dns_resolver registered May 8 00:12:21.937827 kernel: IPI shorthand broadcast: enabled May 8 00:12:21.937843 kernel: sched_clock: Marking stable (467003116, 132047975)->(669659635, -70608544) May 8 00:12:21.937863 kernel: registered taskstats version 1 May 8 00:12:21.937877 kernel: Loading compiled-in X.509 certificates May 8 00:12:21.937893 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: dac8423f6f9fa2fb5f636925d45d7c2572b3a9b6' May 8 00:12:21.937910 kernel: Key type .fscrypt registered May 8 00:12:21.937927 kernel: Key type fscrypt-provisioning registered May 8 00:12:21.937944 kernel: ima: No TPM chip found, activating TPM-bypass! May 8 00:12:21.937962 kernel: ima: Allocated hash algorithm: sha1 May 8 00:12:21.937979 kernel: ima: No architecture policies found May 8 00:12:21.937996 kernel: clk: Disabling unused clocks May 8 00:12:21.938016 kernel: Freeing unused kernel image (initmem) memory: 43484K May 8 00:12:21.938034 kernel: Write protecting the kernel read-only data: 38912k May 8 00:12:21.938052 kernel: Freeing unused kernel image (rodata/data gap) memory: 1712K May 8 00:12:21.938070 kernel: Run /init as init process May 8 00:12:21.938087 kernel: with arguments: May 8 00:12:21.938104 kernel: /init May 8 00:12:21.938120 kernel: with environment: May 8 00:12:21.938138 kernel: HOME=/ May 8 00:12:21.938154 kernel: TERM=linux May 8 00:12:21.938174 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 8 00:12:21.938194 systemd[1]: Successfully made /usr/ read-only. May 8 00:12:21.938216 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 8 00:12:21.938235 systemd[1]: Detected virtualization amazon. May 8 00:12:21.938253 systemd[1]: Detected architecture x86-64. May 8 00:12:21.938273 systemd[1]: Running in initrd. 
May 8 00:12:21.938290 systemd[1]: No hostname configured, using default hostname. May 8 00:12:21.938309 systemd[1]: Hostname set to . May 8 00:12:21.938327 systemd[1]: Initializing machine ID from VM UUID. May 8 00:12:21.938344 systemd[1]: Queued start job for default target initrd.target. May 8 00:12:21.938362 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:12:21.938380 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:12:21.938402 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 8 00:12:21.938420 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 8 00:12:21.938438 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 8 00:12:21.938457 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 8 00:12:21.938477 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 8 00:12:21.938495 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 8 00:12:21.938513 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:12:21.938534 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 8 00:12:21.938553 systemd[1]: Reached target paths.target - Path Units. May 8 00:12:21.938571 systemd[1]: Reached target slices.target - Slice Units. May 8 00:12:21.938589 systemd[1]: Reached target swap.target - Swaps. May 8 00:12:21.938608 systemd[1]: Reached target timers.target - Timer Units. May 8 00:12:21.938626 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 8 00:12:21.938644 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 00:12:21.938679 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 8 00:12:21.938700 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 8 00:12:21.938719 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 8 00:12:21.938737 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 8 00:12:21.938755 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:12:21.938774 systemd[1]: Reached target sockets.target - Socket Units. May 8 00:12:21.938792 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 8 00:12:21.938811 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 8 00:12:21.938832 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 8 00:12:21.938851 systemd[1]: Starting systemd-fsck-usr.service... May 8 00:12:21.938873 systemd[1]: Starting systemd-journald.service - Journal Service... May 8 00:12:21.938892 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 8 00:12:21.938910 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:12:21.938929 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 8 00:12:21.940846 systemd-journald[179]: Collecting audit messages is disabled. May 8 00:12:21.940900 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
May 8 00:12:21.940919 systemd[1]: Finished systemd-fsck-usr.service. May 8 00:12:21.940937 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 8 00:12:21.940959 systemd-journald[179]: Journal started May 8 00:12:21.940993 systemd-journald[179]: Runtime Journal (/run/log/journal/ec2a8ed03e11c9e6c8219d3f17220a30) is 4.7M, max 38.1M, 33.4M free. May 8 00:12:21.935014 systemd-modules-load[180]: Inserted module 'overlay' May 8 00:12:21.949684 systemd[1]: Started systemd-journald.service - Journal Service. May 8 00:12:21.955789 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:12:21.965934 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:12:21.970855 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 8 00:12:21.971782 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 8 00:12:21.983908 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 8 00:12:22.000792 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 8 00:12:22.000839 kernel: Bridge firewalling registered May 8 00:12:21.992365 systemd-modules-load[180]: Inserted module 'br_netfilter' May 8 00:12:21.994026 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 8 00:12:22.003251 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:12:22.006016 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:12:22.010035 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 8 00:12:22.020004 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 8 00:12:22.023817 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:12:22.024965 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:12:22.036469 dracut-cmdline[210]: dracut-dracut-053 May 8 00:12:22.041234 dracut-cmdline[210]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979 May 8 00:12:22.046460 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 00:12:22.052939 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 8 00:12:22.106313 systemd-resolved[231]: Positive Trust Anchors: May 8 00:12:22.106334 systemd-resolved[231]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:12:22.106404 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 8 00:12:22.113117 systemd-resolved[231]: Defaulting to hostname 'linux'. May 8 00:12:22.116722 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 8 00:12:22.118750 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 8 00:12:22.138703 kernel: SCSI subsystem initialized May 8 00:12:22.149693 kernel: Loading iSCSI transport class v2.0-870. May 8 00:12:22.160687 kernel: iscsi: registered transport (tcp) May 8 00:12:22.183004 kernel: iscsi: registered transport (qla4xxx) May 8 00:12:22.183103 kernel: QLogic iSCSI HBA Driver May 8 00:12:22.221969 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 8 00:12:22.226920 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 8 00:12:22.254033 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 8 00:12:22.254111 kernel: device-mapper: uevent: version 1.0.3 May 8 00:12:22.254134 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 8 00:12:22.296693 kernel: raid6: avx512x4 gen() 18071 MB/s May 8 00:12:22.314684 kernel: raid6: avx512x2 gen() 17866 MB/s May 8 00:12:22.331688 kernel: raid6: avx512x1 gen() 17819 MB/s May 8 00:12:22.348687 kernel: raid6: avx2x4 gen() 17749 MB/s May 8 00:12:22.365689 kernel: raid6: avx2x2 gen() 17756 MB/s May 8 00:12:22.382903 kernel: raid6: avx2x1 gen() 13560 MB/s May 8 00:12:22.382950 kernel: raid6: using algorithm avx512x4 gen() 18071 MB/s May 8 00:12:22.402693 kernel: raid6: .... xor() 7787 MB/s, rmw enabled May 8 00:12:22.402749 kernel: raid6: using avx512x2 recovery algorithm May 8 00:12:22.423692 kernel: xor: automatically using best checksumming function avx May 8 00:12:22.578695 kernel: Btrfs loaded, zoned=no, fsverity=no May 8 00:12:22.588979 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 8 00:12:22.599905 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:12:22.614479 systemd-udevd[399]: Using default interface naming scheme 'v255'. May 8 00:12:22.620491 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:12:22.628864 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 8 00:12:22.647314 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation May 8 00:12:22.676324 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 8 00:12:22.687910 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 00:12:22.739837 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:12:22.746863 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
May 8 00:12:22.774743 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 8 00:12:22.776635 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 8 00:12:22.778746 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:12:22.780178 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 00:12:22.787770 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 8 00:12:22.807966 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 8 00:12:22.848117 kernel: ena 0000:00:05.0: ENA device version: 0.10 May 8 00:12:22.884443 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 May 8 00:12:22.884646 kernel: cryptd: max_cpu_qlen set to 1000 May 8 00:12:22.884681 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. May 8 00:12:22.884841 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:bd:5d:f5:79:8b May 8 00:12:22.885002 kernel: AVX2 version of gcm_enc/dec engaged. May 8 00:12:22.878628 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 00:12:22.889006 kernel: AES CTR mode by8 optimization enabled May 8 00:12:22.878887 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:12:22.885273 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:12:22.885925 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:12:22.886235 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:12:22.888858 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:12:22.896132 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:12:22.903512 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 8 00:12:22.908136 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:12:22.909137 (udev-worker)[452]: Network interface NamePolicy= disabled on kernel command line. May 8 00:12:22.914768 kernel: nvme nvme0: pci function 0000:00:04.0 May 8 00:12:22.910581 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:12:22.919266 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 May 8 00:12:22.922876 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:12:22.940206 kernel: nvme nvme0: 2/0/0 default/read/poll queues May 8 00:12:22.941528 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:12:22.949031 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 8 00:12:22.949093 kernel: GPT:9289727 != 16777215 May 8 00:12:22.949112 kernel: GPT:Alternate GPT header not at the end of the disk. May 8 00:12:22.949130 kernel: GPT:9289727 != 16777215 May 8 00:12:22.949154 kernel: GPT: Use GNU Parted to correct GPT errors. May 8 00:12:22.949172 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 8 00:12:22.948905 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:12:22.974190 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 8 00:12:23.065279 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (450) May 8 00:12:23.074719 kernel: BTRFS: device fsid 1c9931ea-0995-4065-8a57-32743027822a devid 1 transid 42 /dev/nvme0n1p3 scanned by (udev-worker) (459) May 8 00:12:23.113444 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. May 8 00:12:23.153559 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. May 8 00:12:23.154187 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. May 8 00:12:23.165974 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. May 8 00:12:23.184623 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. May 8 00:12:23.196918 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 8 00:12:23.204276 disk-uuid[630]: Primary Header is updated. May 8 00:12:23.204276 disk-uuid[630]: Secondary Entries is updated. May 8 00:12:23.204276 disk-uuid[630]: Secondary Header is updated. May 8 00:12:23.210617 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 8 00:12:23.223707 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 8 00:12:24.221280 disk-uuid[631]: The operation has completed successfully. May 8 00:12:24.221986 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 8 00:12:24.333963 systemd[1]: disk-uuid.service: Deactivated successfully. May 8 00:12:24.334065 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 8 00:12:24.369864 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 8 00:12:24.374365 sh[889]: Success May 8 00:12:24.394879 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 8 00:12:24.502139 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 8 00:12:24.515792 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 8 00:12:24.517253 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 8 00:12:24.543530 kernel: BTRFS info (device dm-0): first mount of filesystem 1c9931ea-0995-4065-8a57-32743027822a May 8 00:12:24.543600 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 8 00:12:24.543622 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 8 00:12:24.545812 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 8 00:12:24.547832 kernel: BTRFS info (device dm-0): using free space tree May 8 00:12:24.668705 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 8 00:12:24.691281 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 8 00:12:24.692419 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 8 00:12:24.702874 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 8 00:12:24.705808 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
May 8 00:12:24.733349 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9 May 8 00:12:24.733415 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm May 8 00:12:24.735230 kernel: BTRFS info (device nvme0n1p6): using free space tree May 8 00:12:24.741772 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 8 00:12:24.746805 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9 May 8 00:12:24.749483 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 8 00:12:24.755837 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 8 00:12:24.797730 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 00:12:24.802878 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 8 00:12:24.828712 systemd-networkd[1078]: lo: Link UP May 8 00:12:24.828724 systemd-networkd[1078]: lo: Gained carrier May 8 00:12:24.830046 systemd-networkd[1078]: Enumeration completed May 8 00:12:24.830160 systemd[1]: Started systemd-networkd.service - Network Configuration. May 8 00:12:24.830681 systemd-networkd[1078]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:12:24.830686 systemd-networkd[1078]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 00:12:24.831053 systemd[1]: Reached target network.target - Network. May 8 00:12:24.833649 systemd-networkd[1078]: eth0: Link UP May 8 00:12:24.833762 systemd-networkd[1078]: eth0: Gained carrier May 8 00:12:24.833776 systemd-networkd[1078]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:12:24.843770 systemd-networkd[1078]: eth0: DHCPv4 address 172.31.16.158/20, gateway 172.31.16.1 acquired from 172.31.16.1 May 8 00:12:25.104564 ignition[1015]: Ignition 2.20.0 May 8 00:12:25.104579 ignition[1015]: Stage: fetch-offline May 8 00:12:25.104851 ignition[1015]: no configs at "/usr/lib/ignition/base.d" May 8 00:12:25.104865 ignition[1015]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 8 00:12:25.105292 ignition[1015]: Ignition finished successfully May 8 00:12:25.107698 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 8 00:12:25.111873 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
May 8 00:12:25.127564 ignition[1088]: Ignition 2.20.0 May 8 00:12:25.127579 ignition[1088]: Stage: fetch May 8 00:12:25.128143 ignition[1088]: no configs at "/usr/lib/ignition/base.d" May 8 00:12:25.128161 ignition[1088]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 8 00:12:25.128290 ignition[1088]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 8 00:12:25.137020 ignition[1088]: PUT result: OK May 8 00:12:25.138685 ignition[1088]: parsed url from cmdline: "" May 8 00:12:25.138706 ignition[1088]: no config URL provided May 8 00:12:25.138727 ignition[1088]: reading system config file "/usr/lib/ignition/user.ign" May 8 00:12:25.138745 ignition[1088]: no config at "/usr/lib/ignition/user.ign" May 8 00:12:25.138778 ignition[1088]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 8 00:12:25.139339 ignition[1088]: PUT result: OK May 8 00:12:25.139402 ignition[1088]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 May 8 00:12:25.139953 ignition[1088]: GET result: OK May 8 00:12:25.140203 ignition[1088]: parsing config with SHA512: 338d716a6cde811f16bb09e3ced94277a4831f12cd2feffe3f9e19497a16b67cda544d39fc9670c4e9b84c4b9b14c85261657174410e782a75ee84e5e59799d6 May 8 00:12:25.145782 unknown[1088]: fetched base config from "system" May 8 00:12:25.145794 unknown[1088]: fetched base config from "system" May 8 00:12:25.146444 ignition[1088]: fetch: fetch complete May 8 00:12:25.145802 unknown[1088]: fetched user config from "aws" May 8 00:12:25.146455 ignition[1088]: fetch: fetch passed May 8 00:12:25.146515 ignition[1088]: Ignition finished successfully May 8 00:12:25.148697 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 8 00:12:25.153884 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 8 00:12:25.169937 ignition[1094]: Ignition 2.20.0 May 8 00:12:25.169951 ignition[1094]: Stage: kargs May 8 00:12:25.170358 ignition[1094]: no configs at "/usr/lib/ignition/base.d" May 8 00:12:25.170373 ignition[1094]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 8 00:12:25.170491 ignition[1094]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 8 00:12:25.171339 ignition[1094]: PUT result: OK May 8 00:12:25.173945 ignition[1094]: kargs: kargs passed May 8 00:12:25.174022 ignition[1094]: Ignition finished successfully May 8 00:12:25.175718 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 8 00:12:25.181345 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 8 00:12:25.194650 ignition[1100]: Ignition 2.20.0 May 8 00:12:25.194684 ignition[1100]: Stage: disks May 8 00:12:25.195112 ignition[1100]: no configs at "/usr/lib/ignition/base.d" May 8 00:12:25.195126 ignition[1100]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 8 00:12:25.195264 ignition[1100]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 8 00:12:25.196210 ignition[1100]: PUT result: OK May 8 00:12:25.198782 ignition[1100]: disks: disks passed May 8 00:12:25.198854 ignition[1100]: Ignition finished successfully May 8 00:12:25.200571 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 8 00:12:25.201194 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 8 00:12:25.201550 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 8 00:12:25.202102 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 00:12:25.202625 systemd[1]: Reached target sysinit.target - System Initialization. 
May 8 00:12:25.203190 systemd[1]: Reached target basic.target - Basic System. May 8 00:12:25.215959 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 8 00:12:25.255931 systemd-fsck[1108]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 8 00:12:25.259041 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 8 00:12:25.265820 systemd[1]: Mounting sysroot.mount - /sysroot... May 8 00:12:25.361830 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 369e2962-701e-4244-8c1c-27f8fa83bc64 r/w with ordered data mode. Quota mode: none. May 8 00:12:25.362602 systemd[1]: Mounted sysroot.mount - /sysroot. May 8 00:12:25.363551 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 8 00:12:25.376790 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 8 00:12:25.379224 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 8 00:12:25.380016 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 8 00:12:25.380082 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 8 00:12:25.380112 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 8 00:12:25.385800 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 8 00:12:25.387495 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 8 00:12:25.401701 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1127) May 8 00:12:25.406700 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9 May 8 00:12:25.406768 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm May 8 00:12:25.406782 kernel: BTRFS info (device nvme0n1p6): using free space tree May 8 00:12:25.424688 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 8 00:12:25.426830 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 8 00:12:25.774130 initrd-setup-root[1152]: cut: /sysroot/etc/passwd: No such file or directory May 8 00:12:25.799101 initrd-setup-root[1159]: cut: /sysroot/etc/group: No such file or directory May 8 00:12:25.804278 initrd-setup-root[1166]: cut: /sysroot/etc/shadow: No such file or directory May 8 00:12:25.809528 initrd-setup-root[1173]: cut: /sysroot/etc/gshadow: No such file or directory May 8 00:12:26.127029 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 8 00:12:26.135895 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 8 00:12:26.140958 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 8 00:12:26.147565 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
May 8 00:12:26.149712 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9 May 8 00:12:26.174529 ignition[1240]: INFO : Ignition 2.20.0 May 8 00:12:26.174529 ignition[1240]: INFO : Stage: mount May 8 00:12:26.176467 ignition[1240]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:12:26.176467 ignition[1240]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 8 00:12:26.176467 ignition[1240]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 8 00:12:26.178748 ignition[1240]: INFO : PUT result: OK May 8 00:12:26.182733 ignition[1240]: INFO : mount: mount passed May 8 00:12:26.183411 ignition[1240]: INFO : Ignition finished successfully May 8 00:12:26.184994 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 8 00:12:26.193843 systemd[1]: Starting ignition-files.service - Ignition (files)... May 8 00:12:26.197017 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 8 00:12:26.210936 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 8 00:12:26.231702 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1253) May 8 00:12:26.235686 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9 May 8 00:12:26.235753 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm May 8 00:12:26.235767 kernel: BTRFS info (device nvme0n1p6): using free space tree May 8 00:12:26.242972 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 8 00:12:26.244721 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 8 00:12:26.270762 ignition[1269]: INFO : Ignition 2.20.0 May 8 00:12:26.270762 ignition[1269]: INFO : Stage: files May 8 00:12:26.272230 ignition[1269]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:12:26.272230 ignition[1269]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 8 00:12:26.272230 ignition[1269]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 8 00:12:26.273473 ignition[1269]: INFO : PUT result: OK May 8 00:12:26.275205 ignition[1269]: DEBUG : files: compiled without relabeling support, skipping May 8 00:12:26.289270 ignition[1269]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 8 00:12:26.289270 ignition[1269]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 8 00:12:26.325133 ignition[1269]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 8 00:12:26.325900 ignition[1269]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 8 00:12:26.325900 ignition[1269]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 8 00:12:26.325728 unknown[1269]: wrote ssh authorized keys file for user: core May 8 00:12:26.340680 ignition[1269]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 8 00:12:26.341718 ignition[1269]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 8 00:12:26.449211 ignition[1269]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 8 00:12:26.615789 ignition[1269]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 8 00:12:26.616844 ignition[1269]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 8 00:12:26.616844 ignition[1269]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 8 00:12:26.616844 ignition[1269]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 8 00:12:26.616844 ignition[1269]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 8 00:12:26.616844 ignition[1269]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 8 00:12:26.616844 ignition[1269]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 8 00:12:26.616844 ignition[1269]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 8 00:12:26.616844 ignition[1269]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 8 00:12:26.616844 ignition[1269]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:12:26.616844 ignition[1269]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:12:26.616844 ignition[1269]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 8 00:12:26.616844 ignition[1269]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 8 00:12:26.616844 ignition[1269]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 8 00:12:26.626454 ignition[1269]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 May 8 00:12:26.749854 systemd-networkd[1078]: eth0: Gained IPv6LL May 8 00:12:26.958145 ignition[1269]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 8 00:12:27.403364 ignition[1269]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 8 00:12:27.404645 ignition[1269]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 8 00:12:27.405372 ignition[1269]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 8 00:12:27.406160 ignition[1269]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 8 00:12:27.406160 ignition[1269]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 8 00:12:27.406160 ignition[1269]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" May 8 00:12:27.406160 ignition[1269]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" May 8 00:12:27.406160 ignition[1269]: INFO : files: createResultFile: 
createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" May 8 00:12:27.406160 ignition[1269]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" May 8 00:12:27.406160 ignition[1269]: INFO : files: files passed May 8 00:12:27.406160 ignition[1269]: INFO : Ignition finished successfully May 8 00:12:27.406967 systemd[1]: Finished ignition-files.service - Ignition (files). May 8 00:12:27.418910 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 8 00:12:27.421382 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 8 00:12:27.424700 systemd[1]: ignition-quench.service: Deactivated successfully. May 8 00:12:27.424817 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 8 00:12:27.434188 initrd-setup-root-after-ignition[1298]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:12:27.434188 initrd-setup-root-after-ignition[1298]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 8 00:12:27.436928 initrd-setup-root-after-ignition[1302]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:12:27.437690 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 00:12:27.438306 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 8 00:12:27.440883 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 8 00:12:27.466853 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 8 00:12:27.466960 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 8 00:12:27.468358 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 8 00:12:27.469094 systemd[1]: Reached target initrd.target - Initrd Default Target. May 8 00:12:27.469874 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 8 00:12:27.471069 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 8 00:12:27.487284 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 00:12:27.492879 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 8 00:12:27.502600 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 8 00:12:27.503199 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:12:27.504143 systemd[1]: Stopped target timers.target - Timer Units. May 8 00:12:27.504882 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 8 00:12:27.505002 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 00:12:27.505980 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 8 00:12:27.506769 systemd[1]: Stopped target basic.target - Basic System. May 8 00:12:27.507453 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 8 00:12:27.508288 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 8 00:12:27.508942 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 8 00:12:27.509619 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 8 00:12:27.510307 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
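The fetch stage earlier reported "parsing config with SHA512: ...", and the files stage above downloaded the helm tarball and the kubernetes sysext image over HTTPS. The log does not show how Ignition validates those specific payloads internally, but the general pattern of checking a download against an expected SHA512 digest looks like the following Python sketch; the function name and the caller-supplied digest are illustrative assumptions.

    import hashlib
    import urllib.request

    def download_verified(url: str, expected_sha512_hex: str, dest: str) -> None:
        # Stream the payload into a SHA512 hasher while writing it out,
        # then reject the file if the digest does not match.
        hasher = hashlib.sha512()
        with urllib.request.urlopen(url, timeout=30) as resp, open(dest, "wb") as out:
            while True:
                chunk = resp.read(1 << 16)
                if not chunk:
                    break
                hasher.update(chunk)
                out.write(chunk)
        if hasher.hexdigest() != expected_sha512_hex:
            raise ValueError(f"SHA512 mismatch for {url}")

Streaming the hash this way avoids holding multi-hundred-megabyte artifacts like the kubernetes image in memory before verification.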
May 8 00:12:27.511002 systemd[1]: Stopped target sysinit.target - System Initialization. May 8 00:12:27.511935 systemd[1]: Stopped target local-fs.target - Local File Systems. May 8 00:12:27.512795 systemd[1]: Stopped target swap.target - Swaps. May 8 00:12:27.513431 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 8 00:12:27.513555 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 8 00:12:27.514459 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 8 00:12:27.515178 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:12:27.515789 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 8 00:12:27.515886 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:12:27.516547 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 8 00:12:27.516679 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 8 00:12:27.517548 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 8 00:12:27.517675 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 00:12:27.518184 systemd[1]: ignition-files.service: Deactivated successfully. May 8 00:12:27.518276 systemd[1]: Stopped ignition-files.service - Ignition (files). May 8 00:12:27.525876 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 8 00:12:27.526874 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 8 00:12:27.527013 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:12:27.530897 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 8 00:12:27.531296 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 8 00:12:27.531426 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:12:27.531918 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 8 00:12:27.532029 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 8 00:12:27.537463 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 8 00:12:27.537547 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 8 00:12:27.541467 ignition[1322]: INFO : Ignition 2.20.0 May 8 00:12:27.541467 ignition[1322]: INFO : Stage: umount May 8 00:12:27.546038 ignition[1322]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:12:27.546038 ignition[1322]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 8 00:12:27.546038 ignition[1322]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 8 00:12:27.546038 ignition[1322]: INFO : PUT result: OK May 8 00:12:27.546038 ignition[1322]: INFO : umount: umount passed May 8 00:12:27.546038 ignition[1322]: INFO : Ignition finished successfully May 8 00:12:27.548517 systemd[1]: ignition-mount.service: Deactivated successfully. May 8 00:12:27.548608 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 8 00:12:27.549425 systemd[1]: ignition-disks.service: Deactivated successfully. May 8 00:12:27.549502 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 8 00:12:27.551059 systemd[1]: ignition-kargs.service: Deactivated successfully. May 8 00:12:27.551111 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 8 00:12:27.551407 systemd[1]: ignition-fetch.service: Deactivated successfully. 
May 8 00:12:27.551448 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 8 00:12:27.551827 systemd[1]: Stopped target network.target - Network. May 8 00:12:27.552889 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 8 00:12:27.552954 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 8 00:12:27.554700 systemd[1]: Stopped target paths.target - Path Units. May 8 00:12:27.554980 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 8 00:12:27.558733 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:12:27.559081 systemd[1]: Stopped target slices.target - Slice Units. May 8 00:12:27.559363 systemd[1]: Stopped target sockets.target - Socket Units. May 8 00:12:27.559688 systemd[1]: iscsid.socket: Deactivated successfully. May 8 00:12:27.559733 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 8 00:12:27.560145 systemd[1]: iscsiuio.socket: Deactivated successfully. May 8 00:12:27.560180 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 00:12:27.560460 systemd[1]: ignition-setup.service: Deactivated successfully. May 8 00:12:27.560511 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 8 00:12:27.561516 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 8 00:12:27.561563 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 8 00:12:27.562260 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 8 00:12:27.562698 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 8 00:12:27.565629 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 8 00:12:27.566242 systemd[1]: systemd-resolved.service: Deactivated successfully. May 8 00:12:27.566339 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 8 00:12:27.569534 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 8 00:12:27.570099 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 8 00:12:27.570173 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:12:27.573752 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 8 00:12:27.574071 systemd[1]: systemd-networkd.service: Deactivated successfully. May 8 00:12:27.574158 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 8 00:12:27.576188 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 8 00:12:27.576437 systemd[1]: sysroot-boot.service: Deactivated successfully. May 8 00:12:27.576529 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 8 00:12:27.577960 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 8 00:12:27.578030 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 8 00:12:27.578638 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 8 00:12:27.578728 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 8 00:12:27.586788 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 8 00:12:27.587191 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 8 00:12:27.587254 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
May 8 00:12:27.587681 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 00:12:27.587722 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 8 00:12:27.588246 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 8 00:12:27.588284 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 8 00:12:27.588941 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:12:27.592363 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 8 00:12:27.602936 systemd[1]: network-cleanup.service: Deactivated successfully. May 8 00:12:27.603072 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 8 00:12:27.611394 systemd[1]: systemd-udevd.service: Deactivated successfully. May 8 00:12:27.611539 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:12:27.612510 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 8 00:12:27.612552 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 8 00:12:27.613014 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 8 00:12:27.613048 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:12:27.614038 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 8 00:12:27.614088 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 8 00:12:27.615104 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 8 00:12:27.615145 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 8 00:12:27.616318 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 00:12:27.616367 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:12:27.622832 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 8 00:12:27.623276 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 8 00:12:27.623342 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:12:27.625465 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:12:27.625516 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:12:27.629847 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 8 00:12:27.629967 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 8 00:12:27.630977 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 8 00:12:27.634879 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 8 00:12:27.658010 systemd[1]: Switching root. May 8 00:12:27.697128 systemd-journald[179]: Journal stopped May 8 00:12:29.626537 systemd-journald[179]: Received SIGTERM from PID 1 (systemd). 
May 8 00:12:29.626609 kernel: SELinux: policy capability network_peer_controls=1 May 8 00:12:29.626624 kernel: SELinux: policy capability open_perms=1 May 8 00:12:29.626636 kernel: SELinux: policy capability extended_socket_class=1 May 8 00:12:29.626652 kernel: SELinux: policy capability always_check_network=0 May 8 00:12:29.626679 kernel: SELinux: policy capability cgroup_seclabel=1 May 8 00:12:29.626691 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 8 00:12:29.626707 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 8 00:12:29.626719 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 8 00:12:29.626734 kernel: audit: type=1403 audit(1746663148.177:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 8 00:12:29.626747 systemd[1]: Successfully loaded SELinux policy in 81.457ms. May 8 00:12:29.626768 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.676ms. May 8 00:12:29.626781 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 8 00:12:29.626794 systemd[1]: Detected virtualization amazon. May 8 00:12:29.626806 systemd[1]: Detected architecture x86-64. May 8 00:12:29.626819 systemd[1]: Detected first boot. May 8 00:12:29.626838 systemd[1]: Initializing machine ID from VM UUID. May 8 00:12:29.626850 kernel: Guest personality initialized and is inactive May 8 00:12:29.626862 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 8 00:12:29.626877 kernel: Initialized host personality May 8 00:12:29.626890 zram_generator::config[1368]: No configuration found. May 8 00:12:29.626904 kernel: NET: Registered PF_VSOCK protocol family May 8 00:12:29.626916 systemd[1]: Populated /etc with preset unit settings. May 8 00:12:29.626929 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 8 00:12:29.626943 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 8 00:12:29.626955 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 8 00:12:29.626967 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 8 00:12:29.626979 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 8 00:12:29.626992 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 8 00:12:29.627006 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 8 00:12:29.627018 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 8 00:12:29.627031 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 8 00:12:29.627043 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 8 00:12:29.627058 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 8 00:12:29.627071 systemd[1]: Created slice user.slice - User and Session Slice. May 8 00:12:29.627083 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:12:29.627095 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
May 8 00:12:29.627107 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 8 00:12:29.627121 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 8 00:12:29.627133 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 8 00:12:29.627146 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 8 00:12:29.627161 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 8 00:12:29.627173 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:12:29.627186 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 8 00:12:29.627198 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 8 00:12:29.627210 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 8 00:12:29.627222 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 8 00:12:29.627235 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:12:29.627247 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 00:12:29.627262 systemd[1]: Reached target slices.target - Slice Units. May 8 00:12:29.627275 systemd[1]: Reached target swap.target - Swaps. May 8 00:12:29.627287 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 8 00:12:29.627299 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 8 00:12:29.627311 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 8 00:12:29.627323 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 8 00:12:29.627335 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 8 00:12:29.627347 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:12:29.627358 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 8 00:12:29.627370 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 8 00:12:29.627385 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 8 00:12:29.627397 systemd[1]: Mounting media.mount - External Media Directory... May 8 00:12:29.627409 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:12:29.627421 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 8 00:12:29.627433 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 8 00:12:29.627445 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 8 00:12:29.627458 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 8 00:12:29.627470 systemd[1]: Reached target machines.target - Containers. May 8 00:12:29.627485 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 8 00:12:29.627498 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:12:29.627510 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 8 00:12:29.627521 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
May 8 00:12:29.627534 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:12:29.627546 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 00:12:29.627558 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:12:29.627570 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 8 00:12:29.627584 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:12:29.627597 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 8 00:12:29.627609 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 8 00:12:29.627622 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 8 00:12:29.627634 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 8 00:12:29.627646 systemd[1]: Stopped systemd-fsck-usr.service. May 8 00:12:29.631077 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 8 00:12:29.631125 systemd[1]: Starting systemd-journald.service - Journal Service... May 8 00:12:29.631138 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 8 00:12:29.631155 kernel: loop: module loaded May 8 00:12:29.631170 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 8 00:12:29.631182 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 8 00:12:29.631195 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 8 00:12:29.631207 kernel: fuse: init (API version 7.39) May 8 00:12:29.631219 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 00:12:29.631231 systemd[1]: verity-setup.service: Deactivated successfully. May 8 00:12:29.631243 systemd[1]: Stopped verity-setup.service. May 8 00:12:29.631256 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:12:29.631273 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 8 00:12:29.631285 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 8 00:12:29.631298 systemd[1]: Mounted media.mount - External Media Directory. May 8 00:12:29.631310 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 8 00:12:29.631324 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 8 00:12:29.631340 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 8 00:12:29.631352 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:12:29.631364 kernel: ACPI: bus type drm_connector registered May 8 00:12:29.631376 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 8 00:12:29.631389 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 8 00:12:29.631405 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:12:29.631417 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:12:29.631429 systemd[1]: modprobe@drm.service: Deactivated successfully. 
May 8 00:12:29.631442 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 00:12:29.631454 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:12:29.631467 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:12:29.640808 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 8 00:12:29.640848 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 8 00:12:29.640868 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:12:29.640882 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:12:29.640895 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 8 00:12:29.640908 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 8 00:12:29.640921 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 8 00:12:29.640933 systemd[1]: Reached target network-pre.target - Preparation for Network. May 8 00:12:29.640979 systemd-journald[1451]: Collecting audit messages is disabled. May 8 00:12:29.641005 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 8 00:12:29.641021 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 8 00:12:29.641033 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 8 00:12:29.641046 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 00:12:29.641059 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 8 00:12:29.641074 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 8 00:12:29.641086 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 8 00:12:29.641099 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:12:29.641112 systemd-journald[1451]: Journal started May 8 00:12:29.641137 systemd-journald[1451]: Runtime Journal (/run/log/journal/ec2a8ed03e11c9e6c8219d3f17220a30) is 4.7M, max 38.1M, 33.4M free. May 8 00:12:29.265586 systemd[1]: Queued start job for default target multi-user.target. May 8 00:12:29.279862 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. May 8 00:12:29.280440 systemd[1]: systemd-journald.service: Deactivated successfully. May 8 00:12:29.648183 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 8 00:12:29.648243 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:12:29.658556 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 8 00:12:29.658629 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 00:12:29.668712 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:12:29.674931 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 8 00:12:29.685987 systemd[1]: Started systemd-journald.service - Journal Service. May 8 00:12:29.689724 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
May 8 00:12:29.690537 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 8 00:12:29.691294 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:12:29.692748 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 8 00:12:29.693310 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 8 00:12:29.694101 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 8 00:12:29.694747 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 8 00:12:29.708300 kernel: loop0: detected capacity change from 0 to 138176 May 8 00:12:29.708948 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 8 00:12:29.712926 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 8 00:12:29.717406 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 8 00:12:29.721777 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 8 00:12:29.723298 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 8 00:12:29.724171 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 00:12:29.743782 systemd-journald[1451]: Time spent on flushing to /var/log/journal/ec2a8ed03e11c9e6c8219d3f17220a30 is 65.869ms for 1015 entries. May 8 00:12:29.743782 systemd-journald[1451]: System Journal (/var/log/journal/ec2a8ed03e11c9e6c8219d3f17220a30) is 8M, max 195.6M, 187.6M free. May 8 00:12:29.818314 systemd-journald[1451]: Received client request to flush runtime journal. May 8 00:12:29.787597 udevadm[1513]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 8 00:12:29.823407 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 8 00:12:29.846938 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 8 00:12:29.848134 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 8 00:12:29.864288 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 8 00:12:29.863360 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 8 00:12:29.889811 kernel: loop1: detected capacity change from 0 to 218376 May 8 00:12:29.911115 systemd-tmpfiles[1524]: ACLs are not supported, ignoring. May 8 00:12:29.911145 systemd-tmpfiles[1524]: ACLs are not supported, ignoring. May 8 00:12:29.919273 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:12:30.211873 kernel: loop2: detected capacity change from 0 to 147912 May 8 00:12:30.283041 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 8 00:12:30.357691 kernel: loop3: detected capacity change from 0 to 62832 May 8 00:12:30.476696 kernel: loop4: detected capacity change from 0 to 138176 May 8 00:12:30.513688 kernel: loop5: detected capacity change from 0 to 218376 May 8 00:12:30.583700 kernel: loop6: detected capacity change from 0 to 147912 May 8 00:12:30.596369 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 8 00:12:30.602944 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
May 8 00:12:30.636695 kernel: loop7: detected capacity change from 0 to 62832 May 8 00:12:30.643967 systemd-udevd[1532]: Using default interface naming scheme 'v255'. May 8 00:12:30.660281 (sd-merge)[1530]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. May 8 00:12:30.660894 (sd-merge)[1530]: Merged extensions into '/usr'. May 8 00:12:30.667187 systemd[1]: Reload requested from client PID 1484 ('systemd-sysext') (unit systemd-sysext.service)... May 8 00:12:30.667203 systemd[1]: Reloading... May 8 00:12:30.758687 zram_generator::config[1567]: No configuration found. May 8 00:12:30.813997 (udev-worker)[1575]: Network interface NamePolicy= disabled on kernel command line. May 8 00:12:30.931681 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 8 00:12:30.946685 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr May 8 00:12:30.971874 kernel: ACPI: button: Power Button [PWRF] May 8 00:12:30.971908 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 May 8 00:12:30.971932 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 May 8 00:12:30.976681 kernel: ACPI: button: Sleep Button [SLPF] May 8 00:12:31.019202 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:12:31.063723 kernel: mousedev: PS/2 mouse device common for all mice May 8 00:12:31.074822 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (1560) May 8 00:12:31.205137 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 8 00:12:31.206119 systemd[1]: Reloading finished in 538 ms. May 8 00:12:31.223534 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:12:31.226582 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 8 00:12:31.281459 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 8 00:12:31.295357 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. May 8 00:12:31.305262 systemd[1]: Starting ensure-sysext.service... May 8 00:12:31.308686 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 8 00:12:31.316980 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 8 00:12:31.321168 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 8 00:12:31.333897 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 8 00:12:31.342812 lvm[1721]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:12:31.337491 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:12:31.362976 systemd[1]: Reload requested from client PID 1720 ('systemctl') (unit ensure-sysext.service)... May 8 00:12:31.362993 systemd[1]: Reloading... May 8 00:12:31.373808 systemd-tmpfiles[1724]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 8 00:12:31.374082 systemd-tmpfiles[1724]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
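The sd-merge lines above show systemd-sysext picking up the 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-ami' extensions and merging them into /usr. The kubernetes extension is discovered through the /etc/extensions/kubernetes.raw symlink that the Ignition files stage wrote earlier, pointing at the downloaded image under /opt/extensions. A minimal Python sketch of staging such a link follows; the helper and its defaults are made up for the example, and the actual overlay merge is performed by systemd-sysext itself, not by this code.

    import os

    def stage_sysext(image_path: str, name: str, extensions_dir: str = "/etc/extensions") -> str:
        # Point <extensions_dir>/<name>.raw at the downloaded image so systemd-sysext
        # can pick it up on its next merge, mirroring the kubernetes.raw link in the log.
        os.makedirs(extensions_dir, exist_ok=True)
        link = os.path.join(extensions_dir, f"{name}.raw")
        if os.path.lexists(link):
            os.remove(link)
        os.symlink(image_path, link)
        return link

    # Example mirroring the paths seen in the log:
    # stage_sysext("/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw", "kubernetes")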
May 8 00:12:31.375372 systemd-tmpfiles[1724]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 8 00:12:31.375961 systemd-tmpfiles[1724]: ACLs are not supported, ignoring. May 8 00:12:31.376095 systemd-tmpfiles[1724]: ACLs are not supported, ignoring. May 8 00:12:31.383966 systemd-tmpfiles[1724]: Detected autofs mount point /boot during canonicalization of boot. May 8 00:12:31.383979 systemd-tmpfiles[1724]: Skipping /boot May 8 00:12:31.395501 systemd-tmpfiles[1724]: Detected autofs mount point /boot during canonicalization of boot. May 8 00:12:31.395516 systemd-tmpfiles[1724]: Skipping /boot May 8 00:12:31.419763 ldconfig[1480]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 8 00:12:31.442755 zram_generator::config[1757]: No configuration found. May 8 00:12:31.618169 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:12:31.723273 systemd[1]: Reloading finished in 359 ms. May 8 00:12:31.736415 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 8 00:12:31.750268 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 8 00:12:31.751111 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 8 00:12:31.751913 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:12:31.752740 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:12:31.762096 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 8 00:12:31.768095 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 8 00:12:31.774980 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 8 00:12:31.785868 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 8 00:12:31.791060 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 8 00:12:31.796342 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 8 00:12:31.800695 lvm[1822]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:12:31.801019 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 8 00:12:31.812085 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 8 00:12:31.821067 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:12:31.821399 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:12:31.829408 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:12:31.838848 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:12:31.844389 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:12:31.845311 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
May 8 00:12:31.845516 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 8 00:12:31.846385 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:12:31.849267 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:12:31.851826 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:12:31.854757 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:12:31.861604 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 8 00:12:31.864399 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:12:31.865262 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:12:31.874789 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:12:31.875879 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:12:31.876734 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 8 00:12:31.876914 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:12:31.880982 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:12:31.881712 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:12:31.896629 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:12:31.898227 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:12:31.915275 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:12:31.915772 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:12:31.927699 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:12:31.936081 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 00:12:31.942921 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:12:31.944675 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:12:31.944752 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 8 00:12:31.944859 systemd[1]: Reached target time-set.target - System Time Set. May 8 00:12:31.945918 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 8 00:12:31.947546 systemd[1]: Finished ensure-sysext.service. May 8 00:12:31.949803 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 8 00:12:31.951436 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:12:31.952725 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:12:31.954879 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:12:31.956285 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:12:31.957855 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:12:31.958101 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 00:12:31.959995 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 8 00:12:31.961937 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:12:31.962315 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:12:31.964089 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 8 00:12:31.988514 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:12:31.988611 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 00:12:31.999986 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 8 00:12:32.010331 augenrules[1871]: No rules May 8 00:12:32.012810 systemd[1]: audit-rules.service: Deactivated successfully. May 8 00:12:32.013752 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 8 00:12:32.023139 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 8 00:12:32.025003 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:12:32.032763 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 8 00:12:32.097379 systemd-networkd[1723]: lo: Link UP May 8 00:12:32.097393 systemd-networkd[1723]: lo: Gained carrier May 8 00:12:32.098784 systemd-resolved[1830]: Positive Trust Anchors: May 8 00:12:32.098798 systemd-resolved[1830]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:12:32.098865 systemd-resolved[1830]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 8 00:12:32.099505 systemd-networkd[1723]: Enumeration completed May 8 00:12:32.099812 systemd[1]: Started systemd-networkd.service - Network Configuration. May 8 00:12:32.099967 systemd-networkd[1723]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:12:32.099973 systemd-networkd[1723]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
May 8 00:12:32.103279 systemd-networkd[1723]: eth0: Link UP May 8 00:12:32.103464 systemd-networkd[1723]: eth0: Gained carrier May 8 00:12:32.103491 systemd-networkd[1723]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:12:32.107982 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 8 00:12:32.110712 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 8 00:12:32.114781 systemd-networkd[1723]: eth0: DHCPv4 address 172.31.16.158/20, gateway 172.31.16.1 acquired from 172.31.16.1 May 8 00:12:32.123650 systemd-resolved[1830]: Defaulting to hostname 'linux'. May 8 00:12:32.126104 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 8 00:12:32.126870 systemd[1]: Reached target network.target - Network. May 8 00:12:32.127445 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 8 00:12:32.128067 systemd[1]: Reached target sysinit.target - System Initialization. May 8 00:12:32.128723 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 8 00:12:32.129222 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 8 00:12:32.130057 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 8 00:12:32.130722 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 8 00:12:32.131076 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 8 00:12:32.131435 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 8 00:12:32.131475 systemd[1]: Reached target paths.target - Path Units. May 8 00:12:32.131856 systemd[1]: Reached target timers.target - Timer Units. May 8 00:12:32.133936 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 8 00:12:32.135888 systemd[1]: Starting docker.socket - Docker Socket for the API... May 8 00:12:32.139163 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 8 00:12:32.140276 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 8 00:12:32.140750 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 8 00:12:32.143476 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 8 00:12:32.144522 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 8 00:12:32.145881 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 8 00:12:32.146449 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 8 00:12:32.147508 systemd[1]: Reached target sockets.target - Socket Units. May 8 00:12:32.148052 systemd[1]: Reached target basic.target - Basic System. May 8 00:12:32.148509 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 8 00:12:32.148551 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 8 00:12:32.152808 systemd[1]: Starting containerd.service - containerd container runtime... 
May 8 00:12:32.155893 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 8 00:12:32.158866 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 8 00:12:32.163791 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 8 00:12:32.169207 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 8 00:12:32.171351 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 8 00:12:32.182575 jq[1888]: false May 8 00:12:32.179879 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 8 00:12:32.183856 systemd[1]: Started ntpd.service - Network Time Service. May 8 00:12:32.192849 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 8 00:12:32.200837 systemd[1]: Starting setup-oem.service - Setup OEM... May 8 00:12:32.207873 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 8 00:12:32.225605 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 8 00:12:32.237911 systemd[1]: Starting systemd-logind.service - User Login Management... May 8 00:12:32.240734 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 8 00:12:32.241489 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 8 00:12:32.247969 systemd[1]: Starting update-engine.service - Update Engine... May 8 00:12:32.255995 extend-filesystems[1889]: Found loop4 May 8 00:12:32.256896 extend-filesystems[1889]: Found loop5 May 8 00:12:32.256896 extend-filesystems[1889]: Found loop6 May 8 00:12:32.256896 extend-filesystems[1889]: Found loop7 May 8 00:12:32.256896 extend-filesystems[1889]: Found nvme0n1 May 8 00:12:32.256896 extend-filesystems[1889]: Found nvme0n1p1 May 8 00:12:32.256896 extend-filesystems[1889]: Found nvme0n1p2 May 8 00:12:32.256896 extend-filesystems[1889]: Found nvme0n1p3 May 8 00:12:32.256896 extend-filesystems[1889]: Found usr May 8 00:12:32.256896 extend-filesystems[1889]: Found nvme0n1p4 May 8 00:12:32.256896 extend-filesystems[1889]: Found nvme0n1p6 May 8 00:12:32.256896 extend-filesystems[1889]: Found nvme0n1p7 May 8 00:12:32.256896 extend-filesystems[1889]: Found nvme0n1p9 May 8 00:12:32.256896 extend-filesystems[1889]: Checking size of /dev/nvme0n1p9 May 8 00:12:32.271080 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 8 00:12:32.276648 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 8 00:12:32.277740 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 8 00:12:32.315014 update_engine[1900]: I20250508 00:12:32.314914 1900 main.cc:92] Flatcar Update Engine starting May 8 00:12:32.331783 extend-filesystems[1889]: Resized partition /dev/nvme0n1p9 May 8 00:12:32.334973 jq[1903]: true May 8 00:12:32.338615 extend-filesystems[1923]: resize2fs 1.47.1 (20-May-2024) May 8 00:12:32.339251 dbus-daemon[1887]: [system] SELinux support is enabled May 8 00:12:32.339468 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 8 00:12:32.345995 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
May 8 00:12:32.346280 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 8 00:12:32.355639 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 8 00:12:32.355729 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 8 00:12:32.358823 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 8 00:12:32.358852 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 8 00:12:32.372852 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks May 8 00:12:32.373796 dbus-daemon[1887]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1723 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 8 00:12:32.374289 systemd[1]: motdgen.service: Deactivated successfully. May 8 00:12:32.374580 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 8 00:12:32.385769 update_engine[1900]: I20250508 00:12:32.384939 1900 update_check_scheduler.cc:74] Next update check in 10m52s May 8 00:12:32.386344 (ntainerd)[1915]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 8 00:12:32.389611 ntpd[1891]: ntpd 4.2.8p17@1.4004-o Wed May 7 21:38:23 UTC 2025 (1): Starting May 8 00:12:32.393068 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... May 8 00:12:32.397700 ntpd[1891]: 8 May 00:12:32 ntpd[1891]: ntpd 4.2.8p17@1.4004-o Wed May 7 21:38:23 UTC 2025 (1): Starting May 8 00:12:32.397700 ntpd[1891]: 8 May 00:12:32 ntpd[1891]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp May 8 00:12:32.397700 ntpd[1891]: 8 May 00:12:32 ntpd[1891]: ---------------------------------------------------- May 8 00:12:32.397700 ntpd[1891]: 8 May 00:12:32 ntpd[1891]: ntp-4 is maintained by Network Time Foundation, May 8 00:12:32.397700 ntpd[1891]: 8 May 00:12:32 ntpd[1891]: Inc. (NTF), a non-profit 501(c)(3) public-benefit May 8 00:12:32.397700 ntpd[1891]: 8 May 00:12:32 ntpd[1891]: corporation. Support and training for ntp-4 are May 8 00:12:32.397700 ntpd[1891]: 8 May 00:12:32 ntpd[1891]: available at https://www.nwtime.org/support May 8 00:12:32.397700 ntpd[1891]: 8 May 00:12:32 ntpd[1891]: ---------------------------------------------------- May 8 00:12:32.397700 ntpd[1891]: 8 May 00:12:32 ntpd[1891]: proto: precision = 0.062 usec (-24) May 8 00:12:32.389655 ntpd[1891]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp May 8 00:12:32.393753 systemd[1]: Started update-engine.service - Update Engine. May 8 00:12:32.401191 ntpd[1891]: 8 May 00:12:32 ntpd[1891]: basedate set to 2025-04-25 May 8 00:12:32.401191 ntpd[1891]: 8 May 00:12:32 ntpd[1891]: gps base set to 2025-04-27 (week 2364) May 8 00:12:32.389689 ntpd[1891]: ---------------------------------------------------- May 8 00:12:32.397129 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 8 00:12:32.389699 ntpd[1891]: ntp-4 is maintained by Network Time Foundation, May 8 00:12:32.389709 ntpd[1891]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit May 8 00:12:32.389719 ntpd[1891]: corporation. Support and training for ntp-4 are May 8 00:12:32.389728 ntpd[1891]: available at https://www.nwtime.org/support May 8 00:12:32.389738 ntpd[1891]: ---------------------------------------------------- May 8 00:12:32.396441 ntpd[1891]: proto: precision = 0.062 usec (-24) May 8 00:12:32.400184 ntpd[1891]: basedate set to 2025-04-25 May 8 00:12:32.400205 ntpd[1891]: gps base set to 2025-04-27 (week 2364) May 8 00:12:32.408378 ntpd[1891]: Listen and drop on 0 v6wildcard [::]:123 May 8 00:12:32.412913 ntpd[1891]: 8 May 00:12:32 ntpd[1891]: Listen and drop on 0 v6wildcard [::]:123 May 8 00:12:32.412913 ntpd[1891]: 8 May 00:12:32 ntpd[1891]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 8 00:12:32.412913 ntpd[1891]: 8 May 00:12:32 ntpd[1891]: Listen normally on 2 lo 127.0.0.1:123 May 8 00:12:32.412913 ntpd[1891]: 8 May 00:12:32 ntpd[1891]: Listen normally on 3 eth0 172.31.16.158:123 May 8 00:12:32.412913 ntpd[1891]: 8 May 00:12:32 ntpd[1891]: Listen normally on 4 lo [::1]:123 May 8 00:12:32.412913 ntpd[1891]: 8 May 00:12:32 ntpd[1891]: bind(21) AF_INET6 fe80::4bd:5dff:fef5:798b%2#123 flags 0x11 failed: Cannot assign requested address May 8 00:12:32.412913 ntpd[1891]: 8 May 00:12:32 ntpd[1891]: unable to create socket on eth0 (5) for fe80::4bd:5dff:fef5:798b%2#123 May 8 00:12:32.412913 ntpd[1891]: 8 May 00:12:32 ntpd[1891]: failed to init interface for address fe80::4bd:5dff:fef5:798b%2 May 8 00:12:32.412913 ntpd[1891]: 8 May 00:12:32 ntpd[1891]: Listening on routing socket on fd #21 for interface updates May 8 00:12:32.408443 ntpd[1891]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 8 00:12:32.408638 ntpd[1891]: Listen normally on 2 lo 127.0.0.1:123 May 8 00:12:32.411729 ntpd[1891]: Listen normally on 3 eth0 172.31.16.158:123 May 8 00:12:32.411798 ntpd[1891]: Listen normally on 4 lo [::1]:123 May 8 00:12:32.411854 ntpd[1891]: bind(21) AF_INET6 fe80::4bd:5dff:fef5:798b%2#123 flags 0x11 failed: Cannot assign requested address May 8 00:12:32.411875 ntpd[1891]: unable to create socket on eth0 (5) for fe80::4bd:5dff:fef5:798b%2#123 May 8 00:12:32.411892 ntpd[1891]: failed to init interface for address fe80::4bd:5dff:fef5:798b%2 May 8 00:12:32.411928 ntpd[1891]: Listening on routing socket on fd #21 for interface updates May 8 00:12:32.420941 ntpd[1891]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 8 00:12:32.424142 ntpd[1891]: 8 May 00:12:32 ntpd[1891]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 8 00:12:32.424142 ntpd[1891]: 8 May 00:12:32 ntpd[1891]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 8 00:12:32.420982 ntpd[1891]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 8 00:12:32.427708 tar[1905]: linux-amd64/LICENSE May 8 00:12:32.429699 tar[1905]: linux-amd64/helm May 8 00:12:32.447689 jq[1925]: true May 8 00:12:32.475054 systemd[1]: Finished setup-oem.service - Setup OEM. May 8 00:12:32.494126 systemd-logind[1899]: Watching system buttons on /dev/input/event1 (Power Button) May 8 00:12:32.504874 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 May 8 00:12:32.495748 systemd-logind[1899]: Watching system buttons on /dev/input/event3 (Sleep Button) May 8 00:12:32.495777 systemd-logind[1899]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 8 00:12:32.496167 systemd-logind[1899]: New seat seat0. 
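ntpd above binds UDP port 123 on loopback and on eth0's IPv4 address (the IPv6 link-local bind only succeeds later, once the address becomes usable). As an illustration of the protocol it serves, a minimal SNTP-style client query in Go is sketched below; this is not part of the boot sequence, and a freshly started, unsynchronized ntpd may decline to provide useful time yet.

// ntp_query.go: send one client-mode NTP request and print the server's transmit time.
package main

import (
	"encoding/binary"
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.Dial("udp", "127.0.0.1:123") // the local ntpd from the log
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	conn.SetDeadline(time.Now().Add(2 * time.Second))

	// 48-byte request; first byte packs LI=0, VN=4, Mode=3 (client) -> 0x23.
	req := make([]byte, 48)
	req[0] = 0x23
	if _, err := conn.Write(req); err != nil {
		panic(err)
	}

	resp := make([]byte, 48)
	if _, err := conn.Read(resp); err != nil {
		panic(err)
	}

	// Transmit timestamp seconds live at offset 40, counted from 1900-01-01.
	secs := binary.BigEndian.Uint32(resp[40:44])
	const ntpToUnix = 2208988800 // seconds between the 1900 and 1970 epochs
	fmt.Println("server time:", time.Unix(int64(secs)-ntpToUnix, 0).UTC())
}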
May 8 00:12:32.522547 extend-filesystems[1923]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required May 8 00:12:32.522547 extend-filesystems[1923]: old_desc_blocks = 1, new_desc_blocks = 1 May 8 00:12:32.522547 extend-filesystems[1923]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. May 8 00:12:32.530128 systemd[1]: extend-filesystems.service: Deactivated successfully. May 8 00:12:32.643919 coreos-metadata[1886]: May 08 00:12:32.597 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 8 00:12:32.643919 coreos-metadata[1886]: May 08 00:12:32.599 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 May 8 00:12:32.643919 coreos-metadata[1886]: May 08 00:12:32.601 INFO Fetch successful May 8 00:12:32.643919 coreos-metadata[1886]: May 08 00:12:32.602 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 May 8 00:12:32.643919 coreos-metadata[1886]: May 08 00:12:32.605 INFO Fetch successful May 8 00:12:32.643919 coreos-metadata[1886]: May 08 00:12:32.605 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 May 8 00:12:32.643919 coreos-metadata[1886]: May 08 00:12:32.606 INFO Fetch successful May 8 00:12:32.643919 coreos-metadata[1886]: May 08 00:12:32.607 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 May 8 00:12:32.643919 coreos-metadata[1886]: May 08 00:12:32.609 INFO Fetch successful May 8 00:12:32.643919 coreos-metadata[1886]: May 08 00:12:32.610 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 May 8 00:12:32.643919 coreos-metadata[1886]: May 08 00:12:32.611 INFO Fetch failed with 404: resource not found May 8 00:12:32.643919 coreos-metadata[1886]: May 08 00:12:32.611 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 May 8 00:12:32.643919 coreos-metadata[1886]: May 08 00:12:32.612 INFO Fetch successful May 8 00:12:32.643919 coreos-metadata[1886]: May 08 00:12:32.612 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 May 8 00:12:32.643919 coreos-metadata[1886]: May 08 00:12:32.614 INFO Fetch successful May 8 00:12:32.643919 coreos-metadata[1886]: May 08 00:12:32.614 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 May 8 00:12:32.643919 coreos-metadata[1886]: May 08 00:12:32.617 INFO Fetch successful May 8 00:12:32.643919 coreos-metadata[1886]: May 08 00:12:32.617 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 May 8 00:12:32.643919 coreos-metadata[1886]: May 08 00:12:32.617 INFO Fetch successful May 8 00:12:32.643919 coreos-metadata[1886]: May 08 00:12:32.617 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 May 8 00:12:32.643919 coreos-metadata[1886]: May 08 00:12:32.619 INFO Fetch successful May 8 00:12:32.654607 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (1561) May 8 00:12:32.654648 extend-filesystems[1889]: Resized filesystem in /dev/nvme0n1p9 May 8 00:12:32.532033 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 8 00:12:32.644871 systemd[1]: Started systemd-logind.service - User Login Management. May 8 00:12:32.715869 bash[1970]: Updated "/home/core/.ssh/authorized_keys" May 8 00:12:32.718457 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
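The coreos-metadata fetches above follow the IMDSv2 pattern: PUT a session token, then GET metadata paths with the token attached as a header. The sketch below is a generic illustration rather than the coreos-metadata implementation, and it uses the stable /latest/ paths instead of the dated API version shown in the log.

// imds_sketch.go: token-then-fetch against the EC2 instance metadata service.
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

const imds = "http://169.254.169.254"

func main() {
	client := &http.Client{Timeout: 2 * time.Second}

	// Step 1: obtain an IMDSv2 session token.
	req, _ := http.NewRequest(http.MethodPut, imds+"/latest/api/token", strings.NewReader(""))
	req.Header.Set("X-aws-ec2-metadata-token-ttl-seconds", "21600")
	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	token, _ := io.ReadAll(resp.Body)
	resp.Body.Close()

	// Step 2: fetch a metadata item with the token attached.
	req, _ = http.NewRequest(http.MethodGet, imds+"/latest/meta-data/instance-id", nil)
	req.Header.Set("X-aws-ec2-metadata-token", string(token))
	resp, err = client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	id, _ := io.ReadAll(resp.Body)
	fmt.Println("instance-id:", string(id))
}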
May 8 00:12:32.739951 systemd[1]: Starting sshkeys.service... May 8 00:12:32.776291 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 8 00:12:32.779332 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 8 00:12:32.825504 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 8 00:12:32.834300 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 8 00:12:32.850953 dbus-daemon[1887]: [system] Successfully activated service 'org.freedesktop.hostname1' May 8 00:12:32.851625 dbus-daemon[1887]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1933 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 8 00:12:32.852329 systemd[1]: Started systemd-hostnamed.service - Hostname Service. May 8 00:12:32.872134 systemd[1]: Starting polkit.service - Authorization Manager... May 8 00:12:32.946898 polkitd[2022]: Started polkitd version 121 May 8 00:12:32.955971 locksmithd[1935]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 8 00:12:33.014219 coreos-metadata[2007]: May 08 00:12:33.012 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 8 00:12:33.014219 coreos-metadata[2007]: May 08 00:12:33.013 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 May 8 00:12:33.014219 coreos-metadata[2007]: May 08 00:12:33.013 INFO Fetch successful May 8 00:12:33.014219 coreos-metadata[2007]: May 08 00:12:33.013 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 May 8 00:12:33.014771 coreos-metadata[2007]: May 08 00:12:33.014 INFO Fetch successful May 8 00:12:33.021229 unknown[2007]: wrote ssh authorized keys file for user: core May 8 00:12:33.024453 polkitd[2022]: Loading rules from directory /etc/polkit-1/rules.d May 8 00:12:33.028109 polkitd[2022]: Loading rules from directory /usr/share/polkit-1/rules.d May 8 00:12:33.040715 polkitd[2022]: Finished loading, compiling and executing 2 rules May 8 00:12:33.050150 dbus-daemon[1887]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 8 00:12:33.057112 polkitd[2022]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 8 00:12:33.058534 systemd[1]: Started polkit.service - Authorization Manager. May 8 00:12:33.107688 update-ssh-keys[2075]: Updated "/home/core/.ssh/authorized_keys" May 8 00:12:33.100652 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 8 00:12:33.104993 systemd[1]: Finished sshkeys.service. May 8 00:12:33.136173 systemd-hostnamed[1933]: Hostname set to (transient) May 8 00:12:33.137021 systemd-resolved[1830]: System hostname changed to 'ip-172-31-16-158'. May 8 00:12:33.151108 systemd-networkd[1723]: eth0: Gained IPv6LL May 8 00:12:33.160328 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 8 00:12:33.161593 systemd[1]: Reached target network-online.target - Network is Online. May 8 00:12:33.172265 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. May 8 00:12:33.186971 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:12:33.198464 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
May 8 00:12:33.268373 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 8 00:12:33.340958 amazon-ssm-agent[2087]: Initializing new seelog logger May 8 00:12:33.341343 amazon-ssm-agent[2087]: New Seelog Logger Creation Complete May 8 00:12:33.341491 amazon-ssm-agent[2087]: 2025/05/08 00:12:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 8 00:12:33.341542 amazon-ssm-agent[2087]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 8 00:12:33.342092 amazon-ssm-agent[2087]: 2025/05/08 00:12:33 processing appconfig overrides May 8 00:12:33.343984 amazon-ssm-agent[2087]: 2025/05/08 00:12:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 8 00:12:33.344073 amazon-ssm-agent[2087]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 8 00:12:33.344233 amazon-ssm-agent[2087]: 2025/05/08 00:12:33 processing appconfig overrides May 8 00:12:33.345910 amazon-ssm-agent[2087]: 2025/05/08 00:12:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 8 00:12:33.346105 amazon-ssm-agent[2087]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 8 00:12:33.346243 amazon-ssm-agent[2087]: 2025/05/08 00:12:33 processing appconfig overrides May 8 00:12:33.347958 amazon-ssm-agent[2087]: 2025-05-08 00:12:33 INFO Proxy environment variables: May 8 00:12:33.350410 containerd[1915]: time="2025-05-08T00:12:33.350166653Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 8 00:12:33.354210 amazon-ssm-agent[2087]: 2025/05/08 00:12:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 8 00:12:33.354210 amazon-ssm-agent[2087]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 8 00:12:33.354210 amazon-ssm-agent[2087]: 2025/05/08 00:12:33 processing appconfig overrides May 8 00:12:33.446687 amazon-ssm-agent[2087]: 2025-05-08 00:12:33 INFO https_proxy: May 8 00:12:33.474856 containerd[1915]: time="2025-05-08T00:12:33.474695342Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 8 00:12:33.480097 containerd[1915]: time="2025-05-08T00:12:33.480048194Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 8 00:12:33.484224 containerd[1915]: time="2025-05-08T00:12:33.482692476Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 8 00:12:33.484224 containerd[1915]: time="2025-05-08T00:12:33.482737385Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 8 00:12:33.484224 containerd[1915]: time="2025-05-08T00:12:33.482951203Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 8 00:12:33.484224 containerd[1915]: time="2025-05-08T00:12:33.482978998Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 8 00:12:33.484224 containerd[1915]: time="2025-05-08T00:12:33.483058615Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:12:33.484224 containerd[1915]: time="2025-05-08T00:12:33.483078789Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 8 00:12:33.484224 containerd[1915]: time="2025-05-08T00:12:33.483373909Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:12:33.484224 containerd[1915]: time="2025-05-08T00:12:33.483398604Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 8 00:12:33.484224 containerd[1915]: time="2025-05-08T00:12:33.483419799Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:12:33.484224 containerd[1915]: time="2025-05-08T00:12:33.483434609Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 8 00:12:33.484224 containerd[1915]: time="2025-05-08T00:12:33.483539421Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 8 00:12:33.484224 containerd[1915]: time="2025-05-08T00:12:33.483816805Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 8 00:12:33.484724 containerd[1915]: time="2025-05-08T00:12:33.484021816Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:12:33.484724 containerd[1915]: time="2025-05-08T00:12:33.484040355Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 8 00:12:33.484724 containerd[1915]: time="2025-05-08T00:12:33.484126837Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 8 00:12:33.484724 containerd[1915]: time="2025-05-08T00:12:33.484180712Z" level=info msg="metadata content store policy set" policy=shared May 8 00:12:33.497862 containerd[1915]: time="2025-05-08T00:12:33.496327145Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 8 00:12:33.497862 containerd[1915]: time="2025-05-08T00:12:33.496413030Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 8 00:12:33.497862 containerd[1915]: time="2025-05-08T00:12:33.496437212Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 8 00:12:33.497862 containerd[1915]: time="2025-05-08T00:12:33.496507169Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 8 00:12:33.497862 containerd[1915]: time="2025-05-08T00:12:33.496528861Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 8 00:12:33.497862 containerd[1915]: time="2025-05-08T00:12:33.496728012Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 May 8 00:12:33.497862 containerd[1915]: time="2025-05-08T00:12:33.497151020Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 8 00:12:33.497862 containerd[1915]: time="2025-05-08T00:12:33.497260348Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 8 00:12:33.497862 containerd[1915]: time="2025-05-08T00:12:33.497280092Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 8 00:12:33.497862 containerd[1915]: time="2025-05-08T00:12:33.497300000Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 8 00:12:33.497862 containerd[1915]: time="2025-05-08T00:12:33.497319686Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 8 00:12:33.497862 containerd[1915]: time="2025-05-08T00:12:33.497341404Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 8 00:12:33.497862 containerd[1915]: time="2025-05-08T00:12:33.497359386Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 8 00:12:33.497862 containerd[1915]: time="2025-05-08T00:12:33.497379680Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 8 00:12:33.498479 containerd[1915]: time="2025-05-08T00:12:33.497400609Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 8 00:12:33.498479 containerd[1915]: time="2025-05-08T00:12:33.497420143Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 8 00:12:33.498479 containerd[1915]: time="2025-05-08T00:12:33.497437353Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 8 00:12:33.498479 containerd[1915]: time="2025-05-08T00:12:33.497454597Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 8 00:12:33.498479 containerd[1915]: time="2025-05-08T00:12:33.497481332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 8 00:12:33.498479 containerd[1915]: time="2025-05-08T00:12:33.497500601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 8 00:12:33.498479 containerd[1915]: time="2025-05-08T00:12:33.497518197Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 8 00:12:33.498479 containerd[1915]: time="2025-05-08T00:12:33.497546379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 8 00:12:33.498479 containerd[1915]: time="2025-05-08T00:12:33.497563616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 8 00:12:33.498479 containerd[1915]: time="2025-05-08T00:12:33.497583464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 8 00:12:33.498479 containerd[1915]: time="2025-05-08T00:12:33.497601222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 May 8 00:12:33.498479 containerd[1915]: time="2025-05-08T00:12:33.497619797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 8 00:12:33.498479 containerd[1915]: time="2025-05-08T00:12:33.497637409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 8 00:12:33.498479 containerd[1915]: time="2025-05-08T00:12:33.497657002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 8 00:12:33.498988 containerd[1915]: time="2025-05-08T00:12:33.497685134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 8 00:12:33.498988 containerd[1915]: time="2025-05-08T00:12:33.497703894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 8 00:12:33.498988 containerd[1915]: time="2025-05-08T00:12:33.497722555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 8 00:12:33.498988 containerd[1915]: time="2025-05-08T00:12:33.497741852Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 8 00:12:33.498988 containerd[1915]: time="2025-05-08T00:12:33.497768764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 8 00:12:33.498988 containerd[1915]: time="2025-05-08T00:12:33.497786954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 8 00:12:33.498988 containerd[1915]: time="2025-05-08T00:12:33.497801757Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 8 00:12:33.504533 containerd[1915]: time="2025-05-08T00:12:33.500709767Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 8 00:12:33.504533 containerd[1915]: time="2025-05-08T00:12:33.500835315Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 8 00:12:33.504533 containerd[1915]: time="2025-05-08T00:12:33.500856896Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 8 00:12:33.504533 containerd[1915]: time="2025-05-08T00:12:33.500877228Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 8 00:12:33.504533 containerd[1915]: time="2025-05-08T00:12:33.500891423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 8 00:12:33.504533 containerd[1915]: time="2025-05-08T00:12:33.500911980Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 8 00:12:33.504533 containerd[1915]: time="2025-05-08T00:12:33.500926573Z" level=info msg="NRI interface is disabled by configuration." May 8 00:12:33.504533 containerd[1915]: time="2025-05-08T00:12:33.500942897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 8 00:12:33.504931 containerd[1915]: time="2025-05-08T00:12:33.501340620Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 8 00:12:33.504931 containerd[1915]: time="2025-05-08T00:12:33.501407081Z" level=info msg="Connect containerd service" May 8 00:12:33.504931 containerd[1915]: time="2025-05-08T00:12:33.501451306Z" level=info msg="using legacy CRI server" May 8 00:12:33.504931 containerd[1915]: time="2025-05-08T00:12:33.501460354Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 8 00:12:33.504931 containerd[1915]: time="2025-05-08T00:12:33.501612100Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 8 00:12:33.504931 containerd[1915]: time="2025-05-08T00:12:33.504480749Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:12:33.509685 
containerd[1915]: time="2025-05-08T00:12:33.507776268Z" level=info msg="Start subscribing containerd event" May 8 00:12:33.509685 containerd[1915]: time="2025-05-08T00:12:33.507848275Z" level=info msg="Start recovering state" May 8 00:12:33.509685 containerd[1915]: time="2025-05-08T00:12:33.507944647Z" level=info msg="Start event monitor" May 8 00:12:33.509685 containerd[1915]: time="2025-05-08T00:12:33.507969118Z" level=info msg="Start snapshots syncer" May 8 00:12:33.509685 containerd[1915]: time="2025-05-08T00:12:33.507982256Z" level=info msg="Start cni network conf syncer for default" May 8 00:12:33.509685 containerd[1915]: time="2025-05-08T00:12:33.507993549Z" level=info msg="Start streaming server" May 8 00:12:33.509685 containerd[1915]: time="2025-05-08T00:12:33.508503247Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 8 00:12:33.509685 containerd[1915]: time="2025-05-08T00:12:33.508562113Z" level=info msg=serving... address=/run/containerd/containerd.sock May 8 00:12:33.509685 containerd[1915]: time="2025-05-08T00:12:33.508627653Z" level=info msg="containerd successfully booted in 0.160431s" May 8 00:12:33.508755 systemd[1]: Started containerd.service - containerd container runtime. May 8 00:12:33.549686 amazon-ssm-agent[2087]: 2025-05-08 00:12:33 INFO http_proxy: May 8 00:12:33.608590 sshd_keygen[1932]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 8 00:12:33.647679 amazon-ssm-agent[2087]: 2025-05-08 00:12:33 INFO no_proxy: May 8 00:12:33.672883 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 8 00:12:33.684000 systemd[1]: Starting issuegen.service - Generate /run/issue... May 8 00:12:33.693136 systemd[1]: issuegen.service: Deactivated successfully. May 8 00:12:33.694087 systemd[1]: Finished issuegen.service - Generate /run/issue. May 8 00:12:33.707121 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 8 00:12:33.728605 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 8 00:12:33.730262 amazon-ssm-agent[2087]: 2025-05-08 00:12:33 INFO Checking if agent identity type OnPrem can be assumed May 8 00:12:33.730346 amazon-ssm-agent[2087]: 2025-05-08 00:12:33 INFO Checking if agent identity type EC2 can be assumed May 8 00:12:33.730346 amazon-ssm-agent[2087]: 2025-05-08 00:12:33 INFO Agent will take identity from EC2 May 8 00:12:33.730346 amazon-ssm-agent[2087]: 2025-05-08 00:12:33 INFO [amazon-ssm-agent] using named pipe channel for IPC May 8 00:12:33.730346 amazon-ssm-agent[2087]: 2025-05-08 00:12:33 INFO [amazon-ssm-agent] using named pipe channel for IPC May 8 00:12:33.730346 amazon-ssm-agent[2087]: 2025-05-08 00:12:33 INFO [amazon-ssm-agent] using named pipe channel for IPC May 8 00:12:33.730346 amazon-ssm-agent[2087]: 2025-05-08 00:12:33 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 May 8 00:12:33.730346 amazon-ssm-agent[2087]: 2025-05-08 00:12:33 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 May 8 00:12:33.730346 amazon-ssm-agent[2087]: 2025-05-08 00:12:33 INFO [amazon-ssm-agent] Starting Core Agent May 8 00:12:33.730346 amazon-ssm-agent[2087]: 2025-05-08 00:12:33 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration May 8 00:12:33.730346 amazon-ssm-agent[2087]: 2025-05-08 00:12:33 INFO [Registrar] Starting registrar module May 8 00:12:33.731114 amazon-ssm-agent[2087]: 2025-05-08 00:12:33 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration May 8 00:12:33.731114 amazon-ssm-agent[2087]: 2025-05-08 00:12:33 INFO [EC2Identity] EC2 registration was successful. May 8 00:12:33.731114 amazon-ssm-agent[2087]: 2025-05-08 00:12:33 INFO [CredentialRefresher] credentialRefresher has started May 8 00:12:33.731114 amazon-ssm-agent[2087]: 2025-05-08 00:12:33 INFO [CredentialRefresher] Starting credentials refresher loop May 8 00:12:33.731114 amazon-ssm-agent[2087]: 2025-05-08 00:12:33 INFO EC2RoleProvider Successfully connected with instance profile role credentials May 8 00:12:33.740980 systemd[1]: Started getty@tty1.service - Getty on tty1. May 8 00:12:33.745333 amazon-ssm-agent[2087]: 2025-05-08 00:12:33 INFO [CredentialRefresher] Next credential rotation will be in 32.27499301531667 minutes May 8 00:12:33.750940 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 8 00:12:33.751849 systemd[1]: Reached target getty.target - Login Prompts. May 8 00:12:33.913103 tar[1905]: linux-amd64/README.md May 8 00:12:33.923833 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 8 00:12:34.741744 amazon-ssm-agent[2087]: 2025-05-08 00:12:34 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process May 8 00:12:34.842507 amazon-ssm-agent[2087]: 2025-05-08 00:12:34 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2130) started May 8 00:12:34.943085 amazon-ssm-agent[2087]: 2025-05-08 00:12:34 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds May 8 00:12:35.194627 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 8 00:12:35.200394 systemd[1]: Started sshd@0-172.31.16.158:22-139.178.68.195:49144.service - OpenSSH per-connection server daemon (139.178.68.195:49144). May 8 00:12:35.390084 ntpd[1891]: Listen normally on 6 eth0 [fe80::4bd:5dff:fef5:798b%2]:123 May 8 00:12:35.390480 ntpd[1891]: 8 May 00:12:35 ntpd[1891]: Listen normally on 6 eth0 [fe80::4bd:5dff:fef5:798b%2]:123 May 8 00:12:35.435056 sshd[2142]: Accepted publickey for core from 139.178.68.195 port 49144 ssh2: RSA SHA256:KzzWn6O+Z3VZj7W5xu29TBqYrCKq78VLDb+pogeWJHY May 8 00:12:35.437117 sshd-session[2142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:12:35.443617 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 8 00:12:35.448919 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 8 00:12:35.458474 systemd-logind[1899]: New session 1 of user core. May 8 00:12:35.465587 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 8 00:12:35.473076 systemd[1]: Starting user@500.service - User Manager for UID 500... May 8 00:12:35.477202 (systemd)[2146]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 8 00:12:35.480188 systemd-logind[1899]: New session c1 of user core. May 8 00:12:35.631395 systemd[2146]: Queued start job for default target default.target. May 8 00:12:35.636228 systemd[2146]: Created slice app.slice - User Application Slice. 
May 8 00:12:35.636262 systemd[2146]: Reached target paths.target - Paths. May 8 00:12:35.636332 systemd[2146]: Reached target timers.target - Timers. May 8 00:12:35.638810 systemd[2146]: Starting dbus.socket - D-Bus User Message Bus Socket... May 8 00:12:35.650161 systemd[2146]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 8 00:12:35.650291 systemd[2146]: Reached target sockets.target - Sockets. May 8 00:12:35.650339 systemd[2146]: Reached target basic.target - Basic System. May 8 00:12:35.650377 systemd[2146]: Reached target default.target - Main User Target. May 8 00:12:35.650414 systemd[2146]: Startup finished in 163ms. May 8 00:12:35.650469 systemd[1]: Started user@500.service - User Manager for UID 500. May 8 00:12:35.659053 systemd[1]: Started session-1.scope - Session 1 of User core. May 8 00:12:35.745810 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:12:35.746888 systemd[1]: Reached target multi-user.target - Multi-User System. May 8 00:12:35.752361 systemd[1]: Startup finished in 595ms (kernel) + 6.442s (initrd) + 7.655s (userspace) = 14.692s. May 8 00:12:35.753945 (kubelet)[2160]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:12:35.810045 systemd[1]: Started sshd@1-172.31.16.158:22-139.178.68.195:49158.service - OpenSSH per-connection server daemon (139.178.68.195:49158). May 8 00:12:35.976601 sshd[2167]: Accepted publickey for core from 139.178.68.195 port 49158 ssh2: RSA SHA256:KzzWn6O+Z3VZj7W5xu29TBqYrCKq78VLDb+pogeWJHY May 8 00:12:35.978062 sshd-session[2167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:12:35.983481 systemd-logind[1899]: New session 2 of user core. May 8 00:12:35.988850 systemd[1]: Started session-2.scope - Session 2 of User core. May 8 00:12:36.112845 sshd[2169]: Connection closed by 139.178.68.195 port 49158 May 8 00:12:36.113576 sshd-session[2167]: pam_unix(sshd:session): session closed for user core May 8 00:12:36.116706 systemd[1]: sshd@1-172.31.16.158:22-139.178.68.195:49158.service: Deactivated successfully. May 8 00:12:36.118534 systemd[1]: session-2.scope: Deactivated successfully. May 8 00:12:36.120512 systemd-logind[1899]: Session 2 logged out. Waiting for processes to exit. May 8 00:12:36.121822 systemd-logind[1899]: Removed session 2. May 8 00:12:36.143819 systemd[1]: Started sshd@2-172.31.16.158:22-139.178.68.195:49162.service - OpenSSH per-connection server daemon (139.178.68.195:49162). May 8 00:12:36.309812 sshd[2175]: Accepted publickey for core from 139.178.68.195 port 49162 ssh2: RSA SHA256:KzzWn6O+Z3VZj7W5xu29TBqYrCKq78VLDb+pogeWJHY May 8 00:12:36.311195 sshd-session[2175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:12:36.316456 systemd-logind[1899]: New session 3 of user core. May 8 00:12:36.324961 systemd[1]: Started session-3.scope - Session 3 of User core. May 8 00:12:36.441186 sshd[2181]: Connection closed by 139.178.68.195 port 49162 May 8 00:12:36.441767 sshd-session[2175]: pam_unix(sshd:session): session closed for user core May 8 00:12:36.445317 systemd[1]: sshd@2-172.31.16.158:22-139.178.68.195:49162.service: Deactivated successfully. May 8 00:12:36.447069 systemd[1]: session-3.scope: Deactivated successfully. May 8 00:12:36.448601 systemd-logind[1899]: Session 3 logged out. Waiting for processes to exit. May 8 00:12:36.449681 systemd-logind[1899]: Removed session 3. 
May 8 00:12:36.474956 systemd[1]: Started sshd@3-172.31.16.158:22-139.178.68.195:49168.service - OpenSSH per-connection server daemon (139.178.68.195:49168). May 8 00:12:36.634893 sshd[2187]: Accepted publickey for core from 139.178.68.195 port 49168 ssh2: RSA SHA256:KzzWn6O+Z3VZj7W5xu29TBqYrCKq78VLDb+pogeWJHY May 8 00:12:36.636301 sshd-session[2187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:12:36.641917 systemd-logind[1899]: New session 4 of user core. May 8 00:12:36.647872 systemd[1]: Started session-4.scope - Session 4 of User core. May 8 00:12:36.768079 sshd[2189]: Connection closed by 139.178.68.195 port 49168 May 8 00:12:36.768889 sshd-session[2187]: pam_unix(sshd:session): session closed for user core May 8 00:12:36.771570 systemd[1]: sshd@3-172.31.16.158:22-139.178.68.195:49168.service: Deactivated successfully. May 8 00:12:36.773935 systemd-logind[1899]: Session 4 logged out. Waiting for processes to exit. May 8 00:12:36.774705 systemd[1]: session-4.scope: Deactivated successfully. May 8 00:12:36.779358 systemd-logind[1899]: Removed session 4. May 8 00:12:36.806290 systemd[1]: Started sshd@4-172.31.16.158:22-139.178.68.195:49176.service - OpenSSH per-connection server daemon (139.178.68.195:49176). May 8 00:12:36.966242 sshd[2195]: Accepted publickey for core from 139.178.68.195 port 49176 ssh2: RSA SHA256:KzzWn6O+Z3VZj7W5xu29TBqYrCKq78VLDb+pogeWJHY May 8 00:12:36.968039 sshd-session[2195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:12:36.972435 systemd-logind[1899]: New session 5 of user core. May 8 00:12:36.986914 systemd[1]: Started session-5.scope - Session 5 of User core. May 8 00:12:37.127656 sudo[2198]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 8 00:12:37.128042 sudo[2198]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:12:37.144382 sudo[2198]: pam_unix(sudo:session): session closed for user root May 8 00:12:37.167046 sshd[2197]: Connection closed by 139.178.68.195 port 49176 May 8 00:12:37.167808 sshd-session[2195]: pam_unix(sshd:session): session closed for user core May 8 00:12:37.172918 systemd[1]: sshd@4-172.31.16.158:22-139.178.68.195:49176.service: Deactivated successfully. May 8 00:12:37.174889 systemd[1]: session-5.scope: Deactivated successfully. May 8 00:12:37.175797 systemd-logind[1899]: Session 5 logged out. Waiting for processes to exit. May 8 00:12:37.177166 systemd-logind[1899]: Removed session 5. May 8 00:12:37.213994 systemd[1]: Started sshd@5-172.31.16.158:22-139.178.68.195:49186.service - OpenSSH per-connection server daemon (139.178.68.195:49186). May 8 00:12:37.378075 sshd[2205]: Accepted publickey for core from 139.178.68.195 port 49186 ssh2: RSA SHA256:KzzWn6O+Z3VZj7W5xu29TBqYrCKq78VLDb+pogeWJHY May 8 00:12:37.379475 sshd-session[2205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:12:37.384606 systemd-logind[1899]: New session 6 of user core. May 8 00:12:37.396913 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 8 00:12:37.493554 sudo[2209]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 8 00:12:37.493979 sudo[2209]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:12:37.498048 sudo[2209]: pam_unix(sudo:session): session closed for user root May 8 00:12:37.507741 sudo[2208]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 8 00:12:37.508157 sudo[2208]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:12:37.530643 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 8 00:12:37.571853 augenrules[2232]: No rules May 8 00:12:37.573442 systemd[1]: audit-rules.service: Deactivated successfully. May 8 00:12:37.573739 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 8 00:12:37.575292 sudo[2208]: pam_unix(sudo:session): session closed for user root May 8 00:12:37.592778 kubelet[2160]: E0508 00:12:37.592685 2160 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:12:37.594623 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:12:37.594794 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:12:37.595063 systemd[1]: kubelet.service: Consumed 1.022s CPU time, 252.3M memory peak. May 8 00:12:37.598555 sshd[2207]: Connection closed by 139.178.68.195 port 49186 May 8 00:12:37.599209 sshd-session[2205]: pam_unix(sshd:session): session closed for user core May 8 00:12:37.602493 systemd[1]: sshd@5-172.31.16.158:22-139.178.68.195:49186.service: Deactivated successfully. May 8 00:12:37.604916 systemd-logind[1899]: Session 6 logged out. Waiting for processes to exit. May 8 00:12:37.605356 systemd[1]: session-6.scope: Deactivated successfully. May 8 00:12:37.606674 systemd-logind[1899]: Removed session 6. May 8 00:12:37.638014 systemd[1]: Started sshd@6-172.31.16.158:22-139.178.68.195:49194.service - OpenSSH per-connection server daemon (139.178.68.195:49194). May 8 00:12:37.802728 sshd[2242]: Accepted publickey for core from 139.178.68.195 port 49194 ssh2: RSA SHA256:KzzWn6O+Z3VZj7W5xu29TBqYrCKq78VLDb+pogeWJHY May 8 00:12:37.816369 sshd-session[2242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:12:37.828637 systemd-logind[1899]: New session 7 of user core. May 8 00:12:37.840047 systemd[1]: Started session-7.scope - Session 7 of User core. May 8 00:12:37.936622 sudo[2245]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 8 00:12:37.936929 sudo[2245]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:12:38.557016 systemd[1]: Starting docker.service - Docker Application Container Engine... May 8 00:12:38.558973 (dockerd)[2261]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 8 00:12:39.225346 dockerd[2261]: time="2025-05-08T00:12:39.225290640Z" level=info msg="Starting up" May 8 00:12:40.440392 systemd-resolved[1830]: Clock change detected. Flushing caches. 
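Once dockerd finishes starting (its "API listen on /run/docker.sock" message appears just below), clients reach the Engine API over that unix socket rather than over TCP. A small standard-library sketch of such a request, assuming the default /run/docker.sock path and the documented /_ping health endpoint:

// docker_ping.go: query the Docker Engine API over its unix socket.
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	// Route all HTTP requests over the daemon's unix socket instead of TCP.
	tr := &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", "/run/docker.sock")
		},
	}
	client := &http.Client{Transport: tr}

	// The host in the URL is ignored once the dialer pins the unix socket.
	resp, err := client.Get("http://docker/_ping")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET /_ping -> %s (%q)\n", resp.Status, body)
}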
May 8 00:12:40.519077 systemd[1]: var-lib-docker-metacopy\x2dcheck2024986534-merged.mount: Deactivated successfully. May 8 00:12:40.535396 dockerd[2261]: time="2025-05-08T00:12:40.535269586Z" level=info msg="Loading containers: start." May 8 00:12:40.726901 kernel: Initializing XFRM netlink socket May 8 00:12:40.767600 (udev-worker)[2286]: Network interface NamePolicy= disabled on kernel command line. May 8 00:12:40.831906 systemd-networkd[1723]: docker0: Link UP May 8 00:12:40.859584 dockerd[2261]: time="2025-05-08T00:12:40.859543640Z" level=info msg="Loading containers: done." May 8 00:12:40.878272 dockerd[2261]: time="2025-05-08T00:12:40.878218811Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 8 00:12:40.878440 dockerd[2261]: time="2025-05-08T00:12:40.878321122Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 May 8 00:12:40.878440 dockerd[2261]: time="2025-05-08T00:12:40.878425392Z" level=info msg="Daemon has completed initialization" May 8 00:12:40.910092 dockerd[2261]: time="2025-05-08T00:12:40.910002103Z" level=info msg="API listen on /run/docker.sock" May 8 00:12:40.910897 systemd[1]: Started docker.service - Docker Application Container Engine. May 8 00:12:42.798389 containerd[1915]: time="2025-05-08T00:12:42.798349584Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 8 00:12:43.410440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3722542550.mount: Deactivated successfully. May 8 00:12:44.552500 containerd[1915]: time="2025-05-08T00:12:44.552445142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:12:44.553525 containerd[1915]: time="2025-05-08T00:12:44.553482274Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682879" May 8 00:12:44.555131 containerd[1915]: time="2025-05-08T00:12:44.555084105Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:12:44.558439 containerd[1915]: time="2025-05-08T00:12:44.558142285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:12:44.559695 containerd[1915]: time="2025-05-08T00:12:44.558980132Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 1.76059363s" May 8 00:12:44.559695 containerd[1915]: time="2025-05-08T00:12:44.559009666Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" May 8 00:12:44.560012 containerd[1915]: time="2025-05-08T00:12:44.559984927Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 8 00:12:46.033817 containerd[1915]: 
time="2025-05-08T00:12:46.033752963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:12:46.034871 containerd[1915]: time="2025-05-08T00:12:46.034826886Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779589" May 8 00:12:46.036311 containerd[1915]: time="2025-05-08T00:12:46.036257774Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:12:46.039027 containerd[1915]: time="2025-05-08T00:12:46.038994201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:12:46.040403 containerd[1915]: time="2025-05-08T00:12:46.040207772Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 1.480187658s" May 8 00:12:46.040403 containerd[1915]: time="2025-05-08T00:12:46.040255590Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" May 8 00:12:46.040931 containerd[1915]: time="2025-05-08T00:12:46.040704955Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 8 00:12:47.583771 containerd[1915]: time="2025-05-08T00:12:47.583706972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:12:47.584831 containerd[1915]: time="2025-05-08T00:12:47.584765075Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169938" May 8 00:12:47.585907 containerd[1915]: time="2025-05-08T00:12:47.585856958Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:12:47.588836 containerd[1915]: time="2025-05-08T00:12:47.588454949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:12:47.589789 containerd[1915]: time="2025-05-08T00:12:47.589398024Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 1.548662006s" May 8 00:12:47.589789 containerd[1915]: time="2025-05-08T00:12:47.589427300Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" May 8 00:12:47.590240 containerd[1915]: time="2025-05-08T00:12:47.590157826Z" level=info 
msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 8 00:12:48.653600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1293312111.mount: Deactivated successfully. May 8 00:12:48.656451 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 8 00:12:48.665280 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:12:48.876960 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:12:48.889284 (kubelet)[2529]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:12:48.968697 kubelet[2529]: E0508 00:12:48.968419 2529 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:12:48.973281 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:12:48.973465 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:12:48.973916 systemd[1]: kubelet.service: Consumed 178ms CPU time, 104.3M memory peak. May 8 00:12:49.293066 containerd[1915]: time="2025-05-08T00:12:49.293015481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:12:49.293877 containerd[1915]: time="2025-05-08T00:12:49.293830094Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917856" May 8 00:12:49.295152 containerd[1915]: time="2025-05-08T00:12:49.295101180Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:12:49.297139 containerd[1915]: time="2025-05-08T00:12:49.297091687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:12:49.297764 containerd[1915]: time="2025-05-08T00:12:49.297605155Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 1.707293497s" May 8 00:12:49.297764 containerd[1915]: time="2025-05-08T00:12:49.297636430Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" May 8 00:12:49.298144 containerd[1915]: time="2025-05-08T00:12:49.298090209Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 8 00:12:49.869431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount971083886.mount: Deactivated successfully. 
May 8 00:12:50.825414 containerd[1915]: time="2025-05-08T00:12:50.825359637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:12:50.826487 containerd[1915]: time="2025-05-08T00:12:50.826443918Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 8 00:12:50.827414 containerd[1915]: time="2025-05-08T00:12:50.827370584Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:12:50.830055 containerd[1915]: time="2025-05-08T00:12:50.829991085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:12:50.831339 containerd[1915]: time="2025-05-08T00:12:50.830994260Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.532882947s" May 8 00:12:50.831339 containerd[1915]: time="2025-05-08T00:12:50.831025933Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 8 00:12:50.831768 containerd[1915]: time="2025-05-08T00:12:50.831747021Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 8 00:12:51.328157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3036368633.mount: Deactivated successfully. 
May 8 00:12:51.333631 containerd[1915]: time="2025-05-08T00:12:51.333563653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:12:51.334522 containerd[1915]: time="2025-05-08T00:12:51.334479180Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 8 00:12:51.335565 containerd[1915]: time="2025-05-08T00:12:51.335514222Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:12:51.337760 containerd[1915]: time="2025-05-08T00:12:51.337717095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:12:51.338895 containerd[1915]: time="2025-05-08T00:12:51.338425886Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 506.652644ms" May 8 00:12:51.338895 containerd[1915]: time="2025-05-08T00:12:51.338454343Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 8 00:12:51.339017 containerd[1915]: time="2025-05-08T00:12:51.338965005Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 8 00:12:51.856940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount222121062.mount: Deactivated successfully. May 8 00:12:53.930416 containerd[1915]: time="2025-05-08T00:12:53.930277525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:12:53.931674 containerd[1915]: time="2025-05-08T00:12:53.931616801Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" May 8 00:12:53.932924 containerd[1915]: time="2025-05-08T00:12:53.932880393Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:12:53.935784 containerd[1915]: time="2025-05-08T00:12:53.935741866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:12:53.937612 containerd[1915]: time="2025-05-08T00:12:53.936754658Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.597767115s" May 8 00:12:53.937612 containerd[1915]: time="2025-05-08T00:12:53.936787403Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 8 00:12:56.533545 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
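
Each "Pulled image ... in ..." entry above pairs an image size in bytes with a wall-clock duration, so effective pull throughput can be read straight off the log. A small Python sketch using the figures quoted in those entries (sizes and durations copied from the log; the MB/s column is derived, not logged):

    # (image, bytes, seconds) taken from the "Pulled image ..." entries above.
    pulls = [
        ("kube-controller-manager:v1.32.4", 26_267_962, 1.480187658),
        ("kube-scheduler:v1.32.4",          20_658_329, 1.548662006),
        ("kube-proxy:v1.32.4",              30_916_875, 1.707293497),
        ("coredns:v1.11.3",                 18_562_039, 1.532882947),
        ("pause:3.10",                         320_368, 0.506652644),
        ("etcd:3.5.16-0",                   57_680_541, 2.597767115),
    ]

    for name, size_bytes, seconds in pulls:
        mb_per_s = size_bytes / seconds / 1_000_000  # decimal megabytes per second
        print(f"{name:35s} {size_bytes/1_000_000:7.1f} MB in {seconds:6.3f}s -> {mb_per_s:5.1f} MB/s")
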
May 8 00:12:56.533826 systemd[1]: kubelet.service: Consumed 178ms CPU time, 104.3M memory peak. May 8 00:12:56.542343 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:12:56.578685 systemd[1]: Reload requested from client PID 2675 ('systemctl') (unit session-7.scope)... May 8 00:12:56.578703 systemd[1]: Reloading... May 8 00:12:56.711837 zram_generator::config[2721]: No configuration found. May 8 00:12:56.858941 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:12:56.976998 systemd[1]: Reloading finished in 397 ms. May 8 00:12:57.026548 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:12:57.036459 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:12:57.038560 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:12:57.038850 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:12:57.038911 systemd[1]: kubelet.service: Consumed 120ms CPU time, 91.6M memory peak. May 8 00:12:57.040983 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:12:57.258689 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:12:57.264438 (kubelet)[2786]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:12:57.313093 kubelet[2786]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:12:57.313492 kubelet[2786]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 8 00:12:57.313492 kubelet[2786]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
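
The three kubelet deprecation warnings above all point at the same remedy: move the flag values into the KubeletConfiguration file passed via --config. A rough sketch of the mapping follows; the KubeletConfiguration field names are assumptions to be checked against the kubelet-config-file documentation linked in the warnings, and --pod-infra-container-image has no config-file equivalent because the sandbox image is slated to come from the CRI instead.

    # Hypothetical mapping from the deprecated kubelet flags in the log to
    # KubeletConfiguration fields (field names assumed; verify against the
    # kubelet-config-file docs before relying on them).
    DEPRECATED_FLAG_TO_CONFIG_FIELD = {
        "--container-runtime-endpoint": "containerRuntimeEndpoint",
        "--volume-plugin-dir": "volumePluginDir",
        # --pod-infra-container-image: no config field; removal planned for 1.35,
        # the image garbage collector will get the sandbox image from the CRI.
    }

    for flag, field in DEPRECATED_FLAG_TO_CONFIG_FIELD.items():
        print(f"{flag} -> KubeletConfiguration.{field}")
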
May 8 00:12:57.316646 kubelet[2786]: I0508 00:12:57.316566 2786 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:12:57.698782 kubelet[2786]: I0508 00:12:57.698370 2786 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 8 00:12:57.698782 kubelet[2786]: I0508 00:12:57.698405 2786 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:12:57.699008 kubelet[2786]: I0508 00:12:57.698988 2786 server.go:954] "Client rotation is on, will bootstrap in background" May 8 00:12:57.741581 kubelet[2786]: I0508 00:12:57.740716 2786 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:12:57.741581 kubelet[2786]: E0508 00:12:57.741503 2786 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.16.158:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.158:6443: connect: connection refused" logger="UnhandledError" May 8 00:12:57.752391 kubelet[2786]: E0508 00:12:57.752336 2786 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 8 00:12:57.752391 kubelet[2786]: I0508 00:12:57.752392 2786 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 8 00:12:57.757207 kubelet[2786]: I0508 00:12:57.757169 2786 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:12:57.762621 kubelet[2786]: I0508 00:12:57.762552 2786 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:12:57.762903 kubelet[2786]: I0508 00:12:57.762621 2786 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-158","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 8 00:12:57.764980 kubelet[2786]: I0508 00:12:57.764953 2786 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:12:57.764980 kubelet[2786]: I0508 00:12:57.764983 2786 container_manager_linux.go:304] "Creating device plugin manager" May 8 00:12:57.765177 kubelet[2786]: I0508 00:12:57.765154 2786 state_mem.go:36] "Initialized new in-memory state store" May 8 00:12:57.771127 kubelet[2786]: I0508 00:12:57.771090 2786 kubelet.go:446] "Attempting to sync node with API server" May 8 00:12:57.771127 kubelet[2786]: I0508 00:12:57.771131 2786 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:12:57.771127 kubelet[2786]: I0508 00:12:57.771158 2786 kubelet.go:352] "Adding apiserver pod source" May 8 00:12:57.771585 kubelet[2786]: I0508 00:12:57.771172 2786 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:12:57.786215 kubelet[2786]: W0508 00:12:57.786130 2786 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.16.158:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.16.158:6443: connect: connection refused May 8 00:12:57.786343 kubelet[2786]: E0508 00:12:57.786242 2786 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.16.158:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.158:6443: connect: connection refused" logger="UnhandledError" May 8 00:12:57.786420 kubelet[2786]: W0508 00:12:57.786348 
2786 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.16.158:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-158&limit=500&resourceVersion=0": dial tcp 172.31.16.158:6443: connect: connection refused May 8 00:12:57.786420 kubelet[2786]: E0508 00:12:57.786393 2786 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.16.158:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-158&limit=500&resourceVersion=0\": dial tcp 172.31.16.158:6443: connect: connection refused" logger="UnhandledError" May 8 00:12:57.786537 kubelet[2786]: I0508 00:12:57.786513 2786 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 8 00:12:57.791312 kubelet[2786]: I0508 00:12:57.791266 2786 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:12:57.792166 kubelet[2786]: W0508 00:12:57.792137 2786 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 8 00:12:57.793055 kubelet[2786]: I0508 00:12:57.792828 2786 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 8 00:12:57.793055 kubelet[2786]: I0508 00:12:57.792870 2786 server.go:1287] "Started kubelet" May 8 00:12:57.793055 kubelet[2786]: I0508 00:12:57.793033 2786 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:12:57.794597 kubelet[2786]: I0508 00:12:57.794091 2786 server.go:490] "Adding debug handlers to kubelet server" May 8 00:12:57.798858 kubelet[2786]: I0508 00:12:57.797950 2786 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:12:57.798977 kubelet[2786]: I0508 00:12:57.798923 2786 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:12:57.799170 kubelet[2786]: I0508 00:12:57.799145 2786 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:12:57.804219 kubelet[2786]: E0508 00:12:57.800663 2786 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.158:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.158:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-158.183d64e7ee98f2ce default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-158,UID:ip-172-31-16-158,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-158,},FirstTimestamp:2025-05-08 00:12:57.792844494 +0000 UTC m=+0.524988764,LastTimestamp:2025-05-08 00:12:57.792844494 +0000 UTC m=+0.524988764,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-158,}" May 8 00:12:57.804425 kubelet[2786]: I0508 00:12:57.804258 2786 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 8 00:12:57.811914 kubelet[2786]: E0508 00:12:57.810549 2786 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-16-158\" not found" May 8 00:12:57.811914 kubelet[2786]: I0508 00:12:57.810601 2786 volume_manager.go:297] 
"Starting Kubelet Volume Manager" May 8 00:12:57.813206 kubelet[2786]: I0508 00:12:57.812704 2786 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:12:57.813206 kubelet[2786]: I0508 00:12:57.812776 2786 reconciler.go:26] "Reconciler: start to sync state" May 8 00:12:57.814525 kubelet[2786]: W0508 00:12:57.814455 2786 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.16.158:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.158:6443: connect: connection refused May 8 00:12:57.814619 kubelet[2786]: E0508 00:12:57.814544 2786 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.16.158:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.158:6443: connect: connection refused" logger="UnhandledError" May 8 00:12:57.816037 kubelet[2786]: E0508 00:12:57.814833 2786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-158?timeout=10s\": dial tcp 172.31.16.158:6443: connect: connection refused" interval="200ms" May 8 00:12:57.817107 kubelet[2786]: I0508 00:12:57.817081 2786 factory.go:221] Registration of the systemd container factory successfully May 8 00:12:57.817200 kubelet[2786]: I0508 00:12:57.817184 2786 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:12:57.823832 kubelet[2786]: I0508 00:12:57.822756 2786 factory.go:221] Registration of the containerd container factory successfully May 8 00:12:57.846419 kubelet[2786]: I0508 00:12:57.846365 2786 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:12:57.848198 kubelet[2786]: I0508 00:12:57.848170 2786 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 00:12:57.848198 kubelet[2786]: I0508 00:12:57.848200 2786 status_manager.go:227] "Starting to sync pod status with apiserver" May 8 00:12:57.848340 kubelet[2786]: I0508 00:12:57.848223 2786 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 8 00:12:57.848340 kubelet[2786]: I0508 00:12:57.848233 2786 kubelet.go:2388] "Starting kubelet main sync loop" May 8 00:12:57.848340 kubelet[2786]: E0508 00:12:57.848279 2786 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:12:57.853175 kubelet[2786]: W0508 00:12:57.853144 2786 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.16.158:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.158:6443: connect: connection refused May 8 00:12:57.853357 kubelet[2786]: E0508 00:12:57.853332 2786 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.16.158:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.158:6443: connect: connection refused" logger="UnhandledError" May 8 00:12:57.853962 kubelet[2786]: I0508 00:12:57.853944 2786 cpu_manager.go:221] "Starting CPU manager" policy="none" May 8 00:12:57.854082 kubelet[2786]: I0508 00:12:57.854071 2786 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 8 00:12:57.854155 kubelet[2786]: I0508 00:12:57.854146 2786 state_mem.go:36] "Initialized new in-memory state store" May 8 00:12:57.856919 kubelet[2786]: I0508 00:12:57.856905 2786 policy_none.go:49] "None policy: Start" May 8 00:12:57.857005 kubelet[2786]: I0508 00:12:57.856998 2786 memory_manager.go:186] "Starting memorymanager" policy="None" May 8 00:12:57.857053 kubelet[2786]: I0508 00:12:57.857047 2786 state_mem.go:35] "Initializing new in-memory state store" May 8 00:12:57.862994 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 8 00:12:57.880155 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 8 00:12:57.885140 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 8 00:12:57.894450 kubelet[2786]: I0508 00:12:57.893780 2786 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:12:57.894450 kubelet[2786]: I0508 00:12:57.894109 2786 eviction_manager.go:189] "Eviction manager: starting control loop" May 8 00:12:57.894450 kubelet[2786]: I0508 00:12:57.894123 2786 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:12:57.894450 kubelet[2786]: I0508 00:12:57.894375 2786 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:12:57.895865 kubelet[2786]: E0508 00:12:57.895847 2786 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 8 00:12:57.896145 kubelet[2786]: E0508 00:12:57.896134 2786 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-16-158\" not found" May 8 00:12:57.959072 systemd[1]: Created slice kubepods-burstable-pod8c34d67027fd37ecbcf112474496d19a.slice - libcontainer container kubepods-burstable-pod8c34d67027fd37ecbcf112474496d19a.slice. 
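
The slices systemd creates here mirror the kubelet's cgroup layout under the systemd driver: kubepods.slice at the top, one slice per QoS class (burstable, besteffort), then one slice per pod keyed by its UID, e.g. kubepods-burstable-pod8c34d67027fd37ecbcf112474496d19a.slice. A short sketch of how such a name can be assembled; the dash-to-underscore escaping is an assumption (the UIDs in this log contain no dashes, so the result matches the log either way):

    def pod_slice_name(qos_class: str, pod_uid: str) -> str:
        """Build a systemd slice name of the shape created in the log above.

        Assumption: dashes in the pod UID are escaped to underscores for systemd.
        """
        return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

    print(pod_slice_name("burstable", "8c34d67027fd37ecbcf112474496d19a"))
    # kubepods-burstable-pod8c34d67027fd37ecbcf112474496d19a.slice
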
May 8 00:12:57.966267 kubelet[2786]: E0508 00:12:57.966007 2786 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-158\" not found" node="ip-172-31-16-158" May 8 00:12:57.970020 systemd[1]: Created slice kubepods-burstable-pod5a5f7edb90150ece389355bb097a0e5b.slice - libcontainer container kubepods-burstable-pod5a5f7edb90150ece389355bb097a0e5b.slice. May 8 00:12:57.983530 kubelet[2786]: E0508 00:12:57.983323 2786 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-158\" not found" node="ip-172-31-16-158" May 8 00:12:57.986214 systemd[1]: Created slice kubepods-burstable-pod649bdd8d92f0911de8a87c3838928e1c.slice - libcontainer container kubepods-burstable-pod649bdd8d92f0911de8a87c3838928e1c.slice. May 8 00:12:57.988040 kubelet[2786]: E0508 00:12:57.988014 2786 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-158\" not found" node="ip-172-31-16-158" May 8 00:12:57.996829 kubelet[2786]: I0508 00:12:57.996688 2786 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-16-158" May 8 00:12:57.997038 kubelet[2786]: E0508 00:12:57.997011 2786 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.16.158:6443/api/v1/nodes\": dial tcp 172.31.16.158:6443: connect: connection refused" node="ip-172-31-16-158" May 8 00:12:58.014069 kubelet[2786]: I0508 00:12:58.013795 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c34d67027fd37ecbcf112474496d19a-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-158\" (UID: \"8c34d67027fd37ecbcf112474496d19a\") " pod="kube-system/kube-apiserver-ip-172-31-16-158" May 8 00:12:58.014069 kubelet[2786]: I0508 00:12:58.013948 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5a5f7edb90150ece389355bb097a0e5b-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-158\" (UID: \"5a5f7edb90150ece389355bb097a0e5b\") " pod="kube-system/kube-controller-manager-ip-172-31-16-158" May 8 00:12:58.014069 kubelet[2786]: I0508 00:12:58.013978 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5a5f7edb90150ece389355bb097a0e5b-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-158\" (UID: \"5a5f7edb90150ece389355bb097a0e5b\") " pod="kube-system/kube-controller-manager-ip-172-31-16-158" May 8 00:12:58.014069 kubelet[2786]: I0508 00:12:58.014001 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5a5f7edb90150ece389355bb097a0e5b-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-158\" (UID: \"5a5f7edb90150ece389355bb097a0e5b\") " pod="kube-system/kube-controller-manager-ip-172-31-16-158" May 8 00:12:58.014069 kubelet[2786]: I0508 00:12:58.014017 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5a5f7edb90150ece389355bb097a0e5b-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-158\" (UID: \"5a5f7edb90150ece389355bb097a0e5b\") " 
pod="kube-system/kube-controller-manager-ip-172-31-16-158" May 8 00:12:58.014315 kubelet[2786]: I0508 00:12:58.014043 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5a5f7edb90150ece389355bb097a0e5b-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-158\" (UID: \"5a5f7edb90150ece389355bb097a0e5b\") " pod="kube-system/kube-controller-manager-ip-172-31-16-158" May 8 00:12:58.014315 kubelet[2786]: I0508 00:12:58.014106 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c34d67027fd37ecbcf112474496d19a-ca-certs\") pod \"kube-apiserver-ip-172-31-16-158\" (UID: \"8c34d67027fd37ecbcf112474496d19a\") " pod="kube-system/kube-apiserver-ip-172-31-16-158" May 8 00:12:58.014315 kubelet[2786]: I0508 00:12:58.014144 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/649bdd8d92f0911de8a87c3838928e1c-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-158\" (UID: \"649bdd8d92f0911de8a87c3838928e1c\") " pod="kube-system/kube-scheduler-ip-172-31-16-158" May 8 00:12:58.014315 kubelet[2786]: I0508 00:12:58.014165 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c34d67027fd37ecbcf112474496d19a-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-158\" (UID: \"8c34d67027fd37ecbcf112474496d19a\") " pod="kube-system/kube-apiserver-ip-172-31-16-158" May 8 00:12:58.018252 kubelet[2786]: E0508 00:12:58.018210 2786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-158?timeout=10s\": dial tcp 172.31.16.158:6443: connect: connection refused" interval="400ms" May 8 00:12:58.199090 kubelet[2786]: I0508 00:12:58.199028 2786 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-16-158" May 8 00:12:58.199404 kubelet[2786]: E0508 00:12:58.199348 2786 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.16.158:6443/api/v1/nodes\": dial tcp 172.31.16.158:6443: connect: connection refused" node="ip-172-31-16-158" May 8 00:12:58.267146 containerd[1915]: time="2025-05-08T00:12:58.267096422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-158,Uid:8c34d67027fd37ecbcf112474496d19a,Namespace:kube-system,Attempt:0,}" May 8 00:12:58.284887 containerd[1915]: time="2025-05-08T00:12:58.284845673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-158,Uid:5a5f7edb90150ece389355bb097a0e5b,Namespace:kube-system,Attempt:0,}" May 8 00:12:58.288861 containerd[1915]: time="2025-05-08T00:12:58.288804289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-158,Uid:649bdd8d92f0911de8a87c3838928e1c,Namespace:kube-system,Attempt:0,}" May 8 00:12:58.419536 kubelet[2786]: E0508 00:12:58.419481 2786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-158?timeout=10s\": dial tcp 172.31.16.158:6443: connect: connection refused" interval="800ms" May 8 00:12:58.601383 kubelet[2786]: I0508 
00:12:58.601274 2786 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-16-158" May 8 00:12:58.601682 kubelet[2786]: E0508 00:12:58.601653 2786 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.16.158:6443/api/v1/nodes\": dial tcp 172.31.16.158:6443: connect: connection refused" node="ip-172-31-16-158" May 8 00:12:58.750449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount764242702.mount: Deactivated successfully. May 8 00:12:58.755993 containerd[1915]: time="2025-05-08T00:12:58.755944266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:12:58.756872 containerd[1915]: time="2025-05-08T00:12:58.756833804Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 8 00:12:58.760608 containerd[1915]: time="2025-05-08T00:12:58.760563337Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:12:58.763307 containerd[1915]: time="2025-05-08T00:12:58.763255658Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:12:58.763894 containerd[1915]: time="2025-05-08T00:12:58.763844267Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:12:58.765547 containerd[1915]: time="2025-05-08T00:12:58.765492887Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:12:58.767083 containerd[1915]: time="2025-05-08T00:12:58.767043511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:12:58.768057 containerd[1915]: time="2025-05-08T00:12:58.767834917Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 497.333503ms" May 8 00:12:58.772004 containerd[1915]: time="2025-05-08T00:12:58.771962509Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 487.029186ms" May 8 00:12:58.774089 containerd[1915]: time="2025-05-08T00:12:58.774039748Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 485.11809ms" May 8 00:12:58.774223 
containerd[1915]: time="2025-05-08T00:12:58.767843812Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:12:58.788990 kubelet[2786]: W0508 00:12:58.788871 2786 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.16.158:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-158&limit=500&resourceVersion=0": dial tcp 172.31.16.158:6443: connect: connection refused May 8 00:12:58.788990 kubelet[2786]: E0508 00:12:58.788937 2786 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.16.158:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-158&limit=500&resourceVersion=0\": dial tcp 172.31.16.158:6443: connect: connection refused" logger="UnhandledError" May 8 00:12:59.012562 containerd[1915]: time="2025-05-08T00:12:59.009785452Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:12:59.012562 containerd[1915]: time="2025-05-08T00:12:59.012334474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:12:59.012562 containerd[1915]: time="2025-05-08T00:12:59.012369040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:12:59.012562 containerd[1915]: time="2025-05-08T00:12:59.012496292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:12:59.016266 containerd[1915]: time="2025-05-08T00:12:59.015980616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:12:59.016266 containerd[1915]: time="2025-05-08T00:12:59.016068664Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:12:59.016266 containerd[1915]: time="2025-05-08T00:12:59.016087461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:12:59.016266 containerd[1915]: time="2025-05-08T00:12:59.016201020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:12:59.020998 containerd[1915]: time="2025-05-08T00:12:59.020478274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:12:59.020998 containerd[1915]: time="2025-05-08T00:12:59.020538112Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:12:59.020998 containerd[1915]: time="2025-05-08T00:12:59.020555165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:12:59.020998 containerd[1915]: time="2025-05-08T00:12:59.020642271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:12:59.049101 systemd[1]: Started cri-containerd-3c83a83ccb5ed07460f38cd7defde497a1c626a73da9b640346752376f863363.scope - libcontainer container 3c83a83ccb5ed07460f38cd7defde497a1c626a73da9b640346752376f863363. May 8 00:12:59.075037 systemd[1]: Started cri-containerd-b7c0e4f64a31e287789b839b4dcc771f9052c46d108b83f148ebf50fb0ff875e.scope - libcontainer container b7c0e4f64a31e287789b839b4dcc771f9052c46d108b83f148ebf50fb0ff875e. May 8 00:12:59.080356 systemd[1]: Started cri-containerd-8fe88a58f00aaca9d12b65cb07730efe96156fe91377347780b73b1574ad80e7.scope - libcontainer container 8fe88a58f00aaca9d12b65cb07730efe96156fe91377347780b73b1574ad80e7. May 8 00:12:59.155452 containerd[1915]: time="2025-05-08T00:12:59.154917436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-158,Uid:5a5f7edb90150ece389355bb097a0e5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c83a83ccb5ed07460f38cd7defde497a1c626a73da9b640346752376f863363\"" May 8 00:12:59.169906 containerd[1915]: time="2025-05-08T00:12:59.169718726Z" level=info msg="CreateContainer within sandbox \"3c83a83ccb5ed07460f38cd7defde497a1c626a73da9b640346752376f863363\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 8 00:12:59.170532 containerd[1915]: time="2025-05-08T00:12:59.170267822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-158,Uid:8c34d67027fd37ecbcf112474496d19a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7c0e4f64a31e287789b839b4dcc771f9052c46d108b83f148ebf50fb0ff875e\"" May 8 00:12:59.176715 containerd[1915]: time="2025-05-08T00:12:59.176580584Z" level=info msg="CreateContainer within sandbox \"b7c0e4f64a31e287789b839b4dcc771f9052c46d108b83f148ebf50fb0ff875e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 8 00:12:59.194192 containerd[1915]: time="2025-05-08T00:12:59.194152859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-158,Uid:649bdd8d92f0911de8a87c3838928e1c,Namespace:kube-system,Attempt:0,} returns sandbox id \"8fe88a58f00aaca9d12b65cb07730efe96156fe91377347780b73b1574ad80e7\"" May 8 00:12:59.200648 containerd[1915]: time="2025-05-08T00:12:59.200479916Z" level=info msg="CreateContainer within sandbox \"8fe88a58f00aaca9d12b65cb07730efe96156fe91377347780b73b1574ad80e7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 8 00:12:59.220757 kubelet[2786]: E0508 00:12:59.220723 2786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-158?timeout=10s\": dial tcp 172.31.16.158:6443: connect: connection refused" interval="1.6s" May 8 00:12:59.230612 containerd[1915]: time="2025-05-08T00:12:59.230563750Z" level=info msg="CreateContainer within sandbox \"b7c0e4f64a31e287789b839b4dcc771f9052c46d108b83f148ebf50fb0ff875e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9a363fae671d71b38c73b9c3abe4e083c23a4f2837c7c127cf6a1aa541c2815b\"" May 8 00:12:59.231334 containerd[1915]: time="2025-05-08T00:12:59.231220069Z" level=info msg="StartContainer for \"9a363fae671d71b38c73b9c3abe4e083c23a4f2837c7c127cf6a1aa541c2815b\"" May 8 00:12:59.239713 containerd[1915]: time="2025-05-08T00:12:59.239515584Z" level=info msg="CreateContainer within sandbox \"8fe88a58f00aaca9d12b65cb07730efe96156fe91377347780b73b1574ad80e7\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dd85b75b590de0210dbd0bd99416c4961ca7216d70015234708ac41cdb32b889\"" May 8 00:12:59.242665 containerd[1915]: time="2025-05-08T00:12:59.242623942Z" level=info msg="CreateContainer within sandbox \"3c83a83ccb5ed07460f38cd7defde497a1c626a73da9b640346752376f863363\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ec0813b5f02cea9626e1074ba3c09f8a10ea7cb8488c6ffa13562280ea296be7\"" May 8 00:12:59.243558 containerd[1915]: time="2025-05-08T00:12:59.243530681Z" level=info msg="StartContainer for \"dd85b75b590de0210dbd0bd99416c4961ca7216d70015234708ac41cdb32b889\"" May 8 00:12:59.244333 containerd[1915]: time="2025-05-08T00:12:59.244308171Z" level=info msg="StartContainer for \"ec0813b5f02cea9626e1074ba3c09f8a10ea7cb8488c6ffa13562280ea296be7\"" May 8 00:12:59.277888 systemd[1]: Started cri-containerd-9a363fae671d71b38c73b9c3abe4e083c23a4f2837c7c127cf6a1aa541c2815b.scope - libcontainer container 9a363fae671d71b38c73b9c3abe4e083c23a4f2837c7c127cf6a1aa541c2815b. May 8 00:12:59.300056 systemd[1]: Started cri-containerd-ec0813b5f02cea9626e1074ba3c09f8a10ea7cb8488c6ffa13562280ea296be7.scope - libcontainer container ec0813b5f02cea9626e1074ba3c09f8a10ea7cb8488c6ffa13562280ea296be7. May 8 00:12:59.310243 systemd[1]: Started cri-containerd-dd85b75b590de0210dbd0bd99416c4961ca7216d70015234708ac41cdb32b889.scope - libcontainer container dd85b75b590de0210dbd0bd99416c4961ca7216d70015234708ac41cdb32b889. May 8 00:12:59.348554 kubelet[2786]: W0508 00:12:59.348387 2786 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.16.158:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.158:6443: connect: connection refused May 8 00:12:59.348554 kubelet[2786]: E0508 00:12:59.348491 2786 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.16.158:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.158:6443: connect: connection refused" logger="UnhandledError" May 8 00:12:59.373256 kubelet[2786]: W0508 00:12:59.373128 2786 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.16.158:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.16.158:6443: connect: connection refused May 8 00:12:59.373460 kubelet[2786]: E0508 00:12:59.373229 2786 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.16.158:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.158:6443: connect: connection refused" logger="UnhandledError" May 8 00:12:59.401563 containerd[1915]: time="2025-05-08T00:12:59.401039704Z" level=info msg="StartContainer for \"9a363fae671d71b38c73b9c3abe4e083c23a4f2837c7c127cf6a1aa541c2815b\" returns successfully" May 8 00:12:59.408439 kubelet[2786]: I0508 00:12:59.408311 2786 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-16-158" May 8 00:12:59.410206 kubelet[2786]: E0508 00:12:59.409290 2786 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.16.158:6443/api/v1/nodes\": dial tcp 172.31.16.158:6443: connect: connection refused" 
node="ip-172-31-16-158" May 8 00:12:59.412100 containerd[1915]: time="2025-05-08T00:12:59.412061414Z" level=info msg="StartContainer for \"ec0813b5f02cea9626e1074ba3c09f8a10ea7cb8488c6ffa13562280ea296be7\" returns successfully" May 8 00:12:59.412322 containerd[1915]: time="2025-05-08T00:12:59.412293557Z" level=info msg="StartContainer for \"dd85b75b590de0210dbd0bd99416c4961ca7216d70015234708ac41cdb32b889\" returns successfully" May 8 00:12:59.449832 kubelet[2786]: W0508 00:12:59.449699 2786 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.16.158:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.158:6443: connect: connection refused May 8 00:12:59.449832 kubelet[2786]: E0508 00:12:59.449768 2786 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.16.158:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.158:6443: connect: connection refused" logger="UnhandledError" May 8 00:12:59.866428 kubelet[2786]: E0508 00:12:59.866353 2786 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-158\" not found" node="ip-172-31-16-158" May 8 00:12:59.868766 kubelet[2786]: E0508 00:12:59.868698 2786 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-158\" not found" node="ip-172-31-16-158" May 8 00:12:59.869339 kubelet[2786]: E0508 00:12:59.869190 2786 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-158\" not found" node="ip-172-31-16-158" May 8 00:12:59.927983 kubelet[2786]: E0508 00:12:59.927944 2786 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.16.158:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.158:6443: connect: connection refused" logger="UnhandledError" May 8 00:13:00.822418 kubelet[2786]: E0508 00:13:00.822368 2786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-158?timeout=10s\": dial tcp 172.31.16.158:6443: connect: connection refused" interval="3.2s" May 8 00:13:00.872926 kubelet[2786]: E0508 00:13:00.872894 2786 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-158\" not found" node="ip-172-31-16-158" May 8 00:13:00.874492 kubelet[2786]: E0508 00:13:00.873789 2786 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-158\" not found" node="ip-172-31-16-158" May 8 00:13:01.012690 kubelet[2786]: I0508 00:13:01.012663 2786 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-16-158" May 8 00:13:01.013136 kubelet[2786]: E0508 00:13:01.013108 2786 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.16.158:6443/api/v1/nodes\": dial tcp 172.31.16.158:6443: connect: connection refused" node="ip-172-31-16-158" May 8 00:13:02.987602 kubelet[2786]: E0508 00:13:02.987325 2786 
kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-158\" not found" node="ip-172-31-16-158" May 8 00:13:04.192299 kubelet[2786]: E0508 00:13:04.192245 2786 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-16-158\" not found" node="ip-172-31-16-158" May 8 00:13:04.215835 kubelet[2786]: I0508 00:13:04.215478 2786 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-16-158" May 8 00:13:04.219605 systemd[1]: systemd-hostnamed.service: Deactivated successfully. May 8 00:13:04.245992 kubelet[2786]: I0508 00:13:04.245954 2786 kubelet_node_status.go:79] "Successfully registered node" node="ip-172-31-16-158" May 8 00:13:04.246123 kubelet[2786]: E0508 00:13:04.246010 2786 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ip-172-31-16-158\": node \"ip-172-31-16-158\" not found" May 8 00:13:04.314835 kubelet[2786]: I0508 00:13:04.312120 2786 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-158" May 8 00:13:04.321270 kubelet[2786]: E0508 00:13:04.321241 2786 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-16-158\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-16-158" May 8 00:13:04.321440 kubelet[2786]: I0508 00:13:04.321315 2786 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-158" May 8 00:13:04.325834 kubelet[2786]: E0508 00:13:04.324160 2786 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-16-158\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-16-158" May 8 00:13:04.325834 kubelet[2786]: I0508 00:13:04.324207 2786 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-158" May 8 00:13:04.327103 kubelet[2786]: E0508 00:13:04.327030 2786 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-158\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-16-158" May 8 00:13:04.786305 kubelet[2786]: I0508 00:13:04.786260 2786 apiserver.go:52] "Watching apiserver" May 8 00:13:04.812942 kubelet[2786]: I0508 00:13:04.812882 2786 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:13:06.226797 systemd[1]: Reload requested from client PID 3064 ('systemctl') (unit session-7.scope)... May 8 00:13:06.226824 systemd[1]: Reloading... May 8 00:13:06.333913 zram_generator::config[3109]: No configuration found. May 8 00:13:06.482334 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:13:06.624511 systemd[1]: Reloading finished in 397 ms. May 8 00:13:06.656142 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:13:06.675293 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:13:06.675525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:13:06.675604 systemd[1]: kubelet.service: Consumed 950ms CPU time, 120M memory peak. 
May 8 00:13:06.681214 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:13:06.958697 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:13:06.995328 (kubelet)[3169]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:13:07.066662 kubelet[3169]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:13:07.066662 kubelet[3169]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 8 00:13:07.066662 kubelet[3169]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:13:07.067116 kubelet[3169]: I0508 00:13:07.066719 3169 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:13:07.074052 kubelet[3169]: I0508 00:13:07.074006 3169 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 8 00:13:07.074052 kubelet[3169]: I0508 00:13:07.074036 3169 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:13:07.074323 kubelet[3169]: I0508 00:13:07.074289 3169 server.go:954] "Client rotation is on, will bootstrap in background" May 8 00:13:07.077170 kubelet[3169]: I0508 00:13:07.077125 3169 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 8 00:13:07.090588 kubelet[3169]: I0508 00:13:07.090198 3169 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:13:07.094822 kubelet[3169]: E0508 00:13:07.094765 3169 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 8 00:13:07.094822 kubelet[3169]: I0508 00:13:07.094802 3169 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 8 00:13:07.099834 kubelet[3169]: I0508 00:13:07.099298 3169 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:13:07.099834 kubelet[3169]: I0508 00:13:07.099551 3169 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:13:07.100017 kubelet[3169]: I0508 00:13:07.099594 3169 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-158","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 8 00:13:07.100017 kubelet[3169]: I0508 00:13:07.100000 3169 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:13:07.100182 kubelet[3169]: I0508 00:13:07.100021 3169 container_manager_linux.go:304] "Creating device plugin manager" May 8 00:13:07.100182 kubelet[3169]: I0508 00:13:07.100072 3169 state_mem.go:36] "Initialized new in-memory state store" May 8 00:13:07.100281 kubelet[3169]: I0508 00:13:07.100265 3169 kubelet.go:446] "Attempting to sync node with API server" May 8 00:13:07.100325 kubelet[3169]: I0508 00:13:07.100290 3169 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:13:07.100325 kubelet[3169]: I0508 00:13:07.100315 3169 kubelet.go:352] "Adding apiserver pod source" May 8 00:13:07.100404 kubelet[3169]: I0508 00:13:07.100329 3169 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:13:07.106993 kubelet[3169]: I0508 00:13:07.106963 3169 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 8 00:13:07.107496 kubelet[3169]: I0508 00:13:07.107476 3169 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:13:07.108067 kubelet[3169]: I0508 00:13:07.108048 3169 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 8 00:13:07.108148 kubelet[3169]: I0508 00:13:07.108090 3169 server.go:1287] "Started kubelet" May 8 00:13:07.113831 kubelet[3169]: I0508 00:13:07.113303 3169 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:13:07.127694 kubelet[3169]: I0508 00:13:07.127652 3169 server.go:169] "Starting to listen" 
address="0.0.0.0" port=10250 May 8 00:13:07.131556 kubelet[3169]: I0508 00:13:07.131109 3169 server.go:490] "Adding debug handlers to kubelet server" May 8 00:13:07.139648 kubelet[3169]: I0508 00:13:07.138799 3169 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:13:07.143122 kubelet[3169]: I0508 00:13:07.140646 3169 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:13:07.143122 kubelet[3169]: I0508 00:13:07.140743 3169 volume_manager.go:297] "Starting Kubelet Volume Manager" May 8 00:13:07.143122 kubelet[3169]: E0508 00:13:07.140791 3169 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:13:07.143122 kubelet[3169]: I0508 00:13:07.141151 3169 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 8 00:13:07.144692 kubelet[3169]: I0508 00:13:07.144669 3169 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:13:07.144840 kubelet[3169]: I0508 00:13:07.144825 3169 reconciler.go:26] "Reconciler: start to sync state" May 8 00:13:07.147904 kubelet[3169]: I0508 00:13:07.147866 3169 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:13:07.151863 kubelet[3169]: I0508 00:13:07.151836 3169 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 00:13:07.151969 kubelet[3169]: I0508 00:13:07.151871 3169 status_manager.go:227] "Starting to sync pod status with apiserver" May 8 00:13:07.151969 kubelet[3169]: I0508 00:13:07.151892 3169 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 8 00:13:07.151969 kubelet[3169]: I0508 00:13:07.151901 3169 kubelet.go:2388] "Starting kubelet main sync loop" May 8 00:13:07.151969 kubelet[3169]: E0508 00:13:07.151948 3169 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:13:07.154530 kubelet[3169]: I0508 00:13:07.154512 3169 factory.go:221] Registration of the containerd container factory successfully May 8 00:13:07.154665 kubelet[3169]: I0508 00:13:07.154653 3169 factory.go:221] Registration of the systemd container factory successfully May 8 00:13:07.154955 kubelet[3169]: I0508 00:13:07.154933 3169 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:13:07.201218 kubelet[3169]: I0508 00:13:07.201189 3169 cpu_manager.go:221] "Starting CPU manager" policy="none" May 8 00:13:07.201218 kubelet[3169]: I0508 00:13:07.201207 3169 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 8 00:13:07.201218 kubelet[3169]: I0508 00:13:07.201230 3169 state_mem.go:36] "Initialized new in-memory state store" May 8 00:13:07.201459 kubelet[3169]: I0508 00:13:07.201432 3169 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 8 00:13:07.201505 kubelet[3169]: I0508 00:13:07.201446 3169 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 8 00:13:07.201505 kubelet[3169]: I0508 00:13:07.201474 3169 policy_none.go:49] "None policy: Start" May 8 00:13:07.201505 kubelet[3169]: I0508 00:13:07.201488 3169 memory_manager.go:186] "Starting memorymanager" policy="None" May 8 00:13:07.201505 kubelet[3169]: I0508 00:13:07.201502 3169 state_mem.go:35] "Initializing new in-memory state store" May 8 00:13:07.201663 kubelet[3169]: I0508 00:13:07.201648 3169 state_mem.go:75] "Updated machine memory state" May 8 00:13:07.210052 kubelet[3169]: I0508 00:13:07.209956 3169 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:13:07.210232 kubelet[3169]: I0508 00:13:07.210157 3169 eviction_manager.go:189] "Eviction manager: starting control loop" May 8 00:13:07.210232 kubelet[3169]: I0508 00:13:07.210171 3169 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:13:07.211841 kubelet[3169]: I0508 00:13:07.210865 3169 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:13:07.219695 kubelet[3169]: E0508 00:13:07.217236 3169 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 8 00:13:07.253381 kubelet[3169]: I0508 00:13:07.253270 3169 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-158" May 8 00:13:07.257834 kubelet[3169]: I0508 00:13:07.257065 3169 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-158" May 8 00:13:07.257834 kubelet[3169]: I0508 00:13:07.257447 3169 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-158" May 8 00:13:07.327104 kubelet[3169]: I0508 00:13:07.326525 3169 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-16-158" May 8 00:13:07.336438 kubelet[3169]: I0508 00:13:07.336078 3169 kubelet_node_status.go:125] "Node was previously registered" node="ip-172-31-16-158" May 8 00:13:07.336438 kubelet[3169]: I0508 00:13:07.336169 3169 kubelet_node_status.go:79] "Successfully registered node" node="ip-172-31-16-158" May 8 00:13:07.346264 kubelet[3169]: I0508 00:13:07.345590 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5a5f7edb90150ece389355bb097a0e5b-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-158\" (UID: \"5a5f7edb90150ece389355bb097a0e5b\") " pod="kube-system/kube-controller-manager-ip-172-31-16-158" May 8 00:13:07.346264 kubelet[3169]: I0508 00:13:07.345730 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5a5f7edb90150ece389355bb097a0e5b-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-158\" (UID: \"5a5f7edb90150ece389355bb097a0e5b\") " pod="kube-system/kube-controller-manager-ip-172-31-16-158" May 8 00:13:07.346264 kubelet[3169]: I0508 00:13:07.345868 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5a5f7edb90150ece389355bb097a0e5b-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-158\" (UID: \"5a5f7edb90150ece389355bb097a0e5b\") " pod="kube-system/kube-controller-manager-ip-172-31-16-158" May 8 00:13:07.346264 kubelet[3169]: I0508 00:13:07.345898 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5a5f7edb90150ece389355bb097a0e5b-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-158\" (UID: \"5a5f7edb90150ece389355bb097a0e5b\") " pod="kube-system/kube-controller-manager-ip-172-31-16-158" May 8 00:13:07.346264 kubelet[3169]: I0508 00:13:07.346053 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c34d67027fd37ecbcf112474496d19a-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-158\" (UID: \"8c34d67027fd37ecbcf112474496d19a\") " pod="kube-system/kube-apiserver-ip-172-31-16-158" May 8 00:13:07.346566 kubelet[3169]: I0508 00:13:07.346105 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c34d67027fd37ecbcf112474496d19a-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-158\" (UID: \"8c34d67027fd37ecbcf112474496d19a\") " pod="kube-system/kube-apiserver-ip-172-31-16-158" May 8 00:13:07.346566 kubelet[3169]: I0508 
00:13:07.346136 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5a5f7edb90150ece389355bb097a0e5b-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-158\" (UID: \"5a5f7edb90150ece389355bb097a0e5b\") " pod="kube-system/kube-controller-manager-ip-172-31-16-158" May 8 00:13:07.346566 kubelet[3169]: I0508 00:13:07.346298 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/649bdd8d92f0911de8a87c3838928e1c-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-158\" (UID: \"649bdd8d92f0911de8a87c3838928e1c\") " pod="kube-system/kube-scheduler-ip-172-31-16-158" May 8 00:13:07.346566 kubelet[3169]: I0508 00:13:07.346321 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c34d67027fd37ecbcf112474496d19a-ca-certs\") pod \"kube-apiserver-ip-172-31-16-158\" (UID: \"8c34d67027fd37ecbcf112474496d19a\") " pod="kube-system/kube-apiserver-ip-172-31-16-158" May 8 00:13:08.102032 kubelet[3169]: I0508 00:13:08.101991 3169 apiserver.go:52] "Watching apiserver" May 8 00:13:08.145651 kubelet[3169]: I0508 00:13:08.145597 3169 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:13:08.192334 kubelet[3169]: I0508 00:13:08.191453 3169 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-158" May 8 00:13:08.206764 kubelet[3169]: E0508 00:13:08.206593 3169 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-158\" already exists" pod="kube-system/kube-apiserver-ip-172-31-16-158" May 8 00:13:08.247926 kubelet[3169]: I0508 00:13:08.247839 3169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-16-158" podStartSLOduration=1.247802824 podStartE2EDuration="1.247802824s" podCreationTimestamp="2025-05-08 00:13:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:13:08.237131438 +0000 UTC m=+1.233355323" watchObservedRunningTime="2025-05-08 00:13:08.247802824 +0000 UTC m=+1.244026711" May 8 00:13:08.261384 kubelet[3169]: I0508 00:13:08.260305 3169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-16-158" podStartSLOduration=1.260286689 podStartE2EDuration="1.260286689s" podCreationTimestamp="2025-05-08 00:13:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:13:08.248391128 +0000 UTC m=+1.244615065" watchObservedRunningTime="2025-05-08 00:13:08.260286689 +0000 UTC m=+1.256510573" May 8 00:13:12.709034 sudo[2245]: pam_unix(sudo:session): session closed for user root May 8 00:13:12.732265 sshd[2244]: Connection closed by 139.178.68.195 port 49194 May 8 00:13:12.733514 sshd-session[2242]: pam_unix(sshd:session): session closed for user core May 8 00:13:12.737031 systemd[1]: sshd@6-172.31.16.158:22-139.178.68.195:49194.service: Deactivated successfully. May 8 00:13:12.739537 systemd[1]: session-7.scope: Deactivated successfully. May 8 00:13:12.739997 systemd[1]: session-7.scope: Consumed 4.432s CPU time, 147.4M memory peak. 
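The pod_startup_latency_tracker entries above report podStartSLOduration values that appear to equal watchObservedRunningTime minus podCreationTimestamp (1.247802824s for kube-apiserver, 1.260286689s for kube-scheduler). The Go sketch below reproduces that subtraction for the kube-apiserver timestamps exactly as they are printed in the log; it is a back-of-the-envelope check, not kubelet code.

package main

import (
	"fmt"
	"log"
	"time"
)

// Timestamps copied from the pod_startup_latency_tracker entry for
// kube-apiserver-ip-172-31-16-158; the layout matches how they are printed.
const (
	layout   = "2006-01-02 15:04:05.999999999 -0700 MST"
	created  = "2025-05-08 00:13:07 +0000 UTC"
	observed = "2025-05-08 00:13:08.247802824 +0000 UTC"
)

func main() {
	c, err := time.Parse(layout, created)
	if err != nil {
		log.Fatal(err)
	}
	o, err := time.Parse(layout, observed)
	if err != nil {
		log.Fatal(err)
	}
	// Prints 1.247802824s, matching podStartSLOduration in the log.
	fmt.Println(o.Sub(c))
}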
May 8 00:13:12.741244 systemd-logind[1899]: Session 7 logged out. Waiting for processes to exit. May 8 00:13:12.742336 systemd-logind[1899]: Removed session 7. May 8 00:13:13.262396 kubelet[3169]: I0508 00:13:13.262271 3169 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 8 00:13:13.264034 containerd[1915]: time="2025-05-08T00:13:13.264001190Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 8 00:13:13.265390 kubelet[3169]: I0508 00:13:13.265349 3169 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 8 00:13:13.399246 kubelet[3169]: I0508 00:13:13.399132 3169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-16-158" podStartSLOduration=6.399108669 podStartE2EDuration="6.399108669s" podCreationTimestamp="2025-05-08 00:13:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:13:08.261100464 +0000 UTC m=+1.257324352" watchObservedRunningTime="2025-05-08 00:13:13.399108669 +0000 UTC m=+6.395332557" May 8 00:13:13.416266 systemd[1]: Created slice kubepods-besteffort-pod43a3a63b_78e9_4b73_9303_886ee46fbbe6.slice - libcontainer container kubepods-besteffort-pod43a3a63b_78e9_4b73_9303_886ee46fbbe6.slice. May 8 00:13:13.498064 kubelet[3169]: I0508 00:13:13.498024 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43a3a63b-78e9-4b73-9303-886ee46fbbe6-xtables-lock\") pod \"kube-proxy-jvdp6\" (UID: \"43a3a63b-78e9-4b73-9303-886ee46fbbe6\") " pod="kube-system/kube-proxy-jvdp6" May 8 00:13:13.498064 kubelet[3169]: I0508 00:13:13.498072 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/43a3a63b-78e9-4b73-9303-886ee46fbbe6-kube-proxy\") pod \"kube-proxy-jvdp6\" (UID: \"43a3a63b-78e9-4b73-9303-886ee46fbbe6\") " pod="kube-system/kube-proxy-jvdp6" May 8 00:13:13.498064 kubelet[3169]: I0508 00:13:13.498097 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43a3a63b-78e9-4b73-9303-886ee46fbbe6-lib-modules\") pod \"kube-proxy-jvdp6\" (UID: \"43a3a63b-78e9-4b73-9303-886ee46fbbe6\") " pod="kube-system/kube-proxy-jvdp6" May 8 00:13:13.498064 kubelet[3169]: I0508 00:13:13.498121 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7x2h\" (UniqueName: \"kubernetes.io/projected/43a3a63b-78e9-4b73-9303-886ee46fbbe6-kube-api-access-f7x2h\") pod \"kube-proxy-jvdp6\" (UID: \"43a3a63b-78e9-4b73-9303-886ee46fbbe6\") " pod="kube-system/kube-proxy-jvdp6" May 8 00:13:13.613244 kubelet[3169]: E0508 00:13:13.609120 3169 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 8 00:13:13.613244 kubelet[3169]: E0508 00:13:13.609165 3169 projected.go:194] Error preparing data for projected volume kube-api-access-f7x2h for pod kube-system/kube-proxy-jvdp6: configmap "kube-root-ca.crt" not found May 8 00:13:13.613244 kubelet[3169]: E0508 00:13:13.609230 3169 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/43a3a63b-78e9-4b73-9303-886ee46fbbe6-kube-api-access-f7x2h podName:43a3a63b-78e9-4b73-9303-886ee46fbbe6 nodeName:}" failed. No retries permitted until 2025-05-08 00:13:14.109211588 +0000 UTC m=+7.105435455 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-f7x2h" (UniqueName: "kubernetes.io/projected/43a3a63b-78e9-4b73-9303-886ee46fbbe6-kube-api-access-f7x2h") pod "kube-proxy-jvdp6" (UID: "43a3a63b-78e9-4b73-9303-886ee46fbbe6") : configmap "kube-root-ca.crt" not found May 8 00:13:14.190318 systemd[1]: Created slice kubepods-besteffort-pod3f9ca86e_a420_400f_a6e4_0f489cc03102.slice - libcontainer container kubepods-besteffort-pod3f9ca86e_a420_400f_a6e4_0f489cc03102.slice. May 8 00:13:14.204535 kubelet[3169]: I0508 00:13:14.203982 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3f9ca86e-a420-400f-a6e4-0f489cc03102-var-lib-calico\") pod \"tigera-operator-789496d6f5-k28qn\" (UID: \"3f9ca86e-a420-400f-a6e4-0f489cc03102\") " pod="tigera-operator/tigera-operator-789496d6f5-k28qn" May 8 00:13:14.204535 kubelet[3169]: I0508 00:13:14.204030 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4k2x\" (UniqueName: \"kubernetes.io/projected/3f9ca86e-a420-400f-a6e4-0f489cc03102-kube-api-access-m4k2x\") pod \"tigera-operator-789496d6f5-k28qn\" (UID: \"3f9ca86e-a420-400f-a6e4-0f489cc03102\") " pod="tigera-operator/tigera-operator-789496d6f5-k28qn" May 8 00:13:14.326000 containerd[1915]: time="2025-05-08T00:13:14.325955405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jvdp6,Uid:43a3a63b-78e9-4b73-9303-886ee46fbbe6,Namespace:kube-system,Attempt:0,}" May 8 00:13:14.359352 containerd[1915]: time="2025-05-08T00:13:14.359023826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:13:14.360062 containerd[1915]: time="2025-05-08T00:13:14.360007864Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:13:14.360147 containerd[1915]: time="2025-05-08T00:13:14.360043657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:13:14.360147 containerd[1915]: time="2025-05-08T00:13:14.360134217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:13:14.382997 systemd[1]: Started cri-containerd-a4768bc000fd1d6b46998bbb3f474399241aa9dcae95f7595e3c9f28527921ea.scope - libcontainer container a4768bc000fd1d6b46998bbb3f474399241aa9dcae95f7595e3c9f28527921ea. 
May 8 00:13:14.408025 containerd[1915]: time="2025-05-08T00:13:14.407992633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jvdp6,Uid:43a3a63b-78e9-4b73-9303-886ee46fbbe6,Namespace:kube-system,Attempt:0,} returns sandbox id \"a4768bc000fd1d6b46998bbb3f474399241aa9dcae95f7595e3c9f28527921ea\"" May 8 00:13:14.412761 containerd[1915]: time="2025-05-08T00:13:14.412724255Z" level=info msg="CreateContainer within sandbox \"a4768bc000fd1d6b46998bbb3f474399241aa9dcae95f7595e3c9f28527921ea\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 8 00:13:14.430115 containerd[1915]: time="2025-05-08T00:13:14.430000232Z" level=info msg="CreateContainer within sandbox \"a4768bc000fd1d6b46998bbb3f474399241aa9dcae95f7595e3c9f28527921ea\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9663cc2af3986c193e1d97008f37cf36ed09b65357ff92cc1fdb52aa673f8323\"" May 8 00:13:14.430839 containerd[1915]: time="2025-05-08T00:13:14.430610586Z" level=info msg="StartContainer for \"9663cc2af3986c193e1d97008f37cf36ed09b65357ff92cc1fdb52aa673f8323\"" May 8 00:13:14.461032 systemd[1]: Started cri-containerd-9663cc2af3986c193e1d97008f37cf36ed09b65357ff92cc1fdb52aa673f8323.scope - libcontainer container 9663cc2af3986c193e1d97008f37cf36ed09b65357ff92cc1fdb52aa673f8323. May 8 00:13:14.493402 containerd[1915]: time="2025-05-08T00:13:14.493367514Z" level=info msg="StartContainer for \"9663cc2af3986c193e1d97008f37cf36ed09b65357ff92cc1fdb52aa673f8323\" returns successfully" May 8 00:13:14.495505 containerd[1915]: time="2025-05-08T00:13:14.495470166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-k28qn,Uid:3f9ca86e-a420-400f-a6e4-0f489cc03102,Namespace:tigera-operator,Attempt:0,}" May 8 00:13:14.524026 containerd[1915]: time="2025-05-08T00:13:14.523580699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:13:14.524026 containerd[1915]: time="2025-05-08T00:13:14.523641370Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:13:14.524026 containerd[1915]: time="2025-05-08T00:13:14.523655700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:13:14.524026 containerd[1915]: time="2025-05-08T00:13:14.523740411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:13:14.546036 systemd[1]: Started cri-containerd-ff56a4c4a62c46d9e36a19904e1c25b9e80b5d07c04fe63eb4769fa56db731ae.scope - libcontainer container ff56a4c4a62c46d9e36a19904e1c25b9e80b5d07c04fe63eb4769fa56db731ae. 
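The containerd entries above show the usual ordering for bringing up kube-proxy: RunPodSandbox returns a sandbox id, CreateContainer is issued against that sandbox and returns a container id, and StartContainer is then called on that container id. The Go sketch below only models that call order with a toy in-memory runtime; the type and method signatures are illustrative stand-ins, not the real CRI client API.

package main

import "fmt"

// toyRuntime is an illustrative stand-in for a CRI runtime service; the
// method names mirror the operations visible in the containerd log above,
// but the signatures are simplified and hypothetical.
type toyRuntime struct{ nextID int }

func (r *toyRuntime) RunPodSandbox(name, namespace string) string {
	r.nextID++
	id := fmt.Sprintf("sandbox-%d", r.nextID)
	fmt.Printf("RunPodSandbox %s/%s -> %s\n", namespace, name, id)
	return id
}

func (r *toyRuntime) CreateContainer(sandboxID, name string) string {
	r.nextID++
	id := fmt.Sprintf("container-%d", r.nextID)
	fmt.Printf("CreateContainer %q in %s -> %s\n", name, sandboxID, id)
	return id
}

func (r *toyRuntime) StartContainer(containerID string) {
	fmt.Printf("StartContainer %s -> success\n", containerID)
}

func main() {
	rt := &toyRuntime{}
	// Same ordering as the kube-proxy-jvdp6 entries in the log:
	// sandbox first, then the container inside it, then start.
	sb := rt.RunPodSandbox("kube-proxy-jvdp6", "kube-system")
	ctr := rt.CreateContainer(sb, "kube-proxy")
	rt.StartContainer(ctr)
}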
May 8 00:13:14.589841 containerd[1915]: time="2025-05-08T00:13:14.589682494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-k28qn,Uid:3f9ca86e-a420-400f-a6e4-0f489cc03102,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ff56a4c4a62c46d9e36a19904e1c25b9e80b5d07c04fe63eb4769fa56db731ae\"" May 8 00:13:14.593439 containerd[1915]: time="2025-05-08T00:13:14.592792364Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 8 00:13:15.360776 kubelet[3169]: I0508 00:13:15.360641 3169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jvdp6" podStartSLOduration=2.360621926 podStartE2EDuration="2.360621926s" podCreationTimestamp="2025-05-08 00:13:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:13:15.223454751 +0000 UTC m=+8.219678637" watchObservedRunningTime="2025-05-08 00:13:15.360621926 +0000 UTC m=+8.356845794" May 8 00:13:16.499387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2861195503.mount: Deactivated successfully. May 8 00:13:17.165934 containerd[1915]: time="2025-05-08T00:13:17.165838916Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:17.167139 containerd[1915]: time="2025-05-08T00:13:17.167083519Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" May 8 00:13:17.168074 containerd[1915]: time="2025-05-08T00:13:17.168019078Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:17.172592 containerd[1915]: time="2025-05-08T00:13:17.170897381Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:17.172592 containerd[1915]: time="2025-05-08T00:13:17.171604014Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 2.578752355s" May 8 00:13:17.172592 containerd[1915]: time="2025-05-08T00:13:17.171631369Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 8 00:13:17.219073 containerd[1915]: time="2025-05-08T00:13:17.218822669Z" level=info msg="CreateContainer within sandbox \"ff56a4c4a62c46d9e36a19904e1c25b9e80b5d07c04fe63eb4769fa56db731ae\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 8 00:13:17.242083 containerd[1915]: time="2025-05-08T00:13:17.242036862Z" level=info msg="CreateContainer within sandbox \"ff56a4c4a62c46d9e36a19904e1c25b9e80b5d07c04fe63eb4769fa56db731ae\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"466d0753fe545df3f7298ed0601cebd6a5643e007c39f085e214d56558b3d496\"" May 8 00:13:17.244579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2393697461.mount: Deactivated successfully. 
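The tigera-operator pull above completes in 2.578752355s after reading 22002662 bytes (unpacked size 21998657). The sketch below is just the throughput arithmetic on those logged numbers, roughly 8.5 MB/s; the same calculation applies to the calico image pulls later in the log.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the quay.io/tigera/operator:v1.36.7 pull above.
	const bytesRead = 22002662 // "active requests=0, bytes read=22002662"
	dur, err := time.ParseDuration("2.578752355s")
	if err != nil {
		panic(err)
	}
	mbPerSec := float64(bytesRead) / dur.Seconds() / 1e6
	// Prints roughly 8.53 MB/s for this pull.
	fmt.Printf("pulled %d bytes in %s (~%.2f MB/s)\n", bytesRead, dur, mbPerSec)
}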
May 8 00:13:17.248846 containerd[1915]: time="2025-05-08T00:13:17.247923946Z" level=info msg="StartContainer for \"466d0753fe545df3f7298ed0601cebd6a5643e007c39f085e214d56558b3d496\"" May 8 00:13:17.318080 systemd[1]: Started cri-containerd-466d0753fe545df3f7298ed0601cebd6a5643e007c39f085e214d56558b3d496.scope - libcontainer container 466d0753fe545df3f7298ed0601cebd6a5643e007c39f085e214d56558b3d496. May 8 00:13:17.350580 containerd[1915]: time="2025-05-08T00:13:17.350526653Z" level=info msg="StartContainer for \"466d0753fe545df3f7298ed0601cebd6a5643e007c39f085e214d56558b3d496\" returns successfully" May 8 00:13:19.112257 update_engine[1900]: I20250508 00:13:19.112186 1900 update_attempter.cc:509] Updating boot flags... May 8 00:13:19.199851 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (3558) May 8 00:13:19.331034 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (3561) May 8 00:13:41.412240 systemd[1]: Started sshd@7-172.31.16.158:22-139.178.68.195:51076.service - OpenSSH per-connection server daemon (139.178.68.195:51076). May 8 00:13:41.586030 sshd[3731]: Accepted publickey for core from 139.178.68.195 port 51076 ssh2: RSA SHA256:KzzWn6O+Z3VZj7W5xu29TBqYrCKq78VLDb+pogeWJHY May 8 00:13:41.587377 sshd-session[3731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:13:41.596876 systemd-logind[1899]: New session 8 of user core. May 8 00:13:41.608078 systemd[1]: Started session-8.scope - Session 8 of User core. May 8 00:13:41.812842 sshd[3733]: Connection closed by 139.178.68.195 port 51076 May 8 00:13:41.814153 sshd-session[3731]: pam_unix(sshd:session): session closed for user core May 8 00:13:41.817073 systemd[1]: sshd@7-172.31.16.158:22-139.178.68.195:51076.service: Deactivated successfully. May 8 00:13:41.819102 systemd[1]: session-8.scope: Deactivated successfully. May 8 00:13:41.820486 systemd-logind[1899]: Session 8 logged out. Waiting for processes to exit. May 8 00:13:41.821624 systemd-logind[1899]: Removed session 8. May 8 00:13:46.851110 systemd[1]: Started sshd@8-172.31.16.158:22-139.178.68.195:39674.service - OpenSSH per-connection server daemon (139.178.68.195:39674). May 8 00:13:47.018530 sshd[3748]: Accepted publickey for core from 139.178.68.195 port 39674 ssh2: RSA SHA256:KzzWn6O+Z3VZj7W5xu29TBqYrCKq78VLDb+pogeWJHY May 8 00:13:47.019146 sshd-session[3748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:13:47.024017 systemd-logind[1899]: New session 9 of user core. May 8 00:13:47.030007 systemd[1]: Started session-9.scope - Session 9 of User core. May 8 00:13:47.230029 sshd[3750]: Connection closed by 139.178.68.195 port 39674 May 8 00:13:47.230957 sshd-session[3748]: pam_unix(sshd:session): session closed for user core May 8 00:13:47.233772 systemd[1]: sshd@8-172.31.16.158:22-139.178.68.195:39674.service: Deactivated successfully. May 8 00:13:47.235508 systemd[1]: session-9.scope: Deactivated successfully. May 8 00:13:47.236799 systemd-logind[1899]: Session 9 logged out. Waiting for processes to exit. May 8 00:13:47.238039 systemd-logind[1899]: Removed session 9. 
May 8 00:13:50.625111 kubelet[3169]: I0508 00:13:50.624568 3169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-789496d6f5-k28qn" podStartSLOduration=34.031295254 podStartE2EDuration="36.624547491s" podCreationTimestamp="2025-05-08 00:13:14 +0000 UTC" firstStartedPulling="2025-05-08 00:13:14.592129764 +0000 UTC m=+7.588353646" lastFinishedPulling="2025-05-08 00:13:17.185382004 +0000 UTC m=+10.181605883" observedRunningTime="2025-05-08 00:13:18.260147715 +0000 UTC m=+11.256371603" watchObservedRunningTime="2025-05-08 00:13:50.624547491 +0000 UTC m=+43.620771376" May 8 00:13:50.638825 systemd[1]: Created slice kubepods-besteffort-pod4b0e2cd4_769b_4e5f_9e77_ab11e14d99c7.slice - libcontainer container kubepods-besteffort-pod4b0e2cd4_769b_4e5f_9e77_ab11e14d99c7.slice. May 8 00:13:50.753826 kubelet[3169]: I0508 00:13:50.753260 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-var-lib-calico\") pod \"calico-node-fq75b\" (UID: \"4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7\") " pod="calico-system/calico-node-fq75b" May 8 00:13:50.753826 kubelet[3169]: I0508 00:13:50.753313 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-cni-net-dir\") pod \"calico-node-fq75b\" (UID: \"4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7\") " pod="calico-system/calico-node-fq75b" May 8 00:13:50.753826 kubelet[3169]: I0508 00:13:50.753339 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-xtables-lock\") pod \"calico-node-fq75b\" (UID: \"4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7\") " pod="calico-system/calico-node-fq75b" May 8 00:13:50.753826 kubelet[3169]: I0508 00:13:50.753371 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-policysync\") pod \"calico-node-fq75b\" (UID: \"4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7\") " pod="calico-system/calico-node-fq75b" May 8 00:13:50.753826 kubelet[3169]: I0508 00:13:50.753394 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-node-certs\") pod \"calico-node-fq75b\" (UID: \"4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7\") " pod="calico-system/calico-node-fq75b" May 8 00:13:50.754164 kubelet[3169]: I0508 00:13:50.753417 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-var-run-calico\") pod \"calico-node-fq75b\" (UID: \"4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7\") " pod="calico-system/calico-node-fq75b" May 8 00:13:50.754164 kubelet[3169]: I0508 00:13:50.753445 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-lib-modules\") pod \"calico-node-fq75b\" (UID: \"4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7\") " pod="calico-system/calico-node-fq75b" May 8 00:13:50.754164 kubelet[3169]: I0508 
00:13:50.753465 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-cni-log-dir\") pod \"calico-node-fq75b\" (UID: \"4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7\") " pod="calico-system/calico-node-fq75b" May 8 00:13:50.754164 kubelet[3169]: I0508 00:13:50.753488 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-flexvol-driver-host\") pod \"calico-node-fq75b\" (UID: \"4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7\") " pod="calico-system/calico-node-fq75b" May 8 00:13:50.754164 kubelet[3169]: I0508 00:13:50.753514 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-tigera-ca-bundle\") pod \"calico-node-fq75b\" (UID: \"4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7\") " pod="calico-system/calico-node-fq75b" May 8 00:13:50.754379 kubelet[3169]: I0508 00:13:50.753542 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-cni-bin-dir\") pod \"calico-node-fq75b\" (UID: \"4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7\") " pod="calico-system/calico-node-fq75b" May 8 00:13:50.754379 kubelet[3169]: I0508 00:13:50.753563 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp84t\" (UniqueName: \"kubernetes.io/projected/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-kube-api-access-cp84t\") pod \"calico-node-fq75b\" (UID: \"4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7\") " pod="calico-system/calico-node-fq75b" May 8 00:13:50.763244 kubelet[3169]: E0508 00:13:50.763185 3169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-znh85" podUID="ce29167f-2f9b-4aa2-9647-1f758fb55a45" May 8 00:13:50.855647 kubelet[3169]: I0508 00:13:50.854631 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ce29167f-2f9b-4aa2-9647-1f758fb55a45-registration-dir\") pod \"csi-node-driver-znh85\" (UID: \"ce29167f-2f9b-4aa2-9647-1f758fb55a45\") " pod="calico-system/csi-node-driver-znh85" May 8 00:13:50.855647 kubelet[3169]: I0508 00:13:50.854692 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99smq\" (UniqueName: \"kubernetes.io/projected/ce29167f-2f9b-4aa2-9647-1f758fb55a45-kube-api-access-99smq\") pod \"csi-node-driver-znh85\" (UID: \"ce29167f-2f9b-4aa2-9647-1f758fb55a45\") " pod="calico-system/csi-node-driver-znh85" May 8 00:13:50.855647 kubelet[3169]: I0508 00:13:50.854771 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ce29167f-2f9b-4aa2-9647-1f758fb55a45-socket-dir\") pod \"csi-node-driver-znh85\" (UID: \"ce29167f-2f9b-4aa2-9647-1f758fb55a45\") " pod="calico-system/csi-node-driver-znh85" May 8 00:13:50.855647 kubelet[3169]: I0508 00:13:50.854908 3169 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ce29167f-2f9b-4aa2-9647-1f758fb55a45-kubelet-dir\") pod \"csi-node-driver-znh85\" (UID: \"ce29167f-2f9b-4aa2-9647-1f758fb55a45\") " pod="calico-system/csi-node-driver-znh85" May 8 00:13:50.855647 kubelet[3169]: I0508 00:13:50.854928 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ce29167f-2f9b-4aa2-9647-1f758fb55a45-varrun\") pod \"csi-node-driver-znh85\" (UID: \"ce29167f-2f9b-4aa2-9647-1f758fb55a45\") " pod="calico-system/csi-node-driver-znh85" May 8 00:13:50.863866 kubelet[3169]: E0508 00:13:50.863822 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:50.864164 kubelet[3169]: W0508 00:13:50.864111 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:50.864379 kubelet[3169]: E0508 00:13:50.864352 3169 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:50.865320 kubelet[3169]: E0508 00:13:50.865301 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:50.865546 kubelet[3169]: W0508 00:13:50.865526 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:50.865729 kubelet[3169]: E0508 00:13:50.865713 3169 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:50.893526 kubelet[3169]: E0508 00:13:50.893391 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:50.893526 kubelet[3169]: W0508 00:13:50.893417 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:50.893784 kubelet[3169]: E0508 00:13:50.893738 3169 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:13:50.945331 containerd[1915]: time="2025-05-08T00:13:50.944884159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fq75b,Uid:4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7,Namespace:calico-system,Attempt:0,}" May 8 00:13:50.956719 kubelet[3169]: E0508 00:13:50.956675 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:50.956983 kubelet[3169]: W0508 00:13:50.956917 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:50.956983 kubelet[3169]: E0508 00:13:50.956943 3169 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:50.958610 kubelet[3169]: E0508 00:13:50.958366 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:50.958610 kubelet[3169]: W0508 00:13:50.958389 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:50.960294 kubelet[3169]: E0508 00:13:50.958883 3169 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:50.960294 kubelet[3169]: E0508 00:13:50.960144 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:50.960515 kubelet[3169]: W0508 00:13:50.960468 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:50.961005 kubelet[3169]: E0508 00:13:50.960637 3169 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:50.961265 kubelet[3169]: E0508 00:13:50.961251 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:50.961426 kubelet[3169]: W0508 00:13:50.961385 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:50.961583 kubelet[3169]: E0508 00:13:50.961543 3169 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:13:50.962477 kubelet[3169]: E0508 00:13:50.962462 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:50.962477 kubelet[3169]: W0508 00:13:50.962508 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:50.962740 kubelet[3169]: E0508 00:13:50.962604 3169 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:50.964475 kubelet[3169]: E0508 00:13:50.964244 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:50.964475 kubelet[3169]: W0508 00:13:50.964261 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:50.964475 kubelet[3169]: E0508 00:13:50.964350 3169 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:50.966078 kubelet[3169]: E0508 00:13:50.964777 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:50.966078 kubelet[3169]: W0508 00:13:50.964789 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:50.966078 kubelet[3169]: E0508 00:13:50.965015 3169 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:50.967198 kubelet[3169]: E0508 00:13:50.966998 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:50.967198 kubelet[3169]: W0508 00:13:50.967015 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:50.967198 kubelet[3169]: E0508 00:13:50.967117 3169 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:50.968147 kubelet[3169]: E0508 00:13:50.968134 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:50.968414 kubelet[3169]: W0508 00:13:50.968340 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:50.970733 kubelet[3169]: E0508 00:13:50.970177 3169 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:13:50.970733 kubelet[3169]: E0508 00:13:50.970445 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:50.970733 kubelet[3169]: W0508 00:13:50.970456 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:50.971848 kubelet[3169]: E0508 00:13:50.971613 3169 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:50.972130 kubelet[3169]: E0508 00:13:50.972116 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:50.972500 kubelet[3169]: W0508 00:13:50.972420 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:50.973557 kubelet[3169]: E0508 00:13:50.973334 3169 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:50.973988 kubelet[3169]: E0508 00:13:50.973756 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:50.973988 kubelet[3169]: W0508 00:13:50.973769 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:50.974751 kubelet[3169]: E0508 00:13:50.974658 3169 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:50.976163 kubelet[3169]: E0508 00:13:50.976029 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:50.976163 kubelet[3169]: W0508 00:13:50.976046 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:50.976641 kubelet[3169]: E0508 00:13:50.976624 3169 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:50.977550 kubelet[3169]: E0508 00:13:50.977475 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:50.977550 kubelet[3169]: W0508 00:13:50.977491 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:50.978238 kubelet[3169]: E0508 00:13:50.978074 3169 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:13:50.979010 kubelet[3169]: E0508 00:13:50.978903 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:50.979010 kubelet[3169]: W0508 00:13:50.978918 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:50.979542 kubelet[3169]: E0508 00:13:50.979387 3169 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:50.980665 kubelet[3169]: E0508 00:13:50.980127 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:50.981034 kubelet[3169]: W0508 00:13:50.980142 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:50.982678 kubelet[3169]: E0508 00:13:50.982293 3169 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:50.983269 kubelet[3169]: E0508 00:13:50.983052 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:50.983581 kubelet[3169]: W0508 00:13:50.983391 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:50.984962 kubelet[3169]: E0508 00:13:50.984520 3169 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:50.984962 kubelet[3169]: E0508 00:13:50.984715 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:50.984962 kubelet[3169]: W0508 00:13:50.984727 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:50.984962 kubelet[3169]: E0508 00:13:50.984852 3169 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:50.985501 kubelet[3169]: E0508 00:13:50.985404 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:50.985694 kubelet[3169]: W0508 00:13:50.985654 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:50.986065 kubelet[3169]: E0508 00:13:50.985932 3169 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:13:50.987641 kubelet[3169]: E0508 00:13:50.987282 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:50.987641 kubelet[3169]: W0508 00:13:50.987300 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:50.988440 kubelet[3169]: E0508 00:13:50.988310 3169 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:50.989006 kubelet[3169]: E0508 00:13:50.988991 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:50.989508 kubelet[3169]: W0508 00:13:50.989360 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:50.989897 kubelet[3169]: E0508 00:13:50.989836 3169 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:50.990526 kubelet[3169]: E0508 00:13:50.989991 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:50.990526 kubelet[3169]: W0508 00:13:50.990005 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:50.990526 kubelet[3169]: E0508 00:13:50.990431 3169 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:50.990723 kubelet[3169]: E0508 00:13:50.990713 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:50.990769 kubelet[3169]: W0508 00:13:50.990761 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:50.991303 kubelet[3169]: E0508 00:13:50.991233 3169 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:50.993170 kubelet[3169]: E0508 00:13:50.993154 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:50.993848 kubelet[3169]: W0508 00:13:50.993344 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:50.993848 kubelet[3169]: E0508 00:13:50.993433 3169 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:13:50.994496 kubelet[3169]: E0508 00:13:50.994482 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:50.994589 kubelet[3169]: W0508 00:13:50.994576 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:50.994667 kubelet[3169]: E0508 00:13:50.994655 3169 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:51.005453 containerd[1915]: time="2025-05-08T00:13:51.005195462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:13:51.005912 containerd[1915]: time="2025-05-08T00:13:51.005659963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:13:51.006238 containerd[1915]: time="2025-05-08T00:13:51.005759159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:13:51.006529 containerd[1915]: time="2025-05-08T00:13:51.006342241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:13:51.027949 kubelet[3169]: E0508 00:13:51.025321 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:51.027949 kubelet[3169]: W0508 00:13:51.025348 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:51.027949 kubelet[3169]: E0508 00:13:51.025382 3169 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:51.049098 systemd[1]: Started cri-containerd-e69de7e8d23ddc780eb48e3c2a2574229691227163f580ac5937ca29ef9e6f0d.scope - libcontainer container e69de7e8d23ddc780eb48e3c2a2574229691227163f580ac5937ca29ef9e6f0d. May 8 00:13:51.092445 containerd[1915]: time="2025-05-08T00:13:51.092396707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fq75b,Uid:4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7,Namespace:calico-system,Attempt:0,} returns sandbox id \"e69de7e8d23ddc780eb48e3c2a2574229691227163f580ac5937ca29ef9e6f0d\"" May 8 00:13:51.096144 containerd[1915]: time="2025-05-08T00:13:51.096094943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 8 00:13:52.274150 systemd[1]: Started sshd@9-172.31.16.158:22-139.178.68.195:39684.service - OpenSSH per-connection server daemon (139.178.68.195:39684). May 8 00:13:52.447878 sshd[3848]: Accepted publickey for core from 139.178.68.195 port 39684 ssh2: RSA SHA256:KzzWn6O+Z3VZj7W5xu29TBqYrCKq78VLDb+pogeWJHY May 8 00:13:52.450288 sshd-session[3848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:13:52.455174 systemd-logind[1899]: New session 10 of user core. May 8 00:13:52.461076 systemd[1]: Started session-10.scope - Session 10 of User core. 
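The repeated FlexVolume failures above share one root cause: the nodeagent~uds driver binary is not installed, so the driver call produces no output and decoding that empty output as JSON fails. Both error strings in the log can be reproduced from the Go standard library, as sketched below; the binary name used here is the uds driver from the log, and any absent executable behaves the same way.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Decoding an empty driver response fails exactly as logged:
	// "unexpected end of JSON input".
	var out map[string]interface{}
	if err := json.Unmarshal([]byte(""), &out); err != nil {
		fmt.Println("unmarshal:", err)
	}

	// Looking up a driver binary that is not on this machine fails as logged:
	// "executable file not found in $PATH".
	if _, err := exec.LookPath("uds"); err != nil {
		fmt.Println("lookup:", err)
	}
}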
May 8 00:13:52.727091 sshd[3850]: Connection closed by 139.178.68.195 port 39684 May 8 00:13:52.729720 sshd-session[3848]: pam_unix(sshd:session): session closed for user core May 8 00:13:52.736656 systemd-logind[1899]: Session 10 logged out. Waiting for processes to exit. May 8 00:13:52.737483 systemd[1]: sshd@9-172.31.16.158:22-139.178.68.195:39684.service: Deactivated successfully. May 8 00:13:52.740748 systemd[1]: session-10.scope: Deactivated successfully. May 8 00:13:52.743269 systemd-logind[1899]: Removed session 10. May 8 00:13:52.834327 containerd[1915]: time="2025-05-08T00:13:52.834270234Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:52.835395 containerd[1915]: time="2025-05-08T00:13:52.835263339Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 8 00:13:52.836610 containerd[1915]: time="2025-05-08T00:13:52.836575501Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:52.840201 containerd[1915]: time="2025-05-08T00:13:52.839551047Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:52.840201 containerd[1915]: time="2025-05-08T00:13:52.840074905Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.743641834s" May 8 00:13:52.840201 containerd[1915]: time="2025-05-08T00:13:52.840106684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 8 00:13:52.843419 containerd[1915]: time="2025-05-08T00:13:52.843155020Z" level=info msg="CreateContainer within sandbox \"e69de7e8d23ddc780eb48e3c2a2574229691227163f580ac5937ca29ef9e6f0d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 8 00:13:52.877730 containerd[1915]: time="2025-05-08T00:13:52.877674323Z" level=info msg="CreateContainer within sandbox \"e69de7e8d23ddc780eb48e3c2a2574229691227163f580ac5937ca29ef9e6f0d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"14c6f2e645c4e7f78805c9a59fcfd047e72222c58f3845a17ad2a67cd1d2564c\"" May 8 00:13:52.878364 containerd[1915]: time="2025-05-08T00:13:52.878293956Z" level=info msg="StartContainer for \"14c6f2e645c4e7f78805c9a59fcfd047e72222c58f3845a17ad2a67cd1d2564c\"" May 8 00:13:52.924061 systemd[1]: Started cri-containerd-14c6f2e645c4e7f78805c9a59fcfd047e72222c58f3845a17ad2a67cd1d2564c.scope - libcontainer container 14c6f2e645c4e7f78805c9a59fcfd047e72222c58f3845a17ad2a67cd1d2564c. 
May 8 00:13:52.957037 containerd[1915]: time="2025-05-08T00:13:52.956990680Z" level=info msg="StartContainer for \"14c6f2e645c4e7f78805c9a59fcfd047e72222c58f3845a17ad2a67cd1d2564c\" returns successfully" May 8 00:13:52.975644 systemd[1]: cri-containerd-14c6f2e645c4e7f78805c9a59fcfd047e72222c58f3845a17ad2a67cd1d2564c.scope: Deactivated successfully. May 8 00:13:53.003605 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14c6f2e645c4e7f78805c9a59fcfd047e72222c58f3845a17ad2a67cd1d2564c-rootfs.mount: Deactivated successfully. May 8 00:13:53.062026 containerd[1915]: time="2025-05-08T00:13:53.047091159Z" level=info msg="shim disconnected" id=14c6f2e645c4e7f78805c9a59fcfd047e72222c58f3845a17ad2a67cd1d2564c namespace=k8s.io May 8 00:13:53.062026 containerd[1915]: time="2025-05-08T00:13:53.061920011Z" level=warning msg="cleaning up after shim disconnected" id=14c6f2e645c4e7f78805c9a59fcfd047e72222c58f3845a17ad2a67cd1d2564c namespace=k8s.io May 8 00:13:53.062026 containerd[1915]: time="2025-05-08T00:13:53.061940025Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:13:53.153172 kubelet[3169]: E0508 00:13:53.152673 3169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-znh85" podUID="ce29167f-2f9b-4aa2-9647-1f758fb55a45" May 8 00:13:53.312161 containerd[1915]: time="2025-05-08T00:13:53.312013772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 8 00:13:55.153611 kubelet[3169]: E0508 00:13:55.153550 3169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-znh85" podUID="ce29167f-2f9b-4aa2-9647-1f758fb55a45" May 8 00:13:57.153404 kubelet[3169]: E0508 00:13:57.153355 3169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-znh85" podUID="ce29167f-2f9b-4aa2-9647-1f758fb55a45" May 8 00:13:57.766167 systemd[1]: Started sshd@10-172.31.16.158:22-139.178.68.195:46788.service - OpenSSH per-connection server daemon (139.178.68.195:46788). May 8 00:13:57.984887 sshd[3942]: Accepted publickey for core from 139.178.68.195 port 46788 ssh2: RSA SHA256:KzzWn6O+Z3VZj7W5xu29TBqYrCKq78VLDb+pogeWJHY May 8 00:13:57.987771 sshd-session[3942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:13:57.995748 systemd-logind[1899]: New session 11 of user core. May 8 00:13:58.002090 systemd[1]: Started session-11.scope - Session 11 of User core. May 8 00:13:58.318704 sshd[3944]: Connection closed by 139.178.68.195 port 46788 May 8 00:13:58.319730 sshd-session[3942]: pam_unix(sshd:session): session closed for user core May 8 00:13:58.327873 systemd-logind[1899]: Session 11 logged out. Waiting for processes to exit. May 8 00:13:58.328343 systemd[1]: sshd@10-172.31.16.158:22-139.178.68.195:46788.service: Deactivated successfully. May 8 00:13:58.330923 systemd[1]: session-11.scope: Deactivated successfully. May 8 00:13:58.335114 systemd-logind[1899]: Removed session 11. 
May 8 00:13:58.436995 containerd[1915]: time="2025-05-08T00:13:58.436945163Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:58.438300 containerd[1915]: time="2025-05-08T00:13:58.438224020Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 8 00:13:58.439741 containerd[1915]: time="2025-05-08T00:13:58.439683746Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:58.442234 containerd[1915]: time="2025-05-08T00:13:58.442177911Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:58.443401 containerd[1915]: time="2025-05-08T00:13:58.442751681Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 5.130677467s" May 8 00:13:58.443401 containerd[1915]: time="2025-05-08T00:13:58.442782763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 8 00:13:58.445474 containerd[1915]: time="2025-05-08T00:13:58.445441962Z" level=info msg="CreateContainer within sandbox \"e69de7e8d23ddc780eb48e3c2a2574229691227163f580ac5937ca29ef9e6f0d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 8 00:13:58.466159 containerd[1915]: time="2025-05-08T00:13:58.466120410Z" level=info msg="CreateContainer within sandbox \"e69de7e8d23ddc780eb48e3c2a2574229691227163f580ac5937ca29ef9e6f0d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c28e260e402e5ab5164fe9b38a512017544b58b09482d01f48101d853a4e3cab\"" May 8 00:13:58.466828 containerd[1915]: time="2025-05-08T00:13:58.466788857Z" level=info msg="StartContainer for \"c28e260e402e5ab5164fe9b38a512017544b58b09482d01f48101d853a4e3cab\"" May 8 00:13:58.563114 systemd[1]: Started cri-containerd-c28e260e402e5ab5164fe9b38a512017544b58b09482d01f48101d853a4e3cab.scope - libcontainer container c28e260e402e5ab5164fe9b38a512017544b58b09482d01f48101d853a4e3cab. May 8 00:13:58.643053 containerd[1915]: time="2025-05-08T00:13:58.638460901Z" level=info msg="StartContainer for \"c28e260e402e5ab5164fe9b38a512017544b58b09482d01f48101d853a4e3cab\" returns successfully" May 8 00:13:59.154466 kubelet[3169]: E0508 00:13:59.152974 3169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-znh85" podUID="ce29167f-2f9b-4aa2-9647-1f758fb55a45" May 8 00:14:00.028169 systemd[1]: cri-containerd-c28e260e402e5ab5164fe9b38a512017544b58b09482d01f48101d853a4e3cab.scope: Deactivated successfully. 
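The reported pull time for the cni image ("5.130677467s") lines up with the timestamps in the log itself: the PullImage request was logged at 00:13:53.312013772Z and the Pulled message at 00:13:58.442751681Z, roughly 5.1307 s apart, with the tiny remainder being time spent outside the measured pull. A minimal sketch of checking such durations from the RFC 3339 timestamps:

// pull_duration_check.go -- sanity-checks a containerd pull duration against the log timestamps above.
package main

import (
	"fmt"
	"time"
)

func main() {
	start, _ := time.Parse(time.RFC3339Nano, "2025-05-08T00:13:53.312013772Z") // PullImage logged
	done, _ := time.Parse(time.RFC3339Nano, "2025-05-08T00:13:58.442751681Z")  // Pulled ... returns image reference
	fmt.Println("wall-clock gap:", done.Sub(start)) // ~5.130737909s, versus the reported 5.130677467s
}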
May 8 00:14:00.028880 systemd[1]: cri-containerd-c28e260e402e5ab5164fe9b38a512017544b58b09482d01f48101d853a4e3cab.scope: Consumed 594ms CPU time, 150.8M memory peak, 5.6M read from disk, 154M written to disk. May 8 00:14:00.155540 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c28e260e402e5ab5164fe9b38a512017544b58b09482d01f48101d853a4e3cab-rootfs.mount: Deactivated successfully. May 8 00:14:00.197696 kubelet[3169]: I0508 00:14:00.197495 3169 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 8 00:14:00.263370 containerd[1915]: time="2025-05-08T00:14:00.263310259Z" level=info msg="shim disconnected" id=c28e260e402e5ab5164fe9b38a512017544b58b09482d01f48101d853a4e3cab namespace=k8s.io May 8 00:14:00.263785 containerd[1915]: time="2025-05-08T00:14:00.263376447Z" level=warning msg="cleaning up after shim disconnected" id=c28e260e402e5ab5164fe9b38a512017544b58b09482d01f48101d853a4e3cab namespace=k8s.io May 8 00:14:00.263785 containerd[1915]: time="2025-05-08T00:14:00.263386533Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:14:00.356050 containerd[1915]: time="2025-05-08T00:14:00.354548242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 8 00:14:01.160721 systemd[1]: Created slice kubepods-besteffort-podce29167f_2f9b_4aa2_9647_1f758fb55a45.slice - libcontainer container kubepods-besteffort-podce29167f_2f9b_4aa2_9647_1f758fb55a45.slice. May 8 00:14:01.164140 containerd[1915]: time="2025-05-08T00:14:01.164098689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-znh85,Uid:ce29167f-2f9b-4aa2-9647-1f758fb55a45,Namespace:calico-system,Attempt:0,}" May 8 00:14:03.292117 containerd[1915]: time="2025-05-08T00:14:03.292055073Z" level=error msg="Failed to destroy network for sandbox \"428a297cc6348a293897dd5d9ddc2c19017615706d962ecd538037395df630a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:14:03.295299 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-428a297cc6348a293897dd5d9ddc2c19017615706d962ecd538037395df630a8-shm.mount: Deactivated successfully. 
May 8 00:14:03.304037 containerd[1915]: time="2025-05-08T00:14:03.303976844Z" level=error msg="encountered an error cleaning up failed sandbox \"428a297cc6348a293897dd5d9ddc2c19017615706d962ecd538037395df630a8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:14:03.304298 containerd[1915]: time="2025-05-08T00:14:03.304271487Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-znh85,Uid:ce29167f-2f9b-4aa2-9647-1f758fb55a45,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"428a297cc6348a293897dd5d9ddc2c19017615706d962ecd538037395df630a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:14:03.305992 kubelet[3169]: E0508 00:14:03.305945 3169 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"428a297cc6348a293897dd5d9ddc2c19017615706d962ecd538037395df630a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:14:03.309419 kubelet[3169]: E0508 00:14:03.306026 3169 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"428a297cc6348a293897dd5d9ddc2c19017615706d962ecd538037395df630a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-znh85" May 8 00:14:03.309419 kubelet[3169]: E0508 00:14:03.306056 3169 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"428a297cc6348a293897dd5d9ddc2c19017615706d962ecd538037395df630a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-znh85" May 8 00:14:03.309419 kubelet[3169]: E0508 00:14:03.306134 3169 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-znh85_calico-system(ce29167f-2f9b-4aa2-9647-1f758fb55a45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-znh85_calico-system(ce29167f-2f9b-4aa2-9647-1f758fb55a45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"428a297cc6348a293897dd5d9ddc2c19017615706d962ecd538037395df630a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-znh85" podUID="ce29167f-2f9b-4aa2-9647-1f758fb55a45" May 8 00:14:03.351956 systemd[1]: Started sshd@11-172.31.16.158:22-139.178.68.195:46790.service - OpenSSH per-connection server daemon (139.178.68.195:46790). 
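From here on every attempt to create the csi-node-driver-znh85 sandbox fails the same way: the calico CNI plugin stats /var/lib/calico/nodename, a file that calico-node writes only once it is running, and the calico-node container in this pod has not started yet (its image is still being pulled further down). The check the error message describes amounts to something like this hedged sketch; the real check lives inside the calico CNI plugin.

// nodename_check.go -- mirrors the condition behind "stat /var/lib/calico/nodename: no such file or directory".
// Illustrative only; not the calico CNI plugin's code.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const nodenameFile = "/var/lib/calico/nodename"
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// This is the state the log is in: calico-node has not written the file yet,
		// so pod networking cannot be set up and kubelet keeps retrying the sandbox.
		fmt.Fprintln(os.Stderr, "calico/node not ready:", err)
		os.Exit(1)
	}
	fmt.Println("calico node name:", strings.TrimSpace(string(data)))
}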
May 8 00:14:03.376405 kubelet[3169]: I0508 00:14:03.376308 3169 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="428a297cc6348a293897dd5d9ddc2c19017615706d962ecd538037395df630a8" May 8 00:14:03.379774 containerd[1915]: time="2025-05-08T00:14:03.379707489Z" level=info msg="StopPodSandbox for \"428a297cc6348a293897dd5d9ddc2c19017615706d962ecd538037395df630a8\"" May 8 00:14:03.402241 containerd[1915]: time="2025-05-08T00:14:03.402184540Z" level=info msg="Ensure that sandbox 428a297cc6348a293897dd5d9ddc2c19017615706d962ecd538037395df630a8 in task-service has been cleanup successfully" May 8 00:14:03.404494 systemd[1]: run-netns-cni\x2d205195fd\x2de860\x2d3972\x2d92a2\x2d64dcb01dbeb9.mount: Deactivated successfully. May 8 00:14:03.405646 containerd[1915]: time="2025-05-08T00:14:03.405599884Z" level=info msg="TearDown network for sandbox \"428a297cc6348a293897dd5d9ddc2c19017615706d962ecd538037395df630a8\" successfully" May 8 00:14:03.405824 containerd[1915]: time="2025-05-08T00:14:03.405795749Z" level=info msg="StopPodSandbox for \"428a297cc6348a293897dd5d9ddc2c19017615706d962ecd538037395df630a8\" returns successfully" May 8 00:14:03.410120 containerd[1915]: time="2025-05-08T00:14:03.409730468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-znh85,Uid:ce29167f-2f9b-4aa2-9647-1f758fb55a45,Namespace:calico-system,Attempt:1,}" May 8 00:14:03.628119 sshd[4053]: Accepted publickey for core from 139.178.68.195 port 46790 ssh2: RSA SHA256:KzzWn6O+Z3VZj7W5xu29TBqYrCKq78VLDb+pogeWJHY May 8 00:14:03.630485 sshd-session[4053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:14:03.636457 systemd-logind[1899]: New session 12 of user core. May 8 00:14:03.644145 systemd[1]: Started session-12.scope - Session 12 of User core. May 8 00:14:03.723918 containerd[1915]: time="2025-05-08T00:14:03.723861087Z" level=error msg="Failed to destroy network for sandbox \"e4ceb2efdaee39cbc4b57addd2fce6ccaf9753e5c5576bd0869d2a6fee5bac97\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:14:03.729455 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e4ceb2efdaee39cbc4b57addd2fce6ccaf9753e5c5576bd0869d2a6fee5bac97-shm.mount: Deactivated successfully. 
May 8 00:14:03.730207 containerd[1915]: time="2025-05-08T00:14:03.724321049Z" level=error msg="encountered an error cleaning up failed sandbox \"e4ceb2efdaee39cbc4b57addd2fce6ccaf9753e5c5576bd0869d2a6fee5bac97\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:14:03.730207 containerd[1915]: time="2025-05-08T00:14:03.729947255Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-znh85,Uid:ce29167f-2f9b-4aa2-9647-1f758fb55a45,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"e4ceb2efdaee39cbc4b57addd2fce6ccaf9753e5c5576bd0869d2a6fee5bac97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:14:03.730495 kubelet[3169]: E0508 00:14:03.730465 3169 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4ceb2efdaee39cbc4b57addd2fce6ccaf9753e5c5576bd0869d2a6fee5bac97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:14:03.730774 kubelet[3169]: E0508 00:14:03.730745 3169 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4ceb2efdaee39cbc4b57addd2fce6ccaf9753e5c5576bd0869d2a6fee5bac97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-znh85" May 8 00:14:03.730885 kubelet[3169]: E0508 00:14:03.730869 3169 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4ceb2efdaee39cbc4b57addd2fce6ccaf9753e5c5576bd0869d2a6fee5bac97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-znh85" May 8 00:14:03.730995 kubelet[3169]: E0508 00:14:03.730975 3169 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-znh85_calico-system(ce29167f-2f9b-4aa2-9647-1f758fb55a45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-znh85_calico-system(ce29167f-2f9b-4aa2-9647-1f758fb55a45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4ceb2efdaee39cbc4b57addd2fce6ccaf9753e5c5576bd0869d2a6fee5bac97\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-znh85" podUID="ce29167f-2f9b-4aa2-9647-1f758fb55a45" May 8 00:14:03.915097 sshd[4066]: Connection closed by 139.178.68.195 port 46790 May 8 00:14:03.916094 sshd-session[4053]: pam_unix(sshd:session): session closed for user core May 8 00:14:03.923666 systemd[1]: sshd@11-172.31.16.158:22-139.178.68.195:46790.service: Deactivated successfully. 
May 8 00:14:03.927175 systemd[1]: session-12.scope: Deactivated successfully. May 8 00:14:03.928477 systemd-logind[1899]: Session 12 logged out. Waiting for processes to exit. May 8 00:14:03.930516 systemd-logind[1899]: Removed session 12. May 8 00:14:03.957134 systemd[1]: Started sshd@12-172.31.16.158:22-139.178.68.195:46804.service - OpenSSH per-connection server daemon (139.178.68.195:46804). May 8 00:14:04.131213 sshd[4099]: Accepted publickey for core from 139.178.68.195 port 46804 ssh2: RSA SHA256:KzzWn6O+Z3VZj7W5xu29TBqYrCKq78VLDb+pogeWJHY May 8 00:14:04.134144 sshd-session[4099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:14:04.141465 systemd-logind[1899]: New session 13 of user core. May 8 00:14:04.148479 systemd[1]: Started session-13.scope - Session 13 of User core. May 8 00:14:04.387064 kubelet[3169]: I0508 00:14:04.386922 3169 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4ceb2efdaee39cbc4b57addd2fce6ccaf9753e5c5576bd0869d2a6fee5bac97" May 8 00:14:04.389926 containerd[1915]: time="2025-05-08T00:14:04.389887682Z" level=info msg="StopPodSandbox for \"e4ceb2efdaee39cbc4b57addd2fce6ccaf9753e5c5576bd0869d2a6fee5bac97\"" May 8 00:14:04.390365 containerd[1915]: time="2025-05-08T00:14:04.390169702Z" level=info msg="Ensure that sandbox e4ceb2efdaee39cbc4b57addd2fce6ccaf9753e5c5576bd0869d2a6fee5bac97 in task-service has been cleanup successfully" May 8 00:14:04.396848 systemd[1]: run-netns-cni\x2d3bf8d17c\x2d1d2f\x2d6c95\x2db658\x2d9699ba56da11.mount: Deactivated successfully. May 8 00:14:04.399362 containerd[1915]: time="2025-05-08T00:14:04.399324613Z" level=info msg="TearDown network for sandbox \"e4ceb2efdaee39cbc4b57addd2fce6ccaf9753e5c5576bd0869d2a6fee5bac97\" successfully" May 8 00:14:04.399362 containerd[1915]: time="2025-05-08T00:14:04.399357916Z" level=info msg="StopPodSandbox for \"e4ceb2efdaee39cbc4b57addd2fce6ccaf9753e5c5576bd0869d2a6fee5bac97\" returns successfully" May 8 00:14:04.402155 containerd[1915]: time="2025-05-08T00:14:04.402122501Z" level=info msg="StopPodSandbox for \"428a297cc6348a293897dd5d9ddc2c19017615706d962ecd538037395df630a8\"" May 8 00:14:04.402280 containerd[1915]: time="2025-05-08T00:14:04.402248808Z" level=info msg="TearDown network for sandbox \"428a297cc6348a293897dd5d9ddc2c19017615706d962ecd538037395df630a8\" successfully" May 8 00:14:04.402280 containerd[1915]: time="2025-05-08T00:14:04.402264819Z" level=info msg="StopPodSandbox for \"428a297cc6348a293897dd5d9ddc2c19017615706d962ecd538037395df630a8\" returns successfully" May 8 00:14:04.407953 containerd[1915]: time="2025-05-08T00:14:04.407912234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-znh85,Uid:ce29167f-2f9b-4aa2-9647-1f758fb55a45,Namespace:calico-system,Attempt:2,}" May 8 00:14:04.591961 sshd[4105]: Connection closed by 139.178.68.195 port 46804 May 8 00:14:04.597002 sshd-session[4099]: pam_unix(sshd:session): session closed for user core May 8 00:14:04.605918 systemd[1]: sshd@12-172.31.16.158:22-139.178.68.195:46804.service: Deactivated successfully. May 8 00:14:04.612846 systemd[1]: session-13.scope: Deactivated successfully. May 8 00:14:04.619163 systemd-logind[1899]: Session 13 logged out. Waiting for processes to exit. May 8 00:14:04.643983 systemd[1]: Started sshd@13-172.31.16.158:22-139.178.68.195:46820.service - OpenSSH per-connection server daemon (139.178.68.195:46820). May 8 00:14:04.644899 systemd-logind[1899]: Removed session 13. 
May 8 00:14:04.689723 containerd[1915]: time="2025-05-08T00:14:04.688985187Z" level=error msg="Failed to destroy network for sandbox \"559cc2652d83d0bb60438c1ad577eb271c1f74ba0971457c10744db610c168d0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:14:04.694403 containerd[1915]: time="2025-05-08T00:14:04.694349838Z" level=error msg="encountered an error cleaning up failed sandbox \"559cc2652d83d0bb60438c1ad577eb271c1f74ba0971457c10744db610c168d0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:14:04.694541 containerd[1915]: time="2025-05-08T00:14:04.694438052Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-znh85,Uid:ce29167f-2f9b-4aa2-9647-1f758fb55a45,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"559cc2652d83d0bb60438c1ad577eb271c1f74ba0971457c10744db610c168d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:14:04.696716 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-559cc2652d83d0bb60438c1ad577eb271c1f74ba0971457c10744db610c168d0-shm.mount: Deactivated successfully. May 8 00:14:04.699094 kubelet[3169]: E0508 00:14:04.699026 3169 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"559cc2652d83d0bb60438c1ad577eb271c1f74ba0971457c10744db610c168d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:14:04.699094 kubelet[3169]: E0508 00:14:04.699090 3169 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"559cc2652d83d0bb60438c1ad577eb271c1f74ba0971457c10744db610c168d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-znh85" May 8 00:14:04.699313 kubelet[3169]: E0508 00:14:04.699119 3169 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"559cc2652d83d0bb60438c1ad577eb271c1f74ba0971457c10744db610c168d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-znh85" May 8 00:14:04.699313 kubelet[3169]: E0508 00:14:04.699175 3169 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-znh85_calico-system(ce29167f-2f9b-4aa2-9647-1f758fb55a45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-znh85_calico-system(ce29167f-2f9b-4aa2-9647-1f758fb55a45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"559cc2652d83d0bb60438c1ad577eb271c1f74ba0971457c10744db610c168d0\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-znh85" podUID="ce29167f-2f9b-4aa2-9647-1f758fb55a45" May 8 00:14:04.898012 sshd[4142]: Accepted publickey for core from 139.178.68.195 port 46820 ssh2: RSA SHA256:KzzWn6O+Z3VZj7W5xu29TBqYrCKq78VLDb+pogeWJHY May 8 00:14:04.900186 sshd-session[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:14:04.911687 systemd-logind[1899]: New session 14 of user core. May 8 00:14:04.915262 systemd[1]: Started session-14.scope - Session 14 of User core. May 8 00:14:05.198134 sshd[4146]: Connection closed by 139.178.68.195 port 46820 May 8 00:14:05.199053 sshd-session[4142]: pam_unix(sshd:session): session closed for user core May 8 00:14:05.210545 systemd[1]: sshd@13-172.31.16.158:22-139.178.68.195:46820.service: Deactivated successfully. May 8 00:14:05.214255 systemd[1]: session-14.scope: Deactivated successfully. May 8 00:14:05.218793 systemd-logind[1899]: Session 14 logged out. Waiting for processes to exit. May 8 00:14:05.221317 systemd-logind[1899]: Removed session 14. May 8 00:14:05.394843 kubelet[3169]: I0508 00:14:05.394564 3169 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="559cc2652d83d0bb60438c1ad577eb271c1f74ba0971457c10744db610c168d0" May 8 00:14:05.395754 containerd[1915]: time="2025-05-08T00:14:05.395621642Z" level=info msg="StopPodSandbox for \"559cc2652d83d0bb60438c1ad577eb271c1f74ba0971457c10744db610c168d0\"" May 8 00:14:05.397698 containerd[1915]: time="2025-05-08T00:14:05.397474475Z" level=info msg="Ensure that sandbox 559cc2652d83d0bb60438c1ad577eb271c1f74ba0971457c10744db610c168d0 in task-service has been cleanup successfully" May 8 00:14:05.397994 containerd[1915]: time="2025-05-08T00:14:05.397851054Z" level=info msg="TearDown network for sandbox \"559cc2652d83d0bb60438c1ad577eb271c1f74ba0971457c10744db610c168d0\" successfully" May 8 00:14:05.397994 containerd[1915]: time="2025-05-08T00:14:05.397968505Z" level=info msg="StopPodSandbox for \"559cc2652d83d0bb60438c1ad577eb271c1f74ba0971457c10744db610c168d0\" returns successfully" May 8 00:14:05.400459 containerd[1915]: time="2025-05-08T00:14:05.400429764Z" level=info msg="StopPodSandbox for \"e4ceb2efdaee39cbc4b57addd2fce6ccaf9753e5c5576bd0869d2a6fee5bac97\"" May 8 00:14:05.404261 containerd[1915]: time="2025-05-08T00:14:05.400541861Z" level=info msg="TearDown network for sandbox \"e4ceb2efdaee39cbc4b57addd2fce6ccaf9753e5c5576bd0869d2a6fee5bac97\" successfully" May 8 00:14:05.404261 containerd[1915]: time="2025-05-08T00:14:05.400560992Z" level=info msg="StopPodSandbox for \"e4ceb2efdaee39cbc4b57addd2fce6ccaf9753e5c5576bd0869d2a6fee5bac97\" returns successfully" May 8 00:14:05.402455 systemd[1]: run-netns-cni\x2d5d427119\x2d9b80\x2dec0d\x2d0dce\x2dc18034ed3359.mount: Deactivated successfully. 
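Each failed attempt leaves an unusable sandbox behind, so before the next attempt kubelet has containerd stop and tear down every earlier one (the growing chain of StopPodSandbox/TearDown lines for 428a297c..., e4ceb2ef..., 559cc265..., and so on), then issues RunPodSandbox with an incremented Attempt counter. Schematically, and only as a sketch of the pattern visible in the log (not kubelet's actual code):

// sandbox_retry_sketch.go -- schematic of the teardown-then-retry loop visible above; not kubelet code.
package main

import (
	"errors"
	"fmt"
)

// errNodenameMissing stands in for the CNI failure repeated throughout the log.
var errNodenameMissing = errors.New("stat /var/lib/calico/nodename: no such file or directory")

// runPodSandbox pretends to create a sandbox; in the log every attempt fails until calico-node is up.
func runPodSandbox(attempt int) (string, error) {
	id := fmt.Sprintf("sandbox-attempt-%d", attempt) // hypothetical ID; real IDs are long content hashes
	return id, errNodenameMissing
}

func main() {
	var leftover []string // sandboxes abandoned by earlier attempts
	for attempt := 0; attempt <= 3; attempt++ {
		for _, id := range leftover {
			fmt.Printf("StopPodSandbox %q / TearDown network\n", id) // mirrors the cleanup chain above
		}
		id, err := runPodSandbox(attempt)
		if err != nil {
			fmt.Printf("RunPodSandbox attempt %d failed: %v\n", attempt, err)
			leftover = append(leftover, id)
			continue
		}
		fmt.Println("sandbox ready:", id)
		return
	}
}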
May 8 00:14:05.405001 containerd[1915]: time="2025-05-08T00:14:05.404804314Z" level=info msg="StopPodSandbox for \"428a297cc6348a293897dd5d9ddc2c19017615706d962ecd538037395df630a8\"" May 8 00:14:05.405078 containerd[1915]: time="2025-05-08T00:14:05.405034842Z" level=info msg="TearDown network for sandbox \"428a297cc6348a293897dd5d9ddc2c19017615706d962ecd538037395df630a8\" successfully" May 8 00:14:05.405078 containerd[1915]: time="2025-05-08T00:14:05.405054106Z" level=info msg="StopPodSandbox for \"428a297cc6348a293897dd5d9ddc2c19017615706d962ecd538037395df630a8\" returns successfully" May 8 00:14:05.406319 containerd[1915]: time="2025-05-08T00:14:05.406273893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-znh85,Uid:ce29167f-2f9b-4aa2-9647-1f758fb55a45,Namespace:calico-system,Attempt:3,}" May 8 00:14:05.553986 containerd[1915]: time="2025-05-08T00:14:05.553937227Z" level=error msg="Failed to destroy network for sandbox \"f94bfbc247f0ccc17e7a3c916131ff5964b9a8bee494ac3d533e2876209f746d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:14:05.558574 containerd[1915]: time="2025-05-08T00:14:05.555386631Z" level=error msg="encountered an error cleaning up failed sandbox \"f94bfbc247f0ccc17e7a3c916131ff5964b9a8bee494ac3d533e2876209f746d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:14:05.562301 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f94bfbc247f0ccc17e7a3c916131ff5964b9a8bee494ac3d533e2876209f746d-shm.mount: Deactivated successfully. 
May 8 00:14:05.574263 containerd[1915]: time="2025-05-08T00:14:05.574205322Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-znh85,Uid:ce29167f-2f9b-4aa2-9647-1f758fb55a45,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"f94bfbc247f0ccc17e7a3c916131ff5964b9a8bee494ac3d533e2876209f746d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:14:05.576722 kubelet[3169]: E0508 00:14:05.574616 3169 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f94bfbc247f0ccc17e7a3c916131ff5964b9a8bee494ac3d533e2876209f746d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:14:05.576722 kubelet[3169]: E0508 00:14:05.574682 3169 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f94bfbc247f0ccc17e7a3c916131ff5964b9a8bee494ac3d533e2876209f746d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-znh85" May 8 00:14:05.576722 kubelet[3169]: E0508 00:14:05.574721 3169 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f94bfbc247f0ccc17e7a3c916131ff5964b9a8bee494ac3d533e2876209f746d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-znh85" May 8 00:14:05.577468 kubelet[3169]: E0508 00:14:05.574775 3169 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-znh85_calico-system(ce29167f-2f9b-4aa2-9647-1f758fb55a45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-znh85_calico-system(ce29167f-2f9b-4aa2-9647-1f758fb55a45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f94bfbc247f0ccc17e7a3c916131ff5964b9a8bee494ac3d533e2876209f746d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-znh85" podUID="ce29167f-2f9b-4aa2-9647-1f758fb55a45" May 8 00:14:06.401384 kubelet[3169]: I0508 00:14:06.401351 3169 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f94bfbc247f0ccc17e7a3c916131ff5964b9a8bee494ac3d533e2876209f746d" May 8 00:14:06.403743 containerd[1915]: time="2025-05-08T00:14:06.403704177Z" level=info msg="StopPodSandbox for \"f94bfbc247f0ccc17e7a3c916131ff5964b9a8bee494ac3d533e2876209f746d\"" May 8 00:14:06.404270 containerd[1915]: time="2025-05-08T00:14:06.404135357Z" level=info msg="Ensure that sandbox f94bfbc247f0ccc17e7a3c916131ff5964b9a8bee494ac3d533e2876209f746d in task-service has been cleanup successfully" May 8 00:14:06.430416 containerd[1915]: time="2025-05-08T00:14:06.405744743Z" level=info msg="TearDown network for sandbox 
\"f94bfbc247f0ccc17e7a3c916131ff5964b9a8bee494ac3d533e2876209f746d\" successfully" May 8 00:14:06.430416 containerd[1915]: time="2025-05-08T00:14:06.405767353Z" level=info msg="StopPodSandbox for \"f94bfbc247f0ccc17e7a3c916131ff5964b9a8bee494ac3d533e2876209f746d\" returns successfully" May 8 00:14:06.430416 containerd[1915]: time="2025-05-08T00:14:06.406592138Z" level=info msg="StopPodSandbox for \"559cc2652d83d0bb60438c1ad577eb271c1f74ba0971457c10744db610c168d0\"" May 8 00:14:06.430416 containerd[1915]: time="2025-05-08T00:14:06.406772651Z" level=info msg="TearDown network for sandbox \"559cc2652d83d0bb60438c1ad577eb271c1f74ba0971457c10744db610c168d0\" successfully" May 8 00:14:06.430416 containerd[1915]: time="2025-05-08T00:14:06.406801267Z" level=info msg="StopPodSandbox for \"559cc2652d83d0bb60438c1ad577eb271c1f74ba0971457c10744db610c168d0\" returns successfully" May 8 00:14:06.430416 containerd[1915]: time="2025-05-08T00:14:06.407162409Z" level=info msg="StopPodSandbox for \"e4ceb2efdaee39cbc4b57addd2fce6ccaf9753e5c5576bd0869d2a6fee5bac97\"" May 8 00:14:06.430416 containerd[1915]: time="2025-05-08T00:14:06.407261573Z" level=info msg="TearDown network for sandbox \"e4ceb2efdaee39cbc4b57addd2fce6ccaf9753e5c5576bd0869d2a6fee5bac97\" successfully" May 8 00:14:06.430416 containerd[1915]: time="2025-05-08T00:14:06.407278125Z" level=info msg="StopPodSandbox for \"e4ceb2efdaee39cbc4b57addd2fce6ccaf9753e5c5576bd0869d2a6fee5bac97\" returns successfully" May 8 00:14:06.430416 containerd[1915]: time="2025-05-08T00:14:06.407533709Z" level=info msg="StopPodSandbox for \"428a297cc6348a293897dd5d9ddc2c19017615706d962ecd538037395df630a8\"" May 8 00:14:06.430416 containerd[1915]: time="2025-05-08T00:14:06.407624380Z" level=info msg="TearDown network for sandbox \"428a297cc6348a293897dd5d9ddc2c19017615706d962ecd538037395df630a8\" successfully" May 8 00:14:06.430416 containerd[1915]: time="2025-05-08T00:14:06.407637835Z" level=info msg="StopPodSandbox for \"428a297cc6348a293897dd5d9ddc2c19017615706d962ecd538037395df630a8\" returns successfully" May 8 00:14:06.430416 containerd[1915]: time="2025-05-08T00:14:06.408085486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-znh85,Uid:ce29167f-2f9b-4aa2-9647-1f758fb55a45,Namespace:calico-system,Attempt:4,}" May 8 00:14:06.409981 systemd[1]: run-netns-cni\x2d45544129\x2d11c6\x2dc818\x2d7ced\x2d9874c08451b0.mount: Deactivated successfully. 
May 8 00:14:06.570797 containerd[1915]: time="2025-05-08T00:14:06.570739179Z" level=error msg="Failed to destroy network for sandbox \"025da8a645b63f58f16b3cac5c0286f626854588902dbcb318f5cd202782e842\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:14:06.574354 containerd[1915]: time="2025-05-08T00:14:06.574137862Z" level=error msg="encountered an error cleaning up failed sandbox \"025da8a645b63f58f16b3cac5c0286f626854588902dbcb318f5cd202782e842\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:14:06.574354 containerd[1915]: time="2025-05-08T00:14:06.574244257Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-znh85,Uid:ce29167f-2f9b-4aa2-9647-1f758fb55a45,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"025da8a645b63f58f16b3cac5c0286f626854588902dbcb318f5cd202782e842\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:14:06.574981 kubelet[3169]: E0508 00:14:06.574726 3169 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"025da8a645b63f58f16b3cac5c0286f626854588902dbcb318f5cd202782e842\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:14:06.574981 kubelet[3169]: E0508 00:14:06.574797 3169 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"025da8a645b63f58f16b3cac5c0286f626854588902dbcb318f5cd202782e842\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-znh85" May 8 00:14:06.574981 kubelet[3169]: E0508 00:14:06.574848 3169 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"025da8a645b63f58f16b3cac5c0286f626854588902dbcb318f5cd202782e842\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-znh85" May 8 00:14:06.574791 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-025da8a645b63f58f16b3cac5c0286f626854588902dbcb318f5cd202782e842-shm.mount: Deactivated successfully. 
May 8 00:14:06.578003 kubelet[3169]: E0508 00:14:06.577951 3169 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-znh85_calico-system(ce29167f-2f9b-4aa2-9647-1f758fb55a45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-znh85_calico-system(ce29167f-2f9b-4aa2-9647-1f758fb55a45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"025da8a645b63f58f16b3cac5c0286f626854588902dbcb318f5cd202782e842\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-znh85" podUID="ce29167f-2f9b-4aa2-9647-1f758fb55a45" May 8 00:14:07.193556 containerd[1915]: time="2025-05-08T00:14:07.193504305Z" level=info msg="StopPodSandbox for \"428a297cc6348a293897dd5d9ddc2c19017615706d962ecd538037395df630a8\"" May 8 00:14:07.194448 containerd[1915]: time="2025-05-08T00:14:07.194423466Z" level=info msg="TearDown network for sandbox \"428a297cc6348a293897dd5d9ddc2c19017615706d962ecd538037395df630a8\" successfully" May 8 00:14:07.195053 containerd[1915]: time="2025-05-08T00:14:07.195028070Z" level=info msg="StopPodSandbox for \"428a297cc6348a293897dd5d9ddc2c19017615706d962ecd538037395df630a8\" returns successfully" May 8 00:14:07.212043 containerd[1915]: time="2025-05-08T00:14:07.211993644Z" level=info msg="RemovePodSandbox for \"428a297cc6348a293897dd5d9ddc2c19017615706d962ecd538037395df630a8\"" May 8 00:14:07.212430 containerd[1915]: time="2025-05-08T00:14:07.212304036Z" level=info msg="Forcibly stopping sandbox \"428a297cc6348a293897dd5d9ddc2c19017615706d962ecd538037395df630a8\"" May 8 00:14:07.212677 containerd[1915]: time="2025-05-08T00:14:07.212622921Z" level=info msg="TearDown network for sandbox \"428a297cc6348a293897dd5d9ddc2c19017615706d962ecd538037395df630a8\" successfully" May 8 00:14:07.266898 containerd[1915]: time="2025-05-08T00:14:07.266844744Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"428a297cc6348a293897dd5d9ddc2c19017615706d962ecd538037395df630a8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:14:07.267047 containerd[1915]: time="2025-05-08T00:14:07.266928083Z" level=info msg="RemovePodSandbox \"428a297cc6348a293897dd5d9ddc2c19017615706d962ecd538037395df630a8\" returns successfully" May 8 00:14:07.274850 containerd[1915]: time="2025-05-08T00:14:07.273365369Z" level=info msg="StopPodSandbox for \"e4ceb2efdaee39cbc4b57addd2fce6ccaf9753e5c5576bd0869d2a6fee5bac97\"" May 8 00:14:07.274850 containerd[1915]: time="2025-05-08T00:14:07.273504215Z" level=info msg="TearDown network for sandbox \"e4ceb2efdaee39cbc4b57addd2fce6ccaf9753e5c5576bd0869d2a6fee5bac97\" successfully" May 8 00:14:07.274850 containerd[1915]: time="2025-05-08T00:14:07.273522666Z" level=info msg="StopPodSandbox for \"e4ceb2efdaee39cbc4b57addd2fce6ccaf9753e5c5576bd0869d2a6fee5bac97\" returns successfully" May 8 00:14:07.279692 containerd[1915]: time="2025-05-08T00:14:07.279178289Z" level=info msg="RemovePodSandbox for \"e4ceb2efdaee39cbc4b57addd2fce6ccaf9753e5c5576bd0869d2a6fee5bac97\"" May 8 00:14:07.279692 containerd[1915]: time="2025-05-08T00:14:07.279222536Z" level=info msg="Forcibly stopping sandbox \"e4ceb2efdaee39cbc4b57addd2fce6ccaf9753e5c5576bd0869d2a6fee5bac97\"" May 8 00:14:07.279692 containerd[1915]: time="2025-05-08T00:14:07.279319071Z" level=info msg="TearDown network for sandbox \"e4ceb2efdaee39cbc4b57addd2fce6ccaf9753e5c5576bd0869d2a6fee5bac97\" successfully" May 8 00:14:07.289865 containerd[1915]: time="2025-05-08T00:14:07.288382140Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e4ceb2efdaee39cbc4b57addd2fce6ccaf9753e5c5576bd0869d2a6fee5bac97\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:14:07.289865 containerd[1915]: time="2025-05-08T00:14:07.288467901Z" level=info msg="RemovePodSandbox \"e4ceb2efdaee39cbc4b57addd2fce6ccaf9753e5c5576bd0869d2a6fee5bac97\" returns successfully" May 8 00:14:07.297363 containerd[1915]: time="2025-05-08T00:14:07.295772862Z" level=info msg="StopPodSandbox for \"559cc2652d83d0bb60438c1ad577eb271c1f74ba0971457c10744db610c168d0\"" May 8 00:14:07.297363 containerd[1915]: time="2025-05-08T00:14:07.295919356Z" level=info msg="TearDown network for sandbox \"559cc2652d83d0bb60438c1ad577eb271c1f74ba0971457c10744db610c168d0\" successfully" May 8 00:14:07.297363 containerd[1915]: time="2025-05-08T00:14:07.295938124Z" level=info msg="StopPodSandbox for \"559cc2652d83d0bb60438c1ad577eb271c1f74ba0971457c10744db610c168d0\" returns successfully" May 8 00:14:07.305694 containerd[1915]: time="2025-05-08T00:14:07.303561313Z" level=info msg="RemovePodSandbox for \"559cc2652d83d0bb60438c1ad577eb271c1f74ba0971457c10744db610c168d0\"" May 8 00:14:07.305694 containerd[1915]: time="2025-05-08T00:14:07.303600401Z" level=info msg="Forcibly stopping sandbox \"559cc2652d83d0bb60438c1ad577eb271c1f74ba0971457c10744db610c168d0\"" May 8 00:14:07.305694 containerd[1915]: time="2025-05-08T00:14:07.303697297Z" level=info msg="TearDown network for sandbox \"559cc2652d83d0bb60438c1ad577eb271c1f74ba0971457c10744db610c168d0\" successfully" May 8 00:14:07.310121 containerd[1915]: time="2025-05-08T00:14:07.309472055Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"559cc2652d83d0bb60438c1ad577eb271c1f74ba0971457c10744db610c168d0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:14:07.310121 containerd[1915]: time="2025-05-08T00:14:07.309540406Z" level=info msg="RemovePodSandbox \"559cc2652d83d0bb60438c1ad577eb271c1f74ba0971457c10744db610c168d0\" returns successfully" May 8 00:14:07.313655 containerd[1915]: time="2025-05-08T00:14:07.312420944Z" level=info msg="StopPodSandbox for \"f94bfbc247f0ccc17e7a3c916131ff5964b9a8bee494ac3d533e2876209f746d\"" May 8 00:14:07.313655 containerd[1915]: time="2025-05-08T00:14:07.312545945Z" level=info msg="TearDown network for sandbox \"f94bfbc247f0ccc17e7a3c916131ff5964b9a8bee494ac3d533e2876209f746d\" successfully" May 8 00:14:07.313655 containerd[1915]: time="2025-05-08T00:14:07.312561437Z" level=info msg="StopPodSandbox for \"f94bfbc247f0ccc17e7a3c916131ff5964b9a8bee494ac3d533e2876209f746d\" returns successfully" May 8 00:14:07.317027 containerd[1915]: time="2025-05-08T00:14:07.316740828Z" level=info msg="RemovePodSandbox for \"f94bfbc247f0ccc17e7a3c916131ff5964b9a8bee494ac3d533e2876209f746d\"" May 8 00:14:07.317027 containerd[1915]: time="2025-05-08T00:14:07.316777592Z" level=info msg="Forcibly stopping sandbox \"f94bfbc247f0ccc17e7a3c916131ff5964b9a8bee494ac3d533e2876209f746d\"" May 8 00:14:07.317027 containerd[1915]: time="2025-05-08T00:14:07.316887771Z" level=info msg="TearDown network for sandbox \"f94bfbc247f0ccc17e7a3c916131ff5964b9a8bee494ac3d533e2876209f746d\" successfully" May 8 00:14:07.321411 containerd[1915]: time="2025-05-08T00:14:07.321372322Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f94bfbc247f0ccc17e7a3c916131ff5964b9a8bee494ac3d533e2876209f746d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:14:07.321546 containerd[1915]: time="2025-05-08T00:14:07.321432802Z" level=info msg="RemovePodSandbox \"f94bfbc247f0ccc17e7a3c916131ff5964b9a8bee494ac3d533e2876209f746d\" returns successfully" May 8 00:14:07.416365 kubelet[3169]: I0508 00:14:07.416168 3169 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="025da8a645b63f58f16b3cac5c0286f626854588902dbcb318f5cd202782e842" May 8 00:14:07.420188 containerd[1915]: time="2025-05-08T00:14:07.419652175Z" level=info msg="StopPodSandbox for \"025da8a645b63f58f16b3cac5c0286f626854588902dbcb318f5cd202782e842\"" May 8 00:14:07.420188 containerd[1915]: time="2025-05-08T00:14:07.419994925Z" level=info msg="Ensure that sandbox 025da8a645b63f58f16b3cac5c0286f626854588902dbcb318f5cd202782e842 in task-service has been cleanup successfully" May 8 00:14:07.423313 containerd[1915]: time="2025-05-08T00:14:07.423279254Z" level=info msg="TearDown network for sandbox \"025da8a645b63f58f16b3cac5c0286f626854588902dbcb318f5cd202782e842\" successfully" May 8 00:14:07.424827 containerd[1915]: time="2025-05-08T00:14:07.424205793Z" level=info msg="StopPodSandbox for \"025da8a645b63f58f16b3cac5c0286f626854588902dbcb318f5cd202782e842\" returns successfully" May 8 00:14:07.426455 systemd[1]: run-netns-cni\x2d6f8cac05\x2dc72c\x2dfed6\x2d4a09\x2d03ea91dcd639.mount: Deactivated successfully. 
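By 00:14:07 the dead sandboxes are being cleaned up for good: each one is forcibly stopped, torn down again, and RemovePodSandbox returns successfully, and the "Failed to get podSandbox status ... not found" warnings are harmless because an already-gone sandbox is treated as removed. That idempotent-cleanup pattern, as a minimal hedged sketch (not containerd's implementation):

// remove_sandbox_sketch.go -- idempotent cleanup in the spirit of the RemovePodSandbox lines above.
// Illustrative only; IDs below are truncated labels from the log, logic is a sketch.
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("sandbox not found")

// statusOf pretends to look up a sandbox that has already been cleaned up.
func statusOf(id string) error { return errNotFound }

// removeSandbox treats "not found" as success so repeated cleanup passes stay harmless.
func removeSandbox(id string) error {
	if err := statusOf(id); err != nil {
		if errors.Is(err, errNotFound) {
			fmt.Printf("sandbox %q already gone; RemovePodSandbox returns successfully\n", id)
			return nil
		}
		return err
	}
	fmt.Printf("tearing down and removing sandbox %q\n", id)
	return nil
}

func main() {
	for _, id := range []string{"428a297c...", "e4ceb2ef...", "559cc265...", "f94bfbc2..."} {
		if err := removeSandbox(id); err != nil {
			fmt.Println("cleanup failed:", err)
		}
	}
}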
May 8 00:14:07.442258 containerd[1915]: time="2025-05-08T00:14:07.442207938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-znh85,Uid:ce29167f-2f9b-4aa2-9647-1f758fb55a45,Namespace:calico-system,Attempt:5,}" May 8 00:14:07.592668 containerd[1915]: time="2025-05-08T00:14:07.592616640Z" level=error msg="Failed to destroy network for sandbox \"0d1e3762c5e3c7035a5e265ab340c7de72ec20da3f3d5ff816fd17f4f0909b7c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:14:07.595487 containerd[1915]: time="2025-05-08T00:14:07.593427398Z" level=error msg="encountered an error cleaning up failed sandbox \"0d1e3762c5e3c7035a5e265ab340c7de72ec20da3f3d5ff816fd17f4f0909b7c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:14:07.595487 containerd[1915]: time="2025-05-08T00:14:07.593506658Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-znh85,Uid:ce29167f-2f9b-4aa2-9647-1f758fb55a45,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"0d1e3762c5e3c7035a5e265ab340c7de72ec20da3f3d5ff816fd17f4f0909b7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:14:07.595880 kubelet[3169]: E0508 00:14:07.595832 3169 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d1e3762c5e3c7035a5e265ab340c7de72ec20da3f3d5ff816fd17f4f0909b7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:14:07.595978 kubelet[3169]: E0508 00:14:07.595898 3169 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d1e3762c5e3c7035a5e265ab340c7de72ec20da3f3d5ff816fd17f4f0909b7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-znh85" May 8 00:14:07.595978 kubelet[3169]: E0508 00:14:07.595929 3169 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d1e3762c5e3c7035a5e265ab340c7de72ec20da3f3d5ff816fd17f4f0909b7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-znh85" May 8 00:14:07.596072 kubelet[3169]: E0508 00:14:07.595984 3169 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-znh85_calico-system(ce29167f-2f9b-4aa2-9647-1f758fb55a45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-znh85_calico-system(ce29167f-2f9b-4aa2-9647-1f758fb55a45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d1e3762c5e3c7035a5e265ab340c7de72ec20da3f3d5ff816fd17f4f0909b7c\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-znh85" podUID="ce29167f-2f9b-4aa2-9647-1f758fb55a45" May 8 00:14:07.597752 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0d1e3762c5e3c7035a5e265ab340c7de72ec20da3f3d5ff816fd17f4f0909b7c-shm.mount: Deactivated successfully. May 8 00:14:08.427715 kubelet[3169]: I0508 00:14:08.427679 3169 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d1e3762c5e3c7035a5e265ab340c7de72ec20da3f3d5ff816fd17f4f0909b7c" May 8 00:14:08.431101 containerd[1915]: time="2025-05-08T00:14:08.429459399Z" level=info msg="StopPodSandbox for \"0d1e3762c5e3c7035a5e265ab340c7de72ec20da3f3d5ff816fd17f4f0909b7c\"" May 8 00:14:08.433707 containerd[1915]: time="2025-05-08T00:14:08.432154053Z" level=info msg="Ensure that sandbox 0d1e3762c5e3c7035a5e265ab340c7de72ec20da3f3d5ff816fd17f4f0909b7c in task-service has been cleanup successfully" May 8 00:14:08.435018 containerd[1915]: time="2025-05-08T00:14:08.434951281Z" level=info msg="TearDown network for sandbox \"0d1e3762c5e3c7035a5e265ab340c7de72ec20da3f3d5ff816fd17f4f0909b7c\" successfully" May 8 00:14:08.436581 containerd[1915]: time="2025-05-08T00:14:08.436424765Z" level=info msg="StopPodSandbox for \"0d1e3762c5e3c7035a5e265ab340c7de72ec20da3f3d5ff816fd17f4f0909b7c\" returns successfully" May 8 00:14:08.438550 systemd[1]: run-netns-cni\x2d1d293e62\x2ddc22\x2d2189\x2d5175\x2d2f047a03ec66.mount: Deactivated successfully. May 8 00:14:08.440694 containerd[1915]: time="2025-05-08T00:14:08.438804034Z" level=info msg="StopPodSandbox for \"025da8a645b63f58f16b3cac5c0286f626854588902dbcb318f5cd202782e842\"" May 8 00:14:08.440694 containerd[1915]: time="2025-05-08T00:14:08.439117472Z" level=info msg="TearDown network for sandbox \"025da8a645b63f58f16b3cac5c0286f626854588902dbcb318f5cd202782e842\" successfully" May 8 00:14:08.440694 containerd[1915]: time="2025-05-08T00:14:08.439133877Z" level=info msg="StopPodSandbox for \"025da8a645b63f58f16b3cac5c0286f626854588902dbcb318f5cd202782e842\" returns successfully" May 8 00:14:08.441894 containerd[1915]: time="2025-05-08T00:14:08.441199247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-znh85,Uid:ce29167f-2f9b-4aa2-9647-1f758fb55a45,Namespace:calico-system,Attempt:6,}" May 8 00:14:08.718797 containerd[1915]: time="2025-05-08T00:14:08.718311834Z" level=error msg="Failed to destroy network for sandbox \"d17b6c67d3d07a85458746b5b5d805908d998c3a3968fa68a82ec5156d14a9fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:14:08.720786 containerd[1915]: time="2025-05-08T00:14:08.719576506Z" level=error msg="encountered an error cleaning up failed sandbox \"d17b6c67d3d07a85458746b5b5d805908d998c3a3968fa68a82ec5156d14a9fb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:14:08.720786 containerd[1915]: time="2025-05-08T00:14:08.719653092Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-znh85,Uid:ce29167f-2f9b-4aa2-9647-1f758fb55a45,Namespace:calico-system,Attempt:6,} failed, error" error="failed to 
setup network for sandbox \"d17b6c67d3d07a85458746b5b5d805908d998c3a3968fa68a82ec5156d14a9fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:14:08.720964 kubelet[3169]: E0508 00:14:08.720368 3169 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d17b6c67d3d07a85458746b5b5d805908d998c3a3968fa68a82ec5156d14a9fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:14:08.720964 kubelet[3169]: E0508 00:14:08.720436 3169 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d17b6c67d3d07a85458746b5b5d805908d998c3a3968fa68a82ec5156d14a9fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-znh85" May 8 00:14:08.720964 kubelet[3169]: E0508 00:14:08.720468 3169 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d17b6c67d3d07a85458746b5b5d805908d998c3a3968fa68a82ec5156d14a9fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-znh85" May 8 00:14:08.721173 kubelet[3169]: E0508 00:14:08.720519 3169 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-znh85_calico-system(ce29167f-2f9b-4aa2-9647-1f758fb55a45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-znh85_calico-system(ce29167f-2f9b-4aa2-9647-1f758fb55a45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d17b6c67d3d07a85458746b5b5d805908d998c3a3968fa68a82ec5156d14a9fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-znh85" podUID="ce29167f-2f9b-4aa2-9647-1f758fb55a45" May 8 00:14:08.727799 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d17b6c67d3d07a85458746b5b5d805908d998c3a3968fa68a82ec5156d14a9fb-shm.mount: Deactivated successfully. May 8 00:14:09.131138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3523208936.mount: Deactivated successfully. 
May 8 00:14:09.294940 containerd[1915]: time="2025-05-08T00:14:09.294889984Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:14:09.306500 containerd[1915]: time="2025-05-08T00:14:09.306394808Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 8 00:14:09.325717 containerd[1915]: time="2025-05-08T00:14:09.325675074Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:14:09.328738 containerd[1915]: time="2025-05-08T00:14:09.328247267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:14:09.331300 containerd[1915]: time="2025-05-08T00:14:09.331206764Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 8.973007222s" May 8 00:14:09.331300 containerd[1915]: time="2025-05-08T00:14:09.331296169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 8 00:14:09.370347 containerd[1915]: time="2025-05-08T00:14:09.370306472Z" level=info msg="CreateContainer within sandbox \"e69de7e8d23ddc780eb48e3c2a2574229691227163f580ac5937ca29ef9e6f0d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 8 00:14:09.453159 containerd[1915]: time="2025-05-08T00:14:09.452801926Z" level=info msg="CreateContainer within sandbox \"e69de7e8d23ddc780eb48e3c2a2574229691227163f580ac5937ca29ef9e6f0d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5702869af5d0ab73cb494ed92d12e6b10e77131a04705ab13e1bde7ad70792aa\"" May 8 00:14:09.459249 kubelet[3169]: I0508 00:14:09.459037 3169 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d17b6c67d3d07a85458746b5b5d805908d998c3a3968fa68a82ec5156d14a9fb" May 8 00:14:09.460025 containerd[1915]: time="2025-05-08T00:14:09.459767859Z" level=info msg="StopPodSandbox for \"d17b6c67d3d07a85458746b5b5d805908d998c3a3968fa68a82ec5156d14a9fb\"" May 8 00:14:09.460025 containerd[1915]: time="2025-05-08T00:14:09.459977039Z" level=info msg="Ensure that sandbox d17b6c67d3d07a85458746b5b5d805908d998c3a3968fa68a82ec5156d14a9fb in task-service has been cleanup successfully" May 8 00:14:09.462667 containerd[1915]: time="2025-05-08T00:14:09.461992611Z" level=info msg="TearDown network for sandbox \"d17b6c67d3d07a85458746b5b5d805908d998c3a3968fa68a82ec5156d14a9fb\" successfully" May 8 00:14:09.462667 containerd[1915]: time="2025-05-08T00:14:09.462015706Z" level=info msg="StopPodSandbox for \"d17b6c67d3d07a85458746b5b5d805908d998c3a3968fa68a82ec5156d14a9fb\" returns successfully" May 8 00:14:09.462667 containerd[1915]: time="2025-05-08T00:14:09.462359226Z" level=info msg="StopPodSandbox for \"0d1e3762c5e3c7035a5e265ab340c7de72ec20da3f3d5ff816fd17f4f0909b7c\"" May 8 00:14:09.462667 containerd[1915]: time="2025-05-08T00:14:09.462463921Z" level=info msg="TearDown network for 
sandbox \"0d1e3762c5e3c7035a5e265ab340c7de72ec20da3f3d5ff816fd17f4f0909b7c\" successfully" May 8 00:14:09.462667 containerd[1915]: time="2025-05-08T00:14:09.462475149Z" level=info msg="StopPodSandbox for \"0d1e3762c5e3c7035a5e265ab340c7de72ec20da3f3d5ff816fd17f4f0909b7c\" returns successfully" May 8 00:14:09.462864 containerd[1915]: time="2025-05-08T00:14:09.462707876Z" level=info msg="StopPodSandbox for \"025da8a645b63f58f16b3cac5c0286f626854588902dbcb318f5cd202782e842\"" May 8 00:14:09.462864 containerd[1915]: time="2025-05-08T00:14:09.462791065Z" level=info msg="TearDown network for sandbox \"025da8a645b63f58f16b3cac5c0286f626854588902dbcb318f5cd202782e842\" successfully" May 8 00:14:09.462864 containerd[1915]: time="2025-05-08T00:14:09.462800439Z" level=info msg="StopPodSandbox for \"025da8a645b63f58f16b3cac5c0286f626854588902dbcb318f5cd202782e842\" returns successfully" May 8 00:14:09.465064 containerd[1915]: time="2025-05-08T00:14:09.463953891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-znh85,Uid:ce29167f-2f9b-4aa2-9647-1f758fb55a45,Namespace:calico-system,Attempt:7,}" May 8 00:14:09.465394 systemd[1]: run-netns-cni\x2dbd11ed67\x2d5527\x2da947\x2d34bc\x2d7971abe2481d.mount: Deactivated successfully. May 8 00:14:09.524462 containerd[1915]: time="2025-05-08T00:14:09.524317680Z" level=info msg="StartContainer for \"5702869af5d0ab73cb494ed92d12e6b10e77131a04705ab13e1bde7ad70792aa\"" May 8 00:14:09.570657 containerd[1915]: time="2025-05-08T00:14:09.570608988Z" level=error msg="Failed to destroy network for sandbox \"203f1abbcc88a38a71a49d46b70be67c248f5eb98596d9b661e4d568f3e2d868\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:14:09.573550 containerd[1915]: time="2025-05-08T00:14:09.573506244Z" level=error msg="encountered an error cleaning up failed sandbox \"203f1abbcc88a38a71a49d46b70be67c248f5eb98596d9b661e4d568f3e2d868\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:14:09.574331 containerd[1915]: time="2025-05-08T00:14:09.573580341Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-znh85,Uid:ce29167f-2f9b-4aa2-9647-1f758fb55a45,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"203f1abbcc88a38a71a49d46b70be67c248f5eb98596d9b661e4d568f3e2d868\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:14:09.574414 kubelet[3169]: E0508 00:14:09.573867 3169 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"203f1abbcc88a38a71a49d46b70be67c248f5eb98596d9b661e4d568f3e2d868\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:14:09.574414 kubelet[3169]: E0508 00:14:09.573932 3169 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"203f1abbcc88a38a71a49d46b70be67c248f5eb98596d9b661e4d568f3e2d868\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-znh85" May 8 00:14:09.574414 kubelet[3169]: E0508 00:14:09.573955 3169 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"203f1abbcc88a38a71a49d46b70be67c248f5eb98596d9b661e4d568f3e2d868\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-znh85" May 8 00:14:09.573738 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-203f1abbcc88a38a71a49d46b70be67c248f5eb98596d9b661e4d568f3e2d868-shm.mount: Deactivated successfully. May 8 00:14:09.574684 kubelet[3169]: E0508 00:14:09.574013 3169 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-znh85_calico-system(ce29167f-2f9b-4aa2-9647-1f758fb55a45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-znh85_calico-system(ce29167f-2f9b-4aa2-9647-1f758fb55a45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"203f1abbcc88a38a71a49d46b70be67c248f5eb98596d9b661e4d568f3e2d868\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-znh85" podUID="ce29167f-2f9b-4aa2-9647-1f758fb55a45" May 8 00:14:09.741159 systemd[1]: Started cri-containerd-5702869af5d0ab73cb494ed92d12e6b10e77131a04705ab13e1bde7ad70792aa.scope - libcontainer container 5702869af5d0ab73cb494ed92d12e6b10e77131a04705ab13e1bde7ad70792aa. May 8 00:14:09.802137 containerd[1915]: time="2025-05-08T00:14:09.801324378Z" level=info msg="StartContainer for \"5702869af5d0ab73cb494ed92d12e6b10e77131a04705ab13e1bde7ad70792aa\" returns successfully" May 8 00:14:09.991140 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 8 00:14:09.991865 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 8 00:14:10.240196 systemd[1]: Started sshd@14-172.31.16.158:22-139.178.68.195:39038.service - OpenSSH per-connection server daemon (139.178.68.195:39038). May 8 00:14:10.464147 sshd[4369]: Accepted publickey for core from 139.178.68.195 port 39038 ssh2: RSA SHA256:KzzWn6O+Z3VZj7W5xu29TBqYrCKq78VLDb+pogeWJHY May 8 00:14:10.465751 sshd-session[4369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:14:10.470708 systemd-logind[1899]: New session 15 of user core. May 8 00:14:10.476976 systemd[1]: Started session-15.scope - Session 15 of User core. 
May 8 00:14:10.536910 kubelet[3169]: I0508 00:14:10.536083 3169 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="203f1abbcc88a38a71a49d46b70be67c248f5eb98596d9b661e4d568f3e2d868" May 8 00:14:10.537320 containerd[1915]: time="2025-05-08T00:14:10.536746323Z" level=info msg="StopPodSandbox for \"203f1abbcc88a38a71a49d46b70be67c248f5eb98596d9b661e4d568f3e2d868\"" May 8 00:14:10.540383 kubelet[3169]: I0508 00:14:10.538641 3169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fq75b" podStartSLOduration=2.258847979 podStartE2EDuration="20.502707856s" podCreationTimestamp="2025-05-08 00:13:50 +0000 UTC" firstStartedPulling="2025-05-08 00:13:51.095101583 +0000 UTC m=+44.091325462" lastFinishedPulling="2025-05-08 00:14:09.338961472 +0000 UTC m=+62.335185339" observedRunningTime="2025-05-08 00:14:10.502402509 +0000 UTC m=+63.498626435" watchObservedRunningTime="2025-05-08 00:14:10.502707856 +0000 UTC m=+63.498931741" May 8 00:14:10.540909 containerd[1915]: time="2025-05-08T00:14:10.540651788Z" level=info msg="Ensure that sandbox 203f1abbcc88a38a71a49d46b70be67c248f5eb98596d9b661e4d568f3e2d868 in task-service has been cleanup successfully" May 8 00:14:10.541357 containerd[1915]: time="2025-05-08T00:14:10.541085451Z" level=info msg="TearDown network for sandbox \"203f1abbcc88a38a71a49d46b70be67c248f5eb98596d9b661e4d568f3e2d868\" successfully" May 8 00:14:10.541357 containerd[1915]: time="2025-05-08T00:14:10.541304838Z" level=info msg="StopPodSandbox for \"203f1abbcc88a38a71a49d46b70be67c248f5eb98596d9b661e4d568f3e2d868\" returns successfully" May 8 00:14:10.544488 systemd[1]: run-netns-cni\x2dd33dda01\x2ddac9\x2d8a85\x2d5419\x2d066a5a674526.mount: Deactivated successfully. May 8 00:14:10.546591 containerd[1915]: time="2025-05-08T00:14:10.546433186Z" level=info msg="StopPodSandbox for \"d17b6c67d3d07a85458746b5b5d805908d998c3a3968fa68a82ec5156d14a9fb\"" May 8 00:14:10.546591 containerd[1915]: time="2025-05-08T00:14:10.546526477Z" level=info msg="TearDown network for sandbox \"d17b6c67d3d07a85458746b5b5d805908d998c3a3968fa68a82ec5156d14a9fb\" successfully" May 8 00:14:10.546591 containerd[1915]: time="2025-05-08T00:14:10.546568080Z" level=info msg="StopPodSandbox for \"d17b6c67d3d07a85458746b5b5d805908d998c3a3968fa68a82ec5156d14a9fb\" returns successfully" May 8 00:14:10.557684 containerd[1915]: time="2025-05-08T00:14:10.557651850Z" level=info msg="StopPodSandbox for \"0d1e3762c5e3c7035a5e265ab340c7de72ec20da3f3d5ff816fd17f4f0909b7c\"" May 8 00:14:10.558063 containerd[1915]: time="2025-05-08T00:14:10.558046866Z" level=info msg="TearDown network for sandbox \"0d1e3762c5e3c7035a5e265ab340c7de72ec20da3f3d5ff816fd17f4f0909b7c\" successfully" May 8 00:14:10.558146 containerd[1915]: time="2025-05-08T00:14:10.558135631Z" level=info msg="StopPodSandbox for \"0d1e3762c5e3c7035a5e265ab340c7de72ec20da3f3d5ff816fd17f4f0909b7c\" returns successfully" May 8 00:14:10.558609 containerd[1915]: time="2025-05-08T00:14:10.558590995Z" level=info msg="StopPodSandbox for \"025da8a645b63f58f16b3cac5c0286f626854588902dbcb318f5cd202782e842\"" May 8 00:14:10.558751 containerd[1915]: time="2025-05-08T00:14:10.558739670Z" level=info msg="TearDown network for sandbox \"025da8a645b63f58f16b3cac5c0286f626854588902dbcb318f5cd202782e842\" successfully" May 8 00:14:10.558948 containerd[1915]: time="2025-05-08T00:14:10.558933496Z" level=info msg="StopPodSandbox for \"025da8a645b63f58f16b3cac5c0286f626854588902dbcb318f5cd202782e842\" returns successfully" 
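The pod_startup_latency_tracker entry above for calico-node-fq75b appears to report, as its SLO figure, the end-to-end startup duration minus the image-pull interval, with the pull interval taken from the monotonic m=+ offsets. A quick check of the arithmetic using only the numbers printed in that entry (not kubelet source):

    package main

    import "fmt"

    func main() {
        const (
            e2e                 = 20.502707856 // podStartE2EDuration, seconds
            firstStartedPulling = 44.091325462 // m=+ offset, seconds
            lastFinishedPulling = 62.335185339 // m=+ offset, seconds
        )
        pull := lastFinishedPulling - firstStartedPulling
        slo := e2e - pull
        // Prints: image pull 18.243859877s, podStartSLOduration 2.258847979s,
        // matching podStartSLOduration=2.258847979 in the log entry.
        fmt.Printf("image pull %.9fs, podStartSLOduration %.9fs\n", pull, slo)
    }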
May 8 00:14:10.559395 containerd[1915]: time="2025-05-08T00:14:10.559374810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-znh85,Uid:ce29167f-2f9b-4aa2-9647-1f758fb55a45,Namespace:calico-system,Attempt:8,}" May 8 00:14:10.576686 systemd[1]: run-containerd-runc-k8s.io-5702869af5d0ab73cb494ed92d12e6b10e77131a04705ab13e1bde7ad70792aa-runc.SYpRfz.mount: Deactivated successfully. May 8 00:14:10.840496 sshd[4382]: Connection closed by 139.178.68.195 port 39038 May 8 00:14:10.842069 sshd-session[4369]: pam_unix(sshd:session): session closed for user core May 8 00:14:10.846978 systemd-logind[1899]: Session 15 logged out. Waiting for processes to exit. May 8 00:14:10.847415 systemd[1]: sshd@14-172.31.16.158:22-139.178.68.195:39038.service: Deactivated successfully. May 8 00:14:10.849992 systemd[1]: session-15.scope: Deactivated successfully. May 8 00:14:10.851487 systemd-logind[1899]: Removed session 15. May 8 00:14:11.887839 kernel: bpftool[4561]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 8 00:14:12.205524 systemd-networkd[1723]: vxlan.calico: Link UP May 8 00:14:12.205536 systemd-networkd[1723]: vxlan.calico: Gained carrier May 8 00:14:12.210378 (udev-worker)[4596]: Network interface NamePolicy= disabled on kernel command line. May 8 00:14:12.277594 (udev-worker)[4354]: Network interface NamePolicy= disabled on kernel command line. May 8 00:14:12.579847 systemd[1]: run-containerd-runc-k8s.io-5702869af5d0ab73cb494ed92d12e6b10e77131a04705ab13e1bde7ad70792aa-runc.aLzXvd.mount: Deactivated successfully. May 8 00:14:13.221677 systemd-networkd[1723]: calif7cad1708cd: Link UP May 8 00:14:13.222653 systemd-networkd[1723]: calif7cad1708cd: Gained carrier May 8 00:14:13.243110 containerd[1915]: 2025-05-08 00:14:10.654 [INFO][4395] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 8 00:14:13.243110 containerd[1915]: 2025-05-08 00:14:10.863 [INFO][4395] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--158-k8s-csi--node--driver--znh85-eth0 csi-node-driver- calico-system ce29167f-2f9b-4aa2-9647-1f758fb55a45 697 0 2025-05-08 00:13:50 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5b5cc68cd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-16-158 csi-node-driver-znh85 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif7cad1708cd [] []}} ContainerID="e925b1734dd8eaf0c8889a319472777173d05f33e73bc412a96b866f3e50d16c" Namespace="calico-system" Pod="csi-node-driver-znh85" WorkloadEndpoint="ip--172--31--16--158-k8s-csi--node--driver--znh85-" May 8 00:14:13.243110 containerd[1915]: 2025-05-08 00:14:10.863 [INFO][4395] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e925b1734dd8eaf0c8889a319472777173d05f33e73bc412a96b866f3e50d16c" Namespace="calico-system" Pod="csi-node-driver-znh85" WorkloadEndpoint="ip--172--31--16--158-k8s-csi--node--driver--znh85-eth0" May 8 00:14:13.243110 containerd[1915]: 2025-05-08 00:14:13.144 [INFO][4432] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e925b1734dd8eaf0c8889a319472777173d05f33e73bc412a96b866f3e50d16c" HandleID="k8s-pod-network.e925b1734dd8eaf0c8889a319472777173d05f33e73bc412a96b866f3e50d16c" 
Workload="ip--172--31--16--158-k8s-csi--node--driver--znh85-eth0" May 8 00:14:13.243110 containerd[1915]: 2025-05-08 00:14:13.163 [INFO][4432] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e925b1734dd8eaf0c8889a319472777173d05f33e73bc412a96b866f3e50d16c" HandleID="k8s-pod-network.e925b1734dd8eaf0c8889a319472777173d05f33e73bc412a96b866f3e50d16c" Workload="ip--172--31--16--158-k8s-csi--node--driver--znh85-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000125b90), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-158", "pod":"csi-node-driver-znh85", "timestamp":"2025-05-08 00:14:13.144012275 +0000 UTC"}, Hostname:"ip-172-31-16-158", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:14:13.243110 containerd[1915]: 2025-05-08 00:14:13.163 [INFO][4432] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:14:13.243110 containerd[1915]: 2025-05-08 00:14:13.163 [INFO][4432] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:14:13.243110 containerd[1915]: 2025-05-08 00:14:13.163 [INFO][4432] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-158' May 8 00:14:13.243110 containerd[1915]: 2025-05-08 00:14:13.169 [INFO][4432] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e925b1734dd8eaf0c8889a319472777173d05f33e73bc412a96b866f3e50d16c" host="ip-172-31-16-158" May 8 00:14:13.243110 containerd[1915]: 2025-05-08 00:14:13.180 [INFO][4432] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-158" May 8 00:14:13.243110 containerd[1915]: 2025-05-08 00:14:13.186 [INFO][4432] ipam/ipam.go 489: Trying affinity for 192.168.102.128/26 host="ip-172-31-16-158" May 8 00:14:13.243110 containerd[1915]: 2025-05-08 00:14:13.188 [INFO][4432] ipam/ipam.go 155: Attempting to load block cidr=192.168.102.128/26 host="ip-172-31-16-158" May 8 00:14:13.243110 containerd[1915]: 2025-05-08 00:14:13.191 [INFO][4432] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.102.128/26 host="ip-172-31-16-158" May 8 00:14:13.243110 containerd[1915]: 2025-05-08 00:14:13.191 [INFO][4432] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.102.128/26 handle="k8s-pod-network.e925b1734dd8eaf0c8889a319472777173d05f33e73bc412a96b866f3e50d16c" host="ip-172-31-16-158" May 8 00:14:13.243110 containerd[1915]: 2025-05-08 00:14:13.193 [INFO][4432] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e925b1734dd8eaf0c8889a319472777173d05f33e73bc412a96b866f3e50d16c May 8 00:14:13.243110 containerd[1915]: 2025-05-08 00:14:13.197 [INFO][4432] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.102.128/26 handle="k8s-pod-network.e925b1734dd8eaf0c8889a319472777173d05f33e73bc412a96b866f3e50d16c" host="ip-172-31-16-158" May 8 00:14:13.243110 containerd[1915]: 2025-05-08 00:14:13.207 [INFO][4432] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.102.129/26] block=192.168.102.128/26 handle="k8s-pod-network.e925b1734dd8eaf0c8889a319472777173d05f33e73bc412a96b866f3e50d16c" host="ip-172-31-16-158" May 8 00:14:13.243110 containerd[1915]: 2025-05-08 00:14:13.207 [INFO][4432] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.102.129/26] handle="k8s-pod-network.e925b1734dd8eaf0c8889a319472777173d05f33e73bc412a96b866f3e50d16c" host="ip-172-31-16-158" May 8 
00:14:13.243110 containerd[1915]: 2025-05-08 00:14:13.207 [INFO][4432] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:14:13.243110 containerd[1915]: 2025-05-08 00:14:13.207 [INFO][4432] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.102.129/26] IPv6=[] ContainerID="e925b1734dd8eaf0c8889a319472777173d05f33e73bc412a96b866f3e50d16c" HandleID="k8s-pod-network.e925b1734dd8eaf0c8889a319472777173d05f33e73bc412a96b866f3e50d16c" Workload="ip--172--31--16--158-k8s-csi--node--driver--znh85-eth0" May 8 00:14:13.273480 containerd[1915]: 2025-05-08 00:14:13.212 [INFO][4395] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e925b1734dd8eaf0c8889a319472777173d05f33e73bc412a96b866f3e50d16c" Namespace="calico-system" Pod="csi-node-driver-znh85" WorkloadEndpoint="ip--172--31--16--158-k8s-csi--node--driver--znh85-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--158-k8s-csi--node--driver--znh85-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ce29167f-2f9b-4aa2-9647-1f758fb55a45", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 13, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-158", ContainerID:"", Pod:"csi-node-driver-znh85", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.102.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif7cad1708cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:14:13.273480 containerd[1915]: 2025-05-08 00:14:13.212 [INFO][4395] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.102.129/32] ContainerID="e925b1734dd8eaf0c8889a319472777173d05f33e73bc412a96b866f3e50d16c" Namespace="calico-system" Pod="csi-node-driver-znh85" WorkloadEndpoint="ip--172--31--16--158-k8s-csi--node--driver--znh85-eth0" May 8 00:14:13.273480 containerd[1915]: 2025-05-08 00:14:13.212 [INFO][4395] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif7cad1708cd ContainerID="e925b1734dd8eaf0c8889a319472777173d05f33e73bc412a96b866f3e50d16c" Namespace="calico-system" Pod="csi-node-driver-znh85" WorkloadEndpoint="ip--172--31--16--158-k8s-csi--node--driver--znh85-eth0" May 8 00:14:13.273480 containerd[1915]: 2025-05-08 00:14:13.223 [INFO][4395] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e925b1734dd8eaf0c8889a319472777173d05f33e73bc412a96b866f3e50d16c" Namespace="calico-system" Pod="csi-node-driver-znh85" WorkloadEndpoint="ip--172--31--16--158-k8s-csi--node--driver--znh85-eth0" May 8 00:14:13.273480 containerd[1915]: 2025-05-08 00:14:13.223 [INFO][4395] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e925b1734dd8eaf0c8889a319472777173d05f33e73bc412a96b866f3e50d16c" Namespace="calico-system" Pod="csi-node-driver-znh85" WorkloadEndpoint="ip--172--31--16--158-k8s-csi--node--driver--znh85-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--158-k8s-csi--node--driver--znh85-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ce29167f-2f9b-4aa2-9647-1f758fb55a45", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 13, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-158", ContainerID:"e925b1734dd8eaf0c8889a319472777173d05f33e73bc412a96b866f3e50d16c", Pod:"csi-node-driver-znh85", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.102.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif7cad1708cd", MAC:"f6:d1:4b:e6:c9:57", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:14:13.273480 containerd[1915]: 2025-05-08 00:14:13.238 [INFO][4395] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e925b1734dd8eaf0c8889a319472777173d05f33e73bc412a96b866f3e50d16c" Namespace="calico-system" Pod="csi-node-driver-znh85" WorkloadEndpoint="ip--172--31--16--158-k8s-csi--node--driver--znh85-eth0" May 8 00:14:13.274657 containerd[1915]: time="2025-05-08T00:14:13.274476748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:14:13.274657 containerd[1915]: time="2025-05-08T00:14:13.274554242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:14:13.274838 containerd[1915]: time="2025-05-08T00:14:13.274701698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:14:13.275490 containerd[1915]: time="2025-05-08T00:14:13.275410313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:14:13.309035 systemd[1]: Started cri-containerd-e925b1734dd8eaf0c8889a319472777173d05f33e73bc412a96b866f3e50d16c.scope - libcontainer container e925b1734dd8eaf0c8889a319472777173d05f33e73bc412a96b866f3e50d16c. 
May 8 00:14:13.349649 containerd[1915]: time="2025-05-08T00:14:13.349610407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-znh85,Uid:ce29167f-2f9b-4aa2-9647-1f758fb55a45,Namespace:calico-system,Attempt:8,} returns sandbox id \"e925b1734dd8eaf0c8889a319472777173d05f33e73bc412a96b866f3e50d16c\"" May 8 00:14:13.351997 containerd[1915]: time="2025-05-08T00:14:13.351958882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 8 00:14:13.911970 systemd-networkd[1723]: vxlan.calico: Gained IPv6LL May 8 00:14:14.745327 systemd-networkd[1723]: calif7cad1708cd: Gained IPv6LL May 8 00:14:14.859871 containerd[1915]: time="2025-05-08T00:14:14.859796527Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:14:14.860798 containerd[1915]: time="2025-05-08T00:14:14.860749394Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 8 00:14:14.863133 containerd[1915]: time="2025-05-08T00:14:14.862184882Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:14:14.865454 containerd[1915]: time="2025-05-08T00:14:14.864401087Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:14:14.865454 containerd[1915]: time="2025-05-08T00:14:14.865315932Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 1.513316204s" May 8 00:14:14.865454 containerd[1915]: time="2025-05-08T00:14:14.865345343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 8 00:14:14.868159 containerd[1915]: time="2025-05-08T00:14:14.868054429Z" level=info msg="CreateContainer within sandbox \"e925b1734dd8eaf0c8889a319472777173d05f33e73bc412a96b866f3e50d16c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 8 00:14:14.911614 containerd[1915]: time="2025-05-08T00:14:14.911554912Z" level=info msg="CreateContainer within sandbox \"e925b1734dd8eaf0c8889a319472777173d05f33e73bc412a96b866f3e50d16c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ba015c91d796b6ff661a98710c28162600beb8853bb8b907368796df3dd6025c\"" May 8 00:14:14.914020 containerd[1915]: time="2025-05-08T00:14:14.913989184Z" level=info msg="StartContainer for \"ba015c91d796b6ff661a98710c28162600beb8853bb8b907368796df3dd6025c\"" May 8 00:14:14.963859 systemd[1]: Started cri-containerd-ba015c91d796b6ff661a98710c28162600beb8853bb8b907368796df3dd6025c.scope - libcontainer container ba015c91d796b6ff661a98710c28162600beb8853bb8b907368796df3dd6025c. 
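For scale, the csi image pull above reports bytes read=7912898 in the stop-pulling entry and a duration of 1.513316204s in the Pulled message, roughly 5.2 MB/s; the separately printed size of 9405520 is a different figure for the image itself, so only the transferred-byte counter is used here. A throwaway calculation:

    package main

    import "fmt"

    func main() {
        const (
            bytesRead = 7912898     // "bytes read=..." from the stop-pulling entry
            seconds   = 1.513316204 // pull duration from the Pulled message
        )
        fmt.Printf("%.2f MB/s\n", bytesRead/seconds/1e6) // ~5.23 MB/s
    }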
May 8 00:14:15.049725 containerd[1915]: time="2025-05-08T00:14:15.049014210Z" level=info msg="StartContainer for \"ba015c91d796b6ff661a98710c28162600beb8853bb8b907368796df3dd6025c\" returns successfully" May 8 00:14:15.052643 containerd[1915]: time="2025-05-08T00:14:15.052604708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 8 00:14:15.882126 systemd[1]: Started sshd@15-172.31.16.158:22-139.178.68.195:58874.service - OpenSSH per-connection server daemon (139.178.68.195:58874). May 8 00:14:16.073572 sshd[4776]: Accepted publickey for core from 139.178.68.195 port 58874 ssh2: RSA SHA256:KzzWn6O+Z3VZj7W5xu29TBqYrCKq78VLDb+pogeWJHY May 8 00:14:16.097643 sshd-session[4776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:14:16.102561 systemd-logind[1899]: New session 16 of user core. May 8 00:14:16.110028 systemd[1]: Started session-16.scope - Session 16 of User core. May 8 00:14:16.553637 sshd[4779]: Connection closed by 139.178.68.195 port 58874 May 8 00:14:16.554868 sshd-session[4776]: pam_unix(sshd:session): session closed for user core May 8 00:14:16.560672 systemd[1]: sshd@15-172.31.16.158:22-139.178.68.195:58874.service: Deactivated successfully. May 8 00:14:16.560994 systemd-logind[1899]: Session 16 logged out. Waiting for processes to exit. May 8 00:14:16.563728 systemd[1]: session-16.scope: Deactivated successfully. May 8 00:14:16.567613 systemd-logind[1899]: Removed session 16. May 8 00:14:16.734173 containerd[1915]: time="2025-05-08T00:14:16.734120856Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:14:16.735264 containerd[1915]: time="2025-05-08T00:14:16.735070473Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" May 8 00:14:16.737550 containerd[1915]: time="2025-05-08T00:14:16.736184005Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:14:16.739372 containerd[1915]: time="2025-05-08T00:14:16.738608261Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:14:16.739372 containerd[1915]: time="2025-05-08T00:14:16.739259126Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 1.686610433s" May 8 00:14:16.739372 containerd[1915]: time="2025-05-08T00:14:16.739288561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 8 00:14:16.742040 containerd[1915]: time="2025-05-08T00:14:16.741977854Z" level=info msg="CreateContainer within sandbox \"e925b1734dd8eaf0c8889a319472777173d05f33e73bc412a96b866f3e50d16c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 8 00:14:16.765693 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount922512868.mount: Deactivated successfully. May 8 00:14:16.767897 containerd[1915]: time="2025-05-08T00:14:16.767803435Z" level=info msg="CreateContainer within sandbox \"e925b1734dd8eaf0c8889a319472777173d05f33e73bc412a96b866f3e50d16c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"aaf059570b678ceb60acaa4aa9aebb29474a1760ddcef82ee71498b5524630f4\"" May 8 00:14:16.768598 containerd[1915]: time="2025-05-08T00:14:16.768569319Z" level=info msg="StartContainer for \"aaf059570b678ceb60acaa4aa9aebb29474a1760ddcef82ee71498b5524630f4\"" May 8 00:14:16.826056 systemd[1]: Started cri-containerd-aaf059570b678ceb60acaa4aa9aebb29474a1760ddcef82ee71498b5524630f4.scope - libcontainer container aaf059570b678ceb60acaa4aa9aebb29474a1760ddcef82ee71498b5524630f4. May 8 00:14:16.857871 containerd[1915]: time="2025-05-08T00:14:16.857751600Z" level=info msg="StartContainer for \"aaf059570b678ceb60acaa4aa9aebb29474a1760ddcef82ee71498b5524630f4\" returns successfully" May 8 00:14:17.356160 kubelet[3169]: I0508 00:14:17.356101 3169 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 8 00:14:17.356160 kubelet[3169]: I0508 00:14:17.356162 3169 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 8 00:14:17.440328 ntpd[1891]: Listen normally on 7 vxlan.calico 192.168.102.128:123 May 8 00:14:17.465756 ntpd[1891]: 8 May 00:14:17 ntpd[1891]: Listen normally on 7 vxlan.calico 192.168.102.128:123 May 8 00:14:17.465756 ntpd[1891]: 8 May 00:14:17 ntpd[1891]: Listen normally on 8 vxlan.calico [fe80::64ee:82ff:fedd:ac99%4]:123 May 8 00:14:17.465756 ntpd[1891]: 8 May 00:14:17 ntpd[1891]: Listen normally on 9 calif7cad1708cd [fe80::ecee:eeff:feee:eeee%7]:123 May 8 00:14:17.440415 ntpd[1891]: Listen normally on 8 vxlan.calico [fe80::64ee:82ff:fedd:ac99%4]:123 May 8 00:14:17.440468 ntpd[1891]: Listen normally on 9 calif7cad1708cd [fe80::ecee:eeff:feee:eeee%7]:123 May 8 00:14:21.591224 systemd[1]: Started sshd@16-172.31.16.158:22-139.178.68.195:58878.service - OpenSSH per-connection server daemon (139.178.68.195:58878). May 8 00:14:21.786728 sshd[4851]: Accepted publickey for core from 139.178.68.195 port 58878 ssh2: RSA SHA256:KzzWn6O+Z3VZj7W5xu29TBqYrCKq78VLDb+pogeWJHY May 8 00:14:21.788453 sshd-session[4851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:14:21.794100 systemd-logind[1899]: New session 17 of user core. May 8 00:14:21.806104 systemd[1]: Started session-17.scope - Session 17 of User core. 
May 8 00:14:22.158354 kubelet[3169]: I0508 00:14:22.158279 3169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-znh85" podStartSLOduration=28.764479293 podStartE2EDuration="32.153711917s" podCreationTimestamp="2025-05-08 00:13:50 +0000 UTC" firstStartedPulling="2025-05-08 00:14:13.350948959 +0000 UTC m=+66.347172823" lastFinishedPulling="2025-05-08 00:14:16.74018158 +0000 UTC m=+69.736405447" observedRunningTime="2025-05-08 00:14:17.576204007 +0000 UTC m=+70.572427893" watchObservedRunningTime="2025-05-08 00:14:22.153711917 +0000 UTC m=+75.149935805" May 8 00:14:22.178691 sshd[4853]: Connection closed by 139.178.68.195 port 58878 May 8 00:14:22.180055 sshd-session[4851]: pam_unix(sshd:session): session closed for user core May 8 00:14:22.191201 systemd[1]: sshd@16-172.31.16.158:22-139.178.68.195:58878.service: Deactivated successfully. May 8 00:14:22.195931 systemd[1]: session-17.scope: Deactivated successfully. May 8 00:14:22.198514 systemd-logind[1899]: Session 17 logged out. Waiting for processes to exit. May 8 00:14:22.205326 systemd-logind[1899]: Removed session 17. May 8 00:14:22.216047 systemd[1]: Created slice kubepods-besteffort-podd29fd0d3_050b_416f_a3ea_5cca6b716f15.slice - libcontainer container kubepods-besteffort-podd29fd0d3_050b_416f_a3ea_5cca6b716f15.slice. May 8 00:14:22.272873 kubelet[3169]: I0508 00:14:22.272828 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d29fd0d3-050b-416f-a3ea-5cca6b716f15-calico-apiserver-certs\") pod \"calico-apiserver-57cd557d9d-gsvqk\" (UID: \"d29fd0d3-050b-416f-a3ea-5cca6b716f15\") " pod="calico-apiserver/calico-apiserver-57cd557d9d-gsvqk" May 8 00:14:22.273179 kubelet[3169]: I0508 00:14:22.272888 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24h4v\" (UniqueName: \"kubernetes.io/projected/d29fd0d3-050b-416f-a3ea-5cca6b716f15-kube-api-access-24h4v\") pod \"calico-apiserver-57cd557d9d-gsvqk\" (UID: \"d29fd0d3-050b-416f-a3ea-5cca6b716f15\") " pod="calico-apiserver/calico-apiserver-57cd557d9d-gsvqk" May 8 00:14:22.529840 containerd[1915]: time="2025-05-08T00:14:22.529386976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57cd557d9d-gsvqk,Uid:d29fd0d3-050b-416f-a3ea-5cca6b716f15,Namespace:calico-apiserver,Attempt:0,}" May 8 00:14:22.712007 systemd-networkd[1723]: cali22b24c36fdb: Link UP May 8 00:14:22.714913 systemd-networkd[1723]: cali22b24c36fdb: Gained carrier May 8 00:14:22.722031 (udev-worker)[4892]: Network interface NamePolicy= disabled on kernel command line. 
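A small naming detail in the entries above: the new calico-apiserver pod's UID d29fd0d3-050b-416f-a3ea-5cca6b716f15 reappears in the systemd unit kubepods-besteffort-podd29fd0d3_050b_416f_a3ea_5cca6b716f15.slice, i.e. a kubepods-besteffort-pod prefix plus the UID with dashes mapped to underscores. A sketch of that mapping (the helper name is mine):

    package main

    import (
        "fmt"
        "strings"
    )

    // besteffortSlice reproduces the unit-name pattern visible in the log for
    // BestEffort pods: prefix + pod UID with "-" replaced by "_" + ".slice".
    func besteffortSlice(podUID string) string {
        return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
    }

    func main() {
        fmt.Println(besteffortSlice("d29fd0d3-050b-416f-a3ea-5cca6b716f15"))
        // kubepods-besteffort-podd29fd0d3_050b_416f_a3ea_5cca6b716f15.slice
    }

The same pattern appears again in the slice created at 00:14:46 at the end of this section.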
May 8 00:14:22.744222 containerd[1915]: 2025-05-08 00:14:22.613 [INFO][4868] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--158-k8s-calico--apiserver--57cd557d9d--gsvqk-eth0 calico-apiserver-57cd557d9d- calico-apiserver d29fd0d3-050b-416f-a3ea-5cca6b716f15 1210 0 2025-05-08 00:14:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57cd557d9d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-16-158 calico-apiserver-57cd557d9d-gsvqk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali22b24c36fdb [] []}} ContainerID="47e6e7837113660baf8009e0d1a441723957f3dd5aede496e4a381a46b1d0608" Namespace="calico-apiserver" Pod="calico-apiserver-57cd557d9d-gsvqk" WorkloadEndpoint="ip--172--31--16--158-k8s-calico--apiserver--57cd557d9d--gsvqk-" May 8 00:14:22.744222 containerd[1915]: 2025-05-08 00:14:22.614 [INFO][4868] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="47e6e7837113660baf8009e0d1a441723957f3dd5aede496e4a381a46b1d0608" Namespace="calico-apiserver" Pod="calico-apiserver-57cd557d9d-gsvqk" WorkloadEndpoint="ip--172--31--16--158-k8s-calico--apiserver--57cd557d9d--gsvqk-eth0" May 8 00:14:22.744222 containerd[1915]: 2025-05-08 00:14:22.648 [INFO][4880] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="47e6e7837113660baf8009e0d1a441723957f3dd5aede496e4a381a46b1d0608" HandleID="k8s-pod-network.47e6e7837113660baf8009e0d1a441723957f3dd5aede496e4a381a46b1d0608" Workload="ip--172--31--16--158-k8s-calico--apiserver--57cd557d9d--gsvqk-eth0" May 8 00:14:22.744222 containerd[1915]: 2025-05-08 00:14:22.660 [INFO][4880] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="47e6e7837113660baf8009e0d1a441723957f3dd5aede496e4a381a46b1d0608" HandleID="k8s-pod-network.47e6e7837113660baf8009e0d1a441723957f3dd5aede496e4a381a46b1d0608" Workload="ip--172--31--16--158-k8s-calico--apiserver--57cd557d9d--gsvqk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000312f60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-16-158", "pod":"calico-apiserver-57cd557d9d-gsvqk", "timestamp":"2025-05-08 00:14:22.648399738 +0000 UTC"}, Hostname:"ip-172-31-16-158", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:14:22.744222 containerd[1915]: 2025-05-08 00:14:22.660 [INFO][4880] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:14:22.744222 containerd[1915]: 2025-05-08 00:14:22.660 [INFO][4880] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:14:22.744222 containerd[1915]: 2025-05-08 00:14:22.660 [INFO][4880] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-158' May 8 00:14:22.744222 containerd[1915]: 2025-05-08 00:14:22.663 [INFO][4880] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.47e6e7837113660baf8009e0d1a441723957f3dd5aede496e4a381a46b1d0608" host="ip-172-31-16-158" May 8 00:14:22.744222 containerd[1915]: 2025-05-08 00:14:22.667 [INFO][4880] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-158" May 8 00:14:22.744222 containerd[1915]: 2025-05-08 00:14:22.677 [INFO][4880] ipam/ipam.go 489: Trying affinity for 192.168.102.128/26 host="ip-172-31-16-158" May 8 00:14:22.744222 containerd[1915]: 2025-05-08 00:14:22.679 [INFO][4880] ipam/ipam.go 155: Attempting to load block cidr=192.168.102.128/26 host="ip-172-31-16-158" May 8 00:14:22.744222 containerd[1915]: 2025-05-08 00:14:22.682 [INFO][4880] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.102.128/26 host="ip-172-31-16-158" May 8 00:14:22.744222 containerd[1915]: 2025-05-08 00:14:22.682 [INFO][4880] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.102.128/26 handle="k8s-pod-network.47e6e7837113660baf8009e0d1a441723957f3dd5aede496e4a381a46b1d0608" host="ip-172-31-16-158" May 8 00:14:22.744222 containerd[1915]: 2025-05-08 00:14:22.684 [INFO][4880] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.47e6e7837113660baf8009e0d1a441723957f3dd5aede496e4a381a46b1d0608 May 8 00:14:22.744222 containerd[1915]: 2025-05-08 00:14:22.689 [INFO][4880] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.102.128/26 handle="k8s-pod-network.47e6e7837113660baf8009e0d1a441723957f3dd5aede496e4a381a46b1d0608" host="ip-172-31-16-158" May 8 00:14:22.744222 containerd[1915]: 2025-05-08 00:14:22.701 [INFO][4880] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.102.130/26] block=192.168.102.128/26 handle="k8s-pod-network.47e6e7837113660baf8009e0d1a441723957f3dd5aede496e4a381a46b1d0608" host="ip-172-31-16-158" May 8 00:14:22.744222 containerd[1915]: 2025-05-08 00:14:22.701 [INFO][4880] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.102.130/26] handle="k8s-pod-network.47e6e7837113660baf8009e0d1a441723957f3dd5aede496e4a381a46b1d0608" host="ip-172-31-16-158" May 8 00:14:22.744222 containerd[1915]: 2025-05-08 00:14:22.701 [INFO][4880] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:14:22.744222 containerd[1915]: 2025-05-08 00:14:22.701 [INFO][4880] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.102.130/26] IPv6=[] ContainerID="47e6e7837113660baf8009e0d1a441723957f3dd5aede496e4a381a46b1d0608" HandleID="k8s-pod-network.47e6e7837113660baf8009e0d1a441723957f3dd5aede496e4a381a46b1d0608" Workload="ip--172--31--16--158-k8s-calico--apiserver--57cd557d9d--gsvqk-eth0" May 8 00:14:22.746396 containerd[1915]: 2025-05-08 00:14:22.707 [INFO][4868] cni-plugin/k8s.go 386: Populated endpoint ContainerID="47e6e7837113660baf8009e0d1a441723957f3dd5aede496e4a381a46b1d0608" Namespace="calico-apiserver" Pod="calico-apiserver-57cd557d9d-gsvqk" WorkloadEndpoint="ip--172--31--16--158-k8s-calico--apiserver--57cd557d9d--gsvqk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--158-k8s-calico--apiserver--57cd557d9d--gsvqk-eth0", GenerateName:"calico-apiserver-57cd557d9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d29fd0d3-050b-416f-a3ea-5cca6b716f15", ResourceVersion:"1210", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 14, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57cd557d9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-158", ContainerID:"", Pod:"calico-apiserver-57cd557d9d-gsvqk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.102.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali22b24c36fdb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:14:22.746396 containerd[1915]: 2025-05-08 00:14:22.708 [INFO][4868] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.102.130/32] ContainerID="47e6e7837113660baf8009e0d1a441723957f3dd5aede496e4a381a46b1d0608" Namespace="calico-apiserver" Pod="calico-apiserver-57cd557d9d-gsvqk" WorkloadEndpoint="ip--172--31--16--158-k8s-calico--apiserver--57cd557d9d--gsvqk-eth0" May 8 00:14:22.746396 containerd[1915]: 2025-05-08 00:14:22.708 [INFO][4868] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali22b24c36fdb ContainerID="47e6e7837113660baf8009e0d1a441723957f3dd5aede496e4a381a46b1d0608" Namespace="calico-apiserver" Pod="calico-apiserver-57cd557d9d-gsvqk" WorkloadEndpoint="ip--172--31--16--158-k8s-calico--apiserver--57cd557d9d--gsvqk-eth0" May 8 00:14:22.746396 containerd[1915]: 2025-05-08 00:14:22.711 [INFO][4868] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="47e6e7837113660baf8009e0d1a441723957f3dd5aede496e4a381a46b1d0608" Namespace="calico-apiserver" Pod="calico-apiserver-57cd557d9d-gsvqk" WorkloadEndpoint="ip--172--31--16--158-k8s-calico--apiserver--57cd557d9d--gsvqk-eth0" May 8 00:14:22.746396 containerd[1915]: 2025-05-08 00:14:22.715 [INFO][4868] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="47e6e7837113660baf8009e0d1a441723957f3dd5aede496e4a381a46b1d0608" Namespace="calico-apiserver" Pod="calico-apiserver-57cd557d9d-gsvqk" WorkloadEndpoint="ip--172--31--16--158-k8s-calico--apiserver--57cd557d9d--gsvqk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--158-k8s-calico--apiserver--57cd557d9d--gsvqk-eth0", GenerateName:"calico-apiserver-57cd557d9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d29fd0d3-050b-416f-a3ea-5cca6b716f15", ResourceVersion:"1210", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 14, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57cd557d9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-158", ContainerID:"47e6e7837113660baf8009e0d1a441723957f3dd5aede496e4a381a46b1d0608", Pod:"calico-apiserver-57cd557d9d-gsvqk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.102.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali22b24c36fdb", MAC:"7a:12:a2:8a:a2:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:14:22.746396 containerd[1915]: 2025-05-08 00:14:22.733 [INFO][4868] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="47e6e7837113660baf8009e0d1a441723957f3dd5aede496e4a381a46b1d0608" Namespace="calico-apiserver" Pod="calico-apiserver-57cd557d9d-gsvqk" WorkloadEndpoint="ip--172--31--16--158-k8s-calico--apiserver--57cd557d9d--gsvqk-eth0" May 8 00:14:22.787750 containerd[1915]: time="2025-05-08T00:14:22.787408086Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:14:22.787750 containerd[1915]: time="2025-05-08T00:14:22.787503962Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:14:22.787750 containerd[1915]: time="2025-05-08T00:14:22.787523777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:14:22.788288 containerd[1915]: time="2025-05-08T00:14:22.788201201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:14:22.814055 systemd[1]: Started cri-containerd-47e6e7837113660baf8009e0d1a441723957f3dd5aede496e4a381a46b1d0608.scope - libcontainer container 47e6e7837113660baf8009e0d1a441723957f3dd5aede496e4a381a46b1d0608. 
May 8 00:14:22.859253 containerd[1915]: time="2025-05-08T00:14:22.859204575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57cd557d9d-gsvqk,Uid:d29fd0d3-050b-416f-a3ea-5cca6b716f15,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"47e6e7837113660baf8009e0d1a441723957f3dd5aede496e4a381a46b1d0608\"" May 8 00:14:22.861540 containerd[1915]: time="2025-05-08T00:14:22.861225519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 8 00:14:24.667054 systemd-networkd[1723]: cali22b24c36fdb: Gained IPv6LL May 8 00:14:25.693877 containerd[1915]: time="2025-05-08T00:14:25.693832272Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:14:25.695359 containerd[1915]: time="2025-05-08T00:14:25.695284174Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" May 8 00:14:25.696568 containerd[1915]: time="2025-05-08T00:14:25.696294049Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:14:25.699003 containerd[1915]: time="2025-05-08T00:14:25.698962987Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:14:25.699890 containerd[1915]: time="2025-05-08T00:14:25.699860984Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 2.838599279s" May 8 00:14:25.700010 containerd[1915]: time="2025-05-08T00:14:25.699995220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 8 00:14:25.703220 containerd[1915]: time="2025-05-08T00:14:25.703189104Z" level=info msg="CreateContainer within sandbox \"47e6e7837113660baf8009e0d1a441723957f3dd5aede496e4a381a46b1d0608\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 8 00:14:25.719144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2101392387.mount: Deactivated successfully. May 8 00:14:25.722510 containerd[1915]: time="2025-05-08T00:14:25.722467329Z" level=info msg="CreateContainer within sandbox \"47e6e7837113660baf8009e0d1a441723957f3dd5aede496e4a381a46b1d0608\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e360645d25721fbce2a4135ab788d14afc709e2e8bd1cb189583d6283702bd65\"" May 8 00:14:25.723140 containerd[1915]: time="2025-05-08T00:14:25.723110997Z" level=info msg="StartContainer for \"e360645d25721fbce2a4135ab788d14afc709e2e8bd1cb189583d6283702bd65\"" May 8 00:14:25.764050 systemd[1]: Started cri-containerd-e360645d25721fbce2a4135ab788d14afc709e2e8bd1cb189583d6283702bd65.scope - libcontainer container e360645d25721fbce2a4135ab788d14afc709e2e8bd1cb189583d6283702bd65. 
May 8 00:14:25.812763 containerd[1915]: time="2025-05-08T00:14:25.812635527Z" level=info msg="StartContainer for \"e360645d25721fbce2a4135ab788d14afc709e2e8bd1cb189583d6283702bd65\" returns successfully" May 8 00:14:26.924910 kubelet[3169]: I0508 00:14:26.924539 3169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-57cd557d9d-gsvqk" podStartSLOduration=2.083448366 podStartE2EDuration="4.924516695s" podCreationTimestamp="2025-05-08 00:14:22 +0000 UTC" firstStartedPulling="2025-05-08 00:14:22.86064395 +0000 UTC m=+75.856867814" lastFinishedPulling="2025-05-08 00:14:25.701712276 +0000 UTC m=+78.697936143" observedRunningTime="2025-05-08 00:14:26.605824894 +0000 UTC m=+79.602048771" watchObservedRunningTime="2025-05-08 00:14:26.924516695 +0000 UTC m=+79.920740586" May 8 00:14:27.215100 systemd[1]: Started sshd@17-172.31.16.158:22-139.178.68.195:54654.service - OpenSSH per-connection server daemon (139.178.68.195:54654). May 8 00:14:27.422265 sshd[4999]: Accepted publickey for core from 139.178.68.195 port 54654 ssh2: RSA SHA256:KzzWn6O+Z3VZj7W5xu29TBqYrCKq78VLDb+pogeWJHY May 8 00:14:27.426043 sshd-session[4999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:14:27.435685 systemd-logind[1899]: New session 18 of user core. May 8 00:14:27.440301 ntpd[1891]: Listen normally on 10 cali22b24c36fdb [fe80::ecee:eeff:feee:eeee%8]:123 May 8 00:14:27.442481 ntpd[1891]: 8 May 00:14:27 ntpd[1891]: Listen normally on 10 cali22b24c36fdb [fe80::ecee:eeff:feee:eeee%8]:123 May 8 00:14:27.442666 systemd[1]: Started session-18.scope - Session 18 of User core. May 8 00:14:27.924900 sshd[5001]: Connection closed by 139.178.68.195 port 54654 May 8 00:14:27.925339 sshd-session[4999]: pam_unix(sshd:session): session closed for user core May 8 00:14:27.930543 systemd[1]: sshd@17-172.31.16.158:22-139.178.68.195:54654.service: Deactivated successfully. May 8 00:14:27.932753 systemd[1]: session-18.scope: Deactivated successfully. May 8 00:14:27.934964 systemd-logind[1899]: Session 18 logged out. Waiting for processes to exit. May 8 00:14:27.936381 systemd-logind[1899]: Removed session 18. May 8 00:14:32.963318 systemd[1]: Started sshd@18-172.31.16.158:22-139.178.68.195:54670.service - OpenSSH per-connection server daemon (139.178.68.195:54670). May 8 00:14:33.161327 sshd[5024]: Accepted publickey for core from 139.178.68.195 port 54670 ssh2: RSA SHA256:KzzWn6O+Z3VZj7W5xu29TBqYrCKq78VLDb+pogeWJHY May 8 00:14:33.163050 sshd-session[5024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:14:33.168579 systemd-logind[1899]: New session 19 of user core. May 8 00:14:33.174051 systemd[1]: Started session-19.scope - Session 19 of User core. May 8 00:14:33.415426 sshd[5034]: Connection closed by 139.178.68.195 port 54670 May 8 00:14:33.417070 sshd-session[5024]: pam_unix(sshd:session): session closed for user core May 8 00:14:33.420538 systemd[1]: sshd@18-172.31.16.158:22-139.178.68.195:54670.service: Deactivated successfully. May 8 00:14:33.422780 systemd[1]: session-19.scope: Deactivated successfully. May 8 00:14:33.423674 systemd-logind[1899]: Session 19 logged out. Waiting for processes to exit. May 8 00:14:33.425426 systemd-logind[1899]: Removed session 19. May 8 00:14:33.455168 systemd[1]: Started sshd@19-172.31.16.158:22-139.178.68.195:54682.service - OpenSSH per-connection server daemon (139.178.68.195:54682). 
May 8 00:14:33.616617 sshd[5046]: Accepted publickey for core from 139.178.68.195 port 54682 ssh2: RSA SHA256:KzzWn6O+Z3VZj7W5xu29TBqYrCKq78VLDb+pogeWJHY May 8 00:14:33.618153 sshd-session[5046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:14:33.624504 systemd-logind[1899]: New session 20 of user core. May 8 00:14:33.627023 systemd[1]: Started session-20.scope - Session 20 of User core. May 8 00:14:36.443660 sshd[5048]: Connection closed by 139.178.68.195 port 54682 May 8 00:14:36.444623 sshd-session[5046]: pam_unix(sshd:session): session closed for user core May 8 00:14:36.452598 systemd[1]: sshd@19-172.31.16.158:22-139.178.68.195:54682.service: Deactivated successfully. May 8 00:14:36.455116 systemd[1]: session-20.scope: Deactivated successfully. May 8 00:14:36.455929 systemd-logind[1899]: Session 20 logged out. Waiting for processes to exit. May 8 00:14:36.457205 systemd-logind[1899]: Removed session 20. May 8 00:14:36.480737 systemd[1]: Started sshd@20-172.31.16.158:22-139.178.68.195:40920.service - OpenSSH per-connection server daemon (139.178.68.195:40920). May 8 00:14:36.668422 sshd[5058]: Accepted publickey for core from 139.178.68.195 port 40920 ssh2: RSA SHA256:KzzWn6O+Z3VZj7W5xu29TBqYrCKq78VLDb+pogeWJHY May 8 00:14:36.670427 sshd-session[5058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:14:36.676488 systemd-logind[1899]: New session 21 of user core. May 8 00:14:36.682055 systemd[1]: Started session-21.scope - Session 21 of User core. May 8 00:14:37.806936 sshd[5060]: Connection closed by 139.178.68.195 port 40920 May 8 00:14:37.809397 sshd-session[5058]: pam_unix(sshd:session): session closed for user core May 8 00:14:37.814860 systemd[1]: sshd@20-172.31.16.158:22-139.178.68.195:40920.service: Deactivated successfully. May 8 00:14:37.819117 systemd[1]: session-21.scope: Deactivated successfully. May 8 00:14:37.820339 systemd-logind[1899]: Session 21 logged out. Waiting for processes to exit. May 8 00:14:37.822700 systemd-logind[1899]: Removed session 21. May 8 00:14:37.840240 systemd[1]: Started sshd@21-172.31.16.158:22-139.178.68.195:40930.service - OpenSSH per-connection server daemon (139.178.68.195:40930). May 8 00:14:38.018517 sshd[5080]: Accepted publickey for core from 139.178.68.195 port 40930 ssh2: RSA SHA256:KzzWn6O+Z3VZj7W5xu29TBqYrCKq78VLDb+pogeWJHY May 8 00:14:38.019965 sshd-session[5080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:14:38.025405 systemd-logind[1899]: New session 22 of user core. May 8 00:14:38.032070 systemd[1]: Started session-22.scope - Session 22 of User core. May 8 00:14:38.561913 sshd[5083]: Connection closed by 139.178.68.195 port 40930 May 8 00:14:38.563423 sshd-session[5080]: pam_unix(sshd:session): session closed for user core May 8 00:14:38.567166 systemd-logind[1899]: Session 22 logged out. Waiting for processes to exit. May 8 00:14:38.568054 systemd[1]: sshd@21-172.31.16.158:22-139.178.68.195:40930.service: Deactivated successfully. May 8 00:14:38.570429 systemd[1]: session-22.scope: Deactivated successfully. May 8 00:14:38.571274 systemd-logind[1899]: Removed session 22. May 8 00:14:38.604283 systemd[1]: Started sshd@22-172.31.16.158:22-139.178.68.195:40938.service - OpenSSH per-connection server daemon (139.178.68.195:40938). 
May 8 00:14:38.791114 sshd[5093]: Accepted publickey for core from 139.178.68.195 port 40938 ssh2: RSA SHA256:KzzWn6O+Z3VZj7W5xu29TBqYrCKq78VLDb+pogeWJHY May 8 00:14:38.792751 sshd-session[5093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:14:38.797997 systemd-logind[1899]: New session 23 of user core. May 8 00:14:38.802045 systemd[1]: Started session-23.scope - Session 23 of User core. May 8 00:14:39.033506 sshd[5095]: Connection closed by 139.178.68.195 port 40938 May 8 00:14:39.034163 sshd-session[5093]: pam_unix(sshd:session): session closed for user core May 8 00:14:39.039579 systemd[1]: sshd@22-172.31.16.158:22-139.178.68.195:40938.service: Deactivated successfully. May 8 00:14:39.042187 systemd[1]: session-23.scope: Deactivated successfully. May 8 00:14:39.043418 systemd-logind[1899]: Session 23 logged out. Waiting for processes to exit. May 8 00:14:39.044689 systemd-logind[1899]: Removed session 23. May 8 00:14:44.075230 systemd[1]: Started sshd@23-172.31.16.158:22-139.178.68.195:40954.service - OpenSSH per-connection server daemon (139.178.68.195:40954). May 8 00:14:44.251346 sshd[5133]: Accepted publickey for core from 139.178.68.195 port 40954 ssh2: RSA SHA256:KzzWn6O+Z3VZj7W5xu29TBqYrCKq78VLDb+pogeWJHY May 8 00:14:44.253054 sshd-session[5133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:14:44.261329 systemd-logind[1899]: New session 24 of user core. May 8 00:14:44.266016 systemd[1]: Started session-24.scope - Session 24 of User core. May 8 00:14:44.553946 sshd[5135]: Connection closed by 139.178.68.195 port 40954 May 8 00:14:44.554568 sshd-session[5133]: pam_unix(sshd:session): session closed for user core May 8 00:14:44.557518 systemd[1]: sshd@23-172.31.16.158:22-139.178.68.195:40954.service: Deactivated successfully. May 8 00:14:44.559555 systemd[1]: session-24.scope: Deactivated successfully. May 8 00:14:44.562054 systemd-logind[1899]: Session 24 logged out. Waiting for processes to exit. May 8 00:14:44.563208 systemd-logind[1899]: Removed session 24. May 8 00:14:46.421446 systemd[1]: Created slice kubepods-besteffort-pod6cc5c468_e114_42e3_9c1f_f6f05c534c12.slice - libcontainer container kubepods-besteffort-pod6cc5c468_e114_42e3_9c1f_f6f05c534c12.slice. 
May 8 00:14:46.543490 kubelet[3169]: I0508 00:14:46.541862 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6cc5c468-e114-42e3-9c1f-f6f05c534c12-typha-certs\") pod \"calico-typha-599dbc9d76-xq8l5\" (UID: \"6cc5c468-e114-42e3-9c1f-f6f05c534c12\") " pod="calico-system/calico-typha-599dbc9d76-xq8l5" May 8 00:14:46.543490 kubelet[3169]: I0508 00:14:46.541936 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4krmk\" (UniqueName: \"kubernetes.io/projected/6cc5c468-e114-42e3-9c1f-f6f05c534c12-kube-api-access-4krmk\") pod \"calico-typha-599dbc9d76-xq8l5\" (UID: \"6cc5c468-e114-42e3-9c1f-f6f05c534c12\") " pod="calico-system/calico-typha-599dbc9d76-xq8l5" May 8 00:14:46.543490 kubelet[3169]: I0508 00:14:46.542001 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cc5c468-e114-42e3-9c1f-f6f05c534c12-tigera-ca-bundle\") pod \"calico-typha-599dbc9d76-xq8l5\" (UID: \"6cc5c468-e114-42e3-9c1f-f6f05c534c12\") " pod="calico-system/calico-typha-599dbc9d76-xq8l5" May 8 00:14:46.732319 containerd[1915]: time="2025-05-08T00:14:46.731965095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-599dbc9d76-xq8l5,Uid:6cc5c468-e114-42e3-9c1f-f6f05c534c12,Namespace:calico-system,Attempt:0,}" May 8 00:14:46.798450 containerd[1915]: time="2025-05-08T00:14:46.797141942Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:14:46.798659 containerd[1915]: time="2025-05-08T00:14:46.798371550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:14:46.801089 containerd[1915]: time="2025-05-08T00:14:46.800844402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:14:46.801089 containerd[1915]: time="2025-05-08T00:14:46.800999797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:14:46.860087 containerd[1915]: time="2025-05-08T00:14:46.859506059Z" level=info msg="StopContainer for \"5702869af5d0ab73cb494ed92d12e6b10e77131a04705ab13e1bde7ad70792aa\" with timeout 5 (s)" May 8 00:14:46.859918 systemd[1]: Started cri-containerd-ed7491dbaa43ba0e52d42a799f0b8aac5af33da18fe982ef5fd16a4495469a8d.scope - libcontainer container ed7491dbaa43ba0e52d42a799f0b8aac5af33da18fe982ef5fd16a4495469a8d. May 8 00:14:46.872372 containerd[1915]: time="2025-05-08T00:14:46.872330198Z" level=info msg="Stop container \"5702869af5d0ab73cb494ed92d12e6b10e77131a04705ab13e1bde7ad70792aa\" with signal terminated" May 8 00:14:46.920895 systemd[1]: cri-containerd-5702869af5d0ab73cb494ed92d12e6b10e77131a04705ab13e1bde7ad70792aa.scope: Deactivated successfully. May 8 00:14:46.921859 systemd[1]: cri-containerd-5702869af5d0ab73cb494ed92d12e6b10e77131a04705ab13e1bde7ad70792aa.scope: Consumed 2.240s CPU time, 163M memory peak, 14.4M read from disk, 624K written to disk. 
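The "StopContainer ... with timeout 5 (s)" and "Stop container ... with signal terminated" entries, followed by the scope deactivating, are the old calico-node container being sent SIGTERM and exiting within its grace period. Below is a rough sketch of that stop pattern using the containerd task API; the container ID and the 5-second grace period come from the log, while the SIGKILL escalation branch and the rest are illustrative, not the CRI plugin's actual implementation.

```go
// Sketch: stop a running containerd task the way the "StopContainer ... with
// timeout 5 (s)" entries describe: SIGTERM first, escalate to SIGKILL if the
// task has not exited within the grace period. Illustrative only.
package main

import (
	"context"
	"log"
	"syscall"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Container ID taken from the log entries above.
	container, err := client.LoadContainer(ctx, "5702869af5d0ab73cb494ed92d12e6b10e77131a04705ab13e1bde7ad70792aa")
	if err != nil {
		log.Fatal(err)
	}
	task, err := container.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}

	exitCh, err := task.Wait(ctx) // register for the exit event before signalling
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		log.Fatal(err)
	}

	select {
	case status := <-exitCh:
		log.Printf("exited with status %d", status.ExitCode())
	case <-time.After(5 * time.Second):
		// Grace period elapsed; escalate.
		_ = task.Kill(ctx, syscall.SIGKILL)
		<-exitCh
	}
}
```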
May 8 00:14:46.973010 containerd[1915]: time="2025-05-08T00:14:46.968018629Z" level=info msg="shim disconnected" id=5702869af5d0ab73cb494ed92d12e6b10e77131a04705ab13e1bde7ad70792aa namespace=k8s.io May 8 00:14:46.973010 containerd[1915]: time="2025-05-08T00:14:46.972092283Z" level=warning msg="cleaning up after shim disconnected" id=5702869af5d0ab73cb494ed92d12e6b10e77131a04705ab13e1bde7ad70792aa namespace=k8s.io May 8 00:14:46.973010 containerd[1915]: time="2025-05-08T00:14:46.972114603Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:14:47.000720 containerd[1915]: time="2025-05-08T00:14:47.000600779Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:14:46Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 8 00:14:47.035968 containerd[1915]: time="2025-05-08T00:14:47.035894713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-599dbc9d76-xq8l5,Uid:6cc5c468-e114-42e3-9c1f-f6f05c534c12,Namespace:calico-system,Attempt:0,} returns sandbox id \"ed7491dbaa43ba0e52d42a799f0b8aac5af33da18fe982ef5fd16a4495469a8d\"" May 8 00:14:47.039794 containerd[1915]: time="2025-05-08T00:14:47.039517640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 8 00:14:47.051767 containerd[1915]: time="2025-05-08T00:14:47.051645931Z" level=info msg="StopContainer for \"5702869af5d0ab73cb494ed92d12e6b10e77131a04705ab13e1bde7ad70792aa\" returns successfully" May 8 00:14:47.052408 containerd[1915]: time="2025-05-08T00:14:47.052375052Z" level=info msg="StopPodSandbox for \"e69de7e8d23ddc780eb48e3c2a2574229691227163f580ac5937ca29ef9e6f0d\"" May 8 00:14:47.067085 containerd[1915]: time="2025-05-08T00:14:47.066728266Z" level=info msg="Container to stop \"5702869af5d0ab73cb494ed92d12e6b10e77131a04705ab13e1bde7ad70792aa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:14:47.070073 containerd[1915]: time="2025-05-08T00:14:47.067867538Z" level=info msg="Container to stop \"14c6f2e645c4e7f78805c9a59fcfd047e72222c58f3845a17ad2a67cd1d2564c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:14:47.070073 containerd[1915]: time="2025-05-08T00:14:47.067901907Z" level=info msg="Container to stop \"c28e260e402e5ab5164fe9b38a512017544b58b09482d01f48101d853a4e3cab\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:14:47.082618 systemd[1]: cri-containerd-e69de7e8d23ddc780eb48e3c2a2574229691227163f580ac5937ca29ef9e6f0d.scope: Deactivated successfully. 
May 8 00:14:47.135612 containerd[1915]: time="2025-05-08T00:14:47.135548626Z" level=info msg="shim disconnected" id=e69de7e8d23ddc780eb48e3c2a2574229691227163f580ac5937ca29ef9e6f0d namespace=k8s.io May 8 00:14:47.136123 containerd[1915]: time="2025-05-08T00:14:47.135920175Z" level=warning msg="cleaning up after shim disconnected" id=e69de7e8d23ddc780eb48e3c2a2574229691227163f580ac5937ca29ef9e6f0d namespace=k8s.io May 8 00:14:47.136123 containerd[1915]: time="2025-05-08T00:14:47.135941986Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:14:47.178456 containerd[1915]: time="2025-05-08T00:14:47.178410627Z" level=info msg="TearDown network for sandbox \"e69de7e8d23ddc780eb48e3c2a2574229691227163f580ac5937ca29ef9e6f0d\" successfully" May 8 00:14:47.178456 containerd[1915]: time="2025-05-08T00:14:47.178441140Z" level=info msg="StopPodSandbox for \"e69de7e8d23ddc780eb48e3c2a2574229691227163f580ac5937ca29ef9e6f0d\" returns successfully" May 8 00:14:47.259905 kubelet[3169]: I0508 00:14:47.259577 3169 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-policysync\") pod \"4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7\" (UID: \"4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7\") " May 8 00:14:47.259905 kubelet[3169]: I0508 00:14:47.259652 3169 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-cni-log-dir\") pod \"4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7\" (UID: \"4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7\") " May 8 00:14:47.259905 kubelet[3169]: I0508 00:14:47.259684 3169 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-tigera-ca-bundle\") pod \"4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7\" (UID: \"4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7\") " May 8 00:14:47.259905 kubelet[3169]: I0508 00:14:47.259715 3169 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-var-lib-calico\") pod \"4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7\" (UID: \"4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7\") " May 8 00:14:47.259905 kubelet[3169]: I0508 00:14:47.259738 3169 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-xtables-lock\") pod \"4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7\" (UID: \"4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7\") " May 8 00:14:47.259905 kubelet[3169]: I0508 00:14:47.259769 3169 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp84t\" (UniqueName: \"kubernetes.io/projected/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-kube-api-access-cp84t\") pod \"4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7\" (UID: \"4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7\") " May 8 00:14:47.260291 kubelet[3169]: I0508 00:14:47.259797 3169 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-node-certs\") pod \"4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7\" (UID: \"4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7\") " May 8 00:14:47.260291 kubelet[3169]: I0508 00:14:47.259830 3169 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-lib-modules\") pod \"4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7\" (UID: \"4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7\") " May 8 00:14:47.260291 kubelet[3169]: I0508 00:14:47.259853 3169 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-flexvol-driver-host\") pod \"4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7\" (UID: \"4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7\") " May 8 00:14:47.260291 kubelet[3169]: I0508 00:14:47.259880 3169 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-cni-net-dir\") pod \"4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7\" (UID: \"4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7\") " May 8 00:14:47.260291 kubelet[3169]: I0508 00:14:47.259903 3169 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-var-run-calico\") pod \"4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7\" (UID: \"4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7\") " May 8 00:14:47.260291 kubelet[3169]: I0508 00:14:47.259925 3169 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-cni-bin-dir\") pod \"4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7\" (UID: \"4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7\") " May 8 00:14:47.272846 kubelet[3169]: I0508 00:14:47.272408 3169 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-kube-api-access-cp84t" (OuterVolumeSpecName: "kube-api-access-cp84t") pod "4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7" (UID: "4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7"). InnerVolumeSpecName "kube-api-access-cp84t". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 8 00:14:47.272846 kubelet[3169]: I0508 00:14:47.272504 3169 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-policysync" (OuterVolumeSpecName: "policysync") pod "4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7" (UID: "4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:14:47.272846 kubelet[3169]: I0508 00:14:47.272538 3169 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7" (UID: "4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:14:47.283845 kubelet[3169]: I0508 00:14:47.282376 3169 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-node-certs" (OuterVolumeSpecName: "node-certs") pod "4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7" (UID: "4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7"). InnerVolumeSpecName "node-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 8 00:14:47.283845 kubelet[3169]: I0508 00:14:47.282464 3169 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7" (UID: "4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:14:47.283845 kubelet[3169]: I0508 00:14:47.282490 3169 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7" (UID: "4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:14:47.283845 kubelet[3169]: I0508 00:14:47.260026 3169 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7" (UID: "4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:14:47.283845 kubelet[3169]: I0508 00:14:47.282528 3169 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7" (UID: "4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:14:47.284182 kubelet[3169]: I0508 00:14:47.282550 3169 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7" (UID: "4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:14:47.284766 kubelet[3169]: I0508 00:14:47.284731 3169 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7" (UID: "4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:14:47.285551 kubelet[3169]: I0508 00:14:47.285181 3169 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7" (UID: "4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:14:47.286933 kubelet[3169]: I0508 00:14:47.286410 3169 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7" (UID: "4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 8 00:14:47.336839 kubelet[3169]: I0508 00:14:47.336448 3169 memory_manager.go:355] "RemoveStaleState removing state" podUID="4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7" containerName="calico-node" May 8 00:14:47.345704 systemd[1]: Created slice kubepods-besteffort-podff4b5c21_46e2_46bf_8dcf_c544e86fa23e.slice - libcontainer container kubepods-besteffort-podff4b5c21_46e2_46bf_8dcf_c544e86fa23e.slice. May 8 00:14:47.360765 kubelet[3169]: I0508 00:14:47.360633 3169 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cp84t\" (UniqueName: \"kubernetes.io/projected/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-kube-api-access-cp84t\") on node \"ip-172-31-16-158\" DevicePath \"\"" May 8 00:14:47.360765 kubelet[3169]: I0508 00:14:47.360668 3169 reconciler_common.go:299] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-node-certs\") on node \"ip-172-31-16-158\" DevicePath \"\"" May 8 00:14:47.360765 kubelet[3169]: I0508 00:14:47.360690 3169 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-lib-modules\") on node \"ip-172-31-16-158\" DevicePath \"\"" May 8 00:14:47.360765 kubelet[3169]: I0508 00:14:47.360699 3169 reconciler_common.go:299] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-flexvol-driver-host\") on node \"ip-172-31-16-158\" DevicePath \"\"" May 8 00:14:47.360765 kubelet[3169]: I0508 00:14:47.360713 3169 reconciler_common.go:299] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-cni-net-dir\") on node \"ip-172-31-16-158\" DevicePath \"\"" May 8 00:14:47.360765 kubelet[3169]: I0508 00:14:47.360721 3169 reconciler_common.go:299] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-var-run-calico\") on node \"ip-172-31-16-158\" DevicePath \"\"" May 8 00:14:47.360765 kubelet[3169]: I0508 00:14:47.360730 3169 reconciler_common.go:299] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-cni-bin-dir\") on node \"ip-172-31-16-158\" DevicePath \"\"" May 8 00:14:47.360765 kubelet[3169]: I0508 00:14:47.360739 3169 reconciler_common.go:299] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-policysync\") on node \"ip-172-31-16-158\" DevicePath \"\"" May 8 00:14:47.361176 kubelet[3169]: I0508 00:14:47.360847 3169 reconciler_common.go:299] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-cni-log-dir\") on node \"ip-172-31-16-158\" DevicePath \"\"" May 8 00:14:47.361176 kubelet[3169]: I0508 00:14:47.360862 3169 reconciler_common.go:299] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-tigera-ca-bundle\") on node \"ip-172-31-16-158\" DevicePath \"\"" May 8 00:14:47.361176 kubelet[3169]: I0508 00:14:47.360870 3169 reconciler_common.go:299] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-var-lib-calico\") on node \"ip-172-31-16-158\" DevicePath \"\"" May 8 00:14:47.361176 kubelet[3169]: I0508 00:14:47.360879 3169 
reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7-xtables-lock\") on node \"ip-172-31-16-158\" DevicePath \"\"" May 8 00:14:47.461880 kubelet[3169]: I0508 00:14:47.461545 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ff4b5c21-46e2-46bf-8dcf-c544e86fa23e-cni-net-dir\") pod \"calico-node-27nlw\" (UID: \"ff4b5c21-46e2-46bf-8dcf-c544e86fa23e\") " pod="calico-system/calico-node-27nlw" May 8 00:14:47.461880 kubelet[3169]: I0508 00:14:47.461589 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ff4b5c21-46e2-46bf-8dcf-c544e86fa23e-flexvol-driver-host\") pod \"calico-node-27nlw\" (UID: \"ff4b5c21-46e2-46bf-8dcf-c544e86fa23e\") " pod="calico-system/calico-node-27nlw" May 8 00:14:47.461880 kubelet[3169]: I0508 00:14:47.461651 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff4b5c21-46e2-46bf-8dcf-c544e86fa23e-tigera-ca-bundle\") pod \"calico-node-27nlw\" (UID: \"ff4b5c21-46e2-46bf-8dcf-c544e86fa23e\") " pod="calico-system/calico-node-27nlw" May 8 00:14:47.461880 kubelet[3169]: I0508 00:14:47.461684 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ff4b5c21-46e2-46bf-8dcf-c544e86fa23e-var-run-calico\") pod \"calico-node-27nlw\" (UID: \"ff4b5c21-46e2-46bf-8dcf-c544e86fa23e\") " pod="calico-system/calico-node-27nlw" May 8 00:14:47.461880 kubelet[3169]: I0508 00:14:47.461702 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ff4b5c21-46e2-46bf-8dcf-c544e86fa23e-cni-bin-dir\") pod \"calico-node-27nlw\" (UID: \"ff4b5c21-46e2-46bf-8dcf-c544e86fa23e\") " pod="calico-system/calico-node-27nlw" May 8 00:14:47.462204 kubelet[3169]: I0508 00:14:47.461717 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ff4b5c21-46e2-46bf-8dcf-c544e86fa23e-cni-log-dir\") pod \"calico-node-27nlw\" (UID: \"ff4b5c21-46e2-46bf-8dcf-c544e86fa23e\") " pod="calico-system/calico-node-27nlw" May 8 00:14:47.462204 kubelet[3169]: I0508 00:14:47.461748 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff4b5c21-46e2-46bf-8dcf-c544e86fa23e-lib-modules\") pod \"calico-node-27nlw\" (UID: \"ff4b5c21-46e2-46bf-8dcf-c544e86fa23e\") " pod="calico-system/calico-node-27nlw" May 8 00:14:47.462204 kubelet[3169]: I0508 00:14:47.461771 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ff4b5c21-46e2-46bf-8dcf-c544e86fa23e-policysync\") pod \"calico-node-27nlw\" (UID: \"ff4b5c21-46e2-46bf-8dcf-c544e86fa23e\") " pod="calico-system/calico-node-27nlw" May 8 00:14:47.462204 kubelet[3169]: I0508 00:14:47.461791 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ff4b5c21-46e2-46bf-8dcf-c544e86fa23e-var-lib-calico\") pod 
\"calico-node-27nlw\" (UID: \"ff4b5c21-46e2-46bf-8dcf-c544e86fa23e\") " pod="calico-system/calico-node-27nlw" May 8 00:14:47.462204 kubelet[3169]: I0508 00:14:47.461827 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ff4b5c21-46e2-46bf-8dcf-c544e86fa23e-node-certs\") pod \"calico-node-27nlw\" (UID: \"ff4b5c21-46e2-46bf-8dcf-c544e86fa23e\") " pod="calico-system/calico-node-27nlw" May 8 00:14:47.470870 kubelet[3169]: I0508 00:14:47.461877 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn8z5\" (UniqueName: \"kubernetes.io/projected/ff4b5c21-46e2-46bf-8dcf-c544e86fa23e-kube-api-access-fn8z5\") pod \"calico-node-27nlw\" (UID: \"ff4b5c21-46e2-46bf-8dcf-c544e86fa23e\") " pod="calico-system/calico-node-27nlw" May 8 00:14:47.471005 kubelet[3169]: I0508 00:14:47.470912 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff4b5c21-46e2-46bf-8dcf-c544e86fa23e-xtables-lock\") pod \"calico-node-27nlw\" (UID: \"ff4b5c21-46e2-46bf-8dcf-c544e86fa23e\") " pod="calico-system/calico-node-27nlw" May 8 00:14:47.589215 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5702869af5d0ab73cb494ed92d12e6b10e77131a04705ab13e1bde7ad70792aa-rootfs.mount: Deactivated successfully. May 8 00:14:47.589361 systemd[1]: var-lib-kubelet-pods-4b0e2cd4\x2d769b\x2d4e5f\x2d9e77\x2dab11e14d99c7-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. May 8 00:14:47.589460 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e69de7e8d23ddc780eb48e3c2a2574229691227163f580ac5937ca29ef9e6f0d-rootfs.mount: Deactivated successfully. May 8 00:14:47.589551 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e69de7e8d23ddc780eb48e3c2a2574229691227163f580ac5937ca29ef9e6f0d-shm.mount: Deactivated successfully. May 8 00:14:47.589645 systemd[1]: var-lib-kubelet-pods-4b0e2cd4\x2d769b\x2d4e5f\x2d9e77\x2dab11e14d99c7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcp84t.mount: Deactivated successfully. May 8 00:14:47.589731 systemd[1]: var-lib-kubelet-pods-4b0e2cd4\x2d769b\x2d4e5f\x2d9e77\x2dab11e14d99c7-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. May 8 00:14:47.663717 kubelet[3169]: I0508 00:14:47.660363 3169 scope.go:117] "RemoveContainer" containerID="5702869af5d0ab73cb494ed92d12e6b10e77131a04705ab13e1bde7ad70792aa" May 8 00:14:47.664158 containerd[1915]: time="2025-05-08T00:14:47.662702663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-27nlw,Uid:ff4b5c21-46e2-46bf-8dcf-c544e86fa23e,Namespace:calico-system,Attempt:0,}" May 8 00:14:47.665552 systemd[1]: Removed slice kubepods-besteffort-pod4b0e2cd4_769b_4e5f_9e77_ab11e14d99c7.slice - libcontainer container kubepods-besteffort-pod4b0e2cd4_769b_4e5f_9e77_ab11e14d99c7.slice. May 8 00:14:47.665724 systemd[1]: kubepods-besteffort-pod4b0e2cd4_769b_4e5f_9e77_ab11e14d99c7.slice: Consumed 2.875s CPU time, 205.9M memory peak, 20M read from disk, 161.1M written to disk. 
May 8 00:14:47.682344 containerd[1915]: time="2025-05-08T00:14:47.680775794Z" level=info msg="RemoveContainer for \"5702869af5d0ab73cb494ed92d12e6b10e77131a04705ab13e1bde7ad70792aa\"" May 8 00:14:47.709465 containerd[1915]: time="2025-05-08T00:14:47.709414404Z" level=info msg="RemoveContainer for \"5702869af5d0ab73cb494ed92d12e6b10e77131a04705ab13e1bde7ad70792aa\" returns successfully" May 8 00:14:47.722496 kubelet[3169]: I0508 00:14:47.721927 3169 scope.go:117] "RemoveContainer" containerID="c28e260e402e5ab5164fe9b38a512017544b58b09482d01f48101d853a4e3cab" May 8 00:14:47.726594 containerd[1915]: time="2025-05-08T00:14:47.726192556Z" level=info msg="RemoveContainer for \"c28e260e402e5ab5164fe9b38a512017544b58b09482d01f48101d853a4e3cab\"" May 8 00:14:47.748478 containerd[1915]: time="2025-05-08T00:14:47.748387036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:14:47.749432 containerd[1915]: time="2025-05-08T00:14:47.748597847Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:14:47.749647 containerd[1915]: time="2025-05-08T00:14:47.749030903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:14:47.749647 containerd[1915]: time="2025-05-08T00:14:47.749152807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:14:47.757019 containerd[1915]: time="2025-05-08T00:14:47.756867784Z" level=info msg="RemoveContainer for \"c28e260e402e5ab5164fe9b38a512017544b58b09482d01f48101d853a4e3cab\" returns successfully" May 8 00:14:47.766278 kubelet[3169]: I0508 00:14:47.766253 3169 scope.go:117] "RemoveContainer" containerID="14c6f2e645c4e7f78805c9a59fcfd047e72222c58f3845a17ad2a67cd1d2564c" May 8 00:14:47.769520 containerd[1915]: time="2025-05-08T00:14:47.769350629Z" level=info msg="RemoveContainer for \"14c6f2e645c4e7f78805c9a59fcfd047e72222c58f3845a17ad2a67cd1d2564c\"" May 8 00:14:47.780866 containerd[1915]: time="2025-05-08T00:14:47.779423384Z" level=info msg="RemoveContainer for \"14c6f2e645c4e7f78805c9a59fcfd047e72222c58f3845a17ad2a67cd1d2564c\" returns successfully" May 8 00:14:47.781714 kubelet[3169]: I0508 00:14:47.781590 3169 scope.go:117] "RemoveContainer" containerID="5702869af5d0ab73cb494ed92d12e6b10e77131a04705ab13e1bde7ad70792aa" May 8 00:14:47.783449 containerd[1915]: time="2025-05-08T00:14:47.783219238Z" level=error msg="ContainerStatus for \"5702869af5d0ab73cb494ed92d12e6b10e77131a04705ab13e1bde7ad70792aa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5702869af5d0ab73cb494ed92d12e6b10e77131a04705ab13e1bde7ad70792aa\": not found" May 8 00:14:47.784430 kubelet[3169]: E0508 00:14:47.784277 3169 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5702869af5d0ab73cb494ed92d12e6b10e77131a04705ab13e1bde7ad70792aa\": not found" containerID="5702869af5d0ab73cb494ed92d12e6b10e77131a04705ab13e1bde7ad70792aa" May 8 00:14:47.797006 kubelet[3169]: I0508 00:14:47.784598 3169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5702869af5d0ab73cb494ed92d12e6b10e77131a04705ab13e1bde7ad70792aa"} err="failed to get container status 
\"5702869af5d0ab73cb494ed92d12e6b10e77131a04705ab13e1bde7ad70792aa\": rpc error: code = NotFound desc = an error occurred when try to find container \"5702869af5d0ab73cb494ed92d12e6b10e77131a04705ab13e1bde7ad70792aa\": not found" May 8 00:14:47.797006 kubelet[3169]: I0508 00:14:47.796938 3169 scope.go:117] "RemoveContainer" containerID="c28e260e402e5ab5164fe9b38a512017544b58b09482d01f48101d853a4e3cab" May 8 00:14:47.797499 containerd[1915]: time="2025-05-08T00:14:47.797412468Z" level=error msg="ContainerStatus for \"c28e260e402e5ab5164fe9b38a512017544b58b09482d01f48101d853a4e3cab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c28e260e402e5ab5164fe9b38a512017544b58b09482d01f48101d853a4e3cab\": not found" May 8 00:14:47.797824 kubelet[3169]: E0508 00:14:47.797699 3169 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c28e260e402e5ab5164fe9b38a512017544b58b09482d01f48101d853a4e3cab\": not found" containerID="c28e260e402e5ab5164fe9b38a512017544b58b09482d01f48101d853a4e3cab" May 8 00:14:47.797824 kubelet[3169]: I0508 00:14:47.797734 3169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c28e260e402e5ab5164fe9b38a512017544b58b09482d01f48101d853a4e3cab"} err="failed to get container status \"c28e260e402e5ab5164fe9b38a512017544b58b09482d01f48101d853a4e3cab\": rpc error: code = NotFound desc = an error occurred when try to find container \"c28e260e402e5ab5164fe9b38a512017544b58b09482d01f48101d853a4e3cab\": not found" May 8 00:14:47.797824 kubelet[3169]: I0508 00:14:47.797758 3169 scope.go:117] "RemoveContainer" containerID="14c6f2e645c4e7f78805c9a59fcfd047e72222c58f3845a17ad2a67cd1d2564c" May 8 00:14:47.798385 containerd[1915]: time="2025-05-08T00:14:47.798068209Z" level=error msg="ContainerStatus for \"14c6f2e645c4e7f78805c9a59fcfd047e72222c58f3845a17ad2a67cd1d2564c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"14c6f2e645c4e7f78805c9a59fcfd047e72222c58f3845a17ad2a67cd1d2564c\": not found" May 8 00:14:47.798193 systemd[1]: Started cri-containerd-5b74499d0a0262d957809a0848a9032caa512f42ed3c0bf8e21acb1bfcbcd24d.scope - libcontainer container 5b74499d0a0262d957809a0848a9032caa512f42ed3c0bf8e21acb1bfcbcd24d. 
May 8 00:14:47.798683 kubelet[3169]: E0508 00:14:47.798226 3169 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"14c6f2e645c4e7f78805c9a59fcfd047e72222c58f3845a17ad2a67cd1d2564c\": not found" containerID="14c6f2e645c4e7f78805c9a59fcfd047e72222c58f3845a17ad2a67cd1d2564c" May 8 00:14:47.798683 kubelet[3169]: I0508 00:14:47.798602 3169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"14c6f2e645c4e7f78805c9a59fcfd047e72222c58f3845a17ad2a67cd1d2564c"} err="failed to get container status \"14c6f2e645c4e7f78805c9a59fcfd047e72222c58f3845a17ad2a67cd1d2564c\": rpc error: code = NotFound desc = an error occurred when try to find container \"14c6f2e645c4e7f78805c9a59fcfd047e72222c58f3845a17ad2a67cd1d2564c\": not found" May 8 00:14:47.849097 containerd[1915]: time="2025-05-08T00:14:47.848167254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-27nlw,Uid:ff4b5c21-46e2-46bf-8dcf-c544e86fa23e,Namespace:calico-system,Attempt:0,} returns sandbox id \"5b74499d0a0262d957809a0848a9032caa512f42ed3c0bf8e21acb1bfcbcd24d\"" May 8 00:14:47.856581 containerd[1915]: time="2025-05-08T00:14:47.856345507Z" level=info msg="CreateContainer within sandbox \"5b74499d0a0262d957809a0848a9032caa512f42ed3c0bf8e21acb1bfcbcd24d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 8 00:14:47.888082 containerd[1915]: time="2025-05-08T00:14:47.887536519Z" level=info msg="CreateContainer within sandbox \"5b74499d0a0262d957809a0848a9032caa512f42ed3c0bf8e21acb1bfcbcd24d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7592be4b36872c64110d5400201fc7c0bf60d9559518a9a3237710009221a146\"" May 8 00:14:47.888401 containerd[1915]: time="2025-05-08T00:14:47.888356713Z" level=info msg="StartContainer for \"7592be4b36872c64110d5400201fc7c0bf60d9559518a9a3237710009221a146\"" May 8 00:14:47.924031 systemd[1]: Started cri-containerd-7592be4b36872c64110d5400201fc7c0bf60d9559518a9a3237710009221a146.scope - libcontainer container 7592be4b36872c64110d5400201fc7c0bf60d9559518a9a3237710009221a146. May 8 00:14:47.970214 containerd[1915]: time="2025-05-08T00:14:47.970155682Z" level=info msg="StartContainer for \"7592be4b36872c64110d5400201fc7c0bf60d9559518a9a3237710009221a146\" returns successfully" May 8 00:14:48.408068 systemd[1]: cri-containerd-7592be4b36872c64110d5400201fc7c0bf60d9559518a9a3237710009221a146.scope: Deactivated successfully. May 8 00:14:48.408984 systemd[1]: cri-containerd-7592be4b36872c64110d5400201fc7c0bf60d9559518a9a3237710009221a146.scope: Consumed 39ms CPU time, 17.8M memory peak, 10M read from disk, 6.3M written to disk. 
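The CreateContainer-within-sandbox / StartContainer pairs above (here for the flexvol-driver init container inside sandbox 5b74499d…) go through containerd's CRI gRPC API. A stripped-down create-and-start with the plain containerd client is sketched below for orientation; it does not join a pod sandbox's namespaces the way the CRI plugin does, the image reference is a placeholder, and the container ID is made up.

```go
// Illustrative only: create and start a container with the plain containerd
// client, roughly the lifecycle behind the "returns container id" and
// "StartContainer ... returns successfully" pairs. Unlike the CRI plugin, this
// does not attach the container to a pod sandbox. Image ref and ID are placeholders.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Assumes the image has already been pulled (see the pull sketch earlier).
	image, err := client.GetImage(ctx, "ghcr.io/flatcar/calico/cni:v3.29.3") // placeholder reference
	if err != nil {
		log.Fatal(err)
	}

	container, err := client.NewContainer(ctx, "flexvol-driver-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("flexvol-driver-demo-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil { // corresponds to the StartContainer step
		log.Fatal(err)
	}
	log.Printf("started %s", task.ID())
}
```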
May 8 00:14:48.538806 containerd[1915]: time="2025-05-08T00:14:48.538715838Z" level=info msg="shim disconnected" id=7592be4b36872c64110d5400201fc7c0bf60d9559518a9a3237710009221a146 namespace=k8s.io May 8 00:14:48.538806 containerd[1915]: time="2025-05-08T00:14:48.538802253Z" level=warning msg="cleaning up after shim disconnected" id=7592be4b36872c64110d5400201fc7c0bf60d9559518a9a3237710009221a146 namespace=k8s.io May 8 00:14:48.538806 containerd[1915]: time="2025-05-08T00:14:48.538819924Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:14:48.693637 containerd[1915]: time="2025-05-08T00:14:48.693368228Z" level=info msg="CreateContainer within sandbox \"5b74499d0a0262d957809a0848a9032caa512f42ed3c0bf8e21acb1bfcbcd24d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 8 00:14:48.727798 containerd[1915]: time="2025-05-08T00:14:48.727640679Z" level=info msg="CreateContainer within sandbox \"5b74499d0a0262d957809a0848a9032caa512f42ed3c0bf8e21acb1bfcbcd24d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"82c740d1181a9f25424a30a368dcf44baa00a390e30315a7619e962a0ab71590\"" May 8 00:14:48.730249 containerd[1915]: time="2025-05-08T00:14:48.730200803Z" level=info msg="StartContainer for \"82c740d1181a9f25424a30a368dcf44baa00a390e30315a7619e962a0ab71590\"" May 8 00:14:48.799039 systemd[1]: Started cri-containerd-82c740d1181a9f25424a30a368dcf44baa00a390e30315a7619e962a0ab71590.scope - libcontainer container 82c740d1181a9f25424a30a368dcf44baa00a390e30315a7619e962a0ab71590. May 8 00:14:48.885552 containerd[1915]: time="2025-05-08T00:14:48.885508995Z" level=info msg="StartContainer for \"82c740d1181a9f25424a30a368dcf44baa00a390e30315a7619e962a0ab71590\" returns successfully" May 8 00:14:49.163870 kubelet[3169]: I0508 00:14:49.163547 3169 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7" path="/var/lib/kubelet/pods/4b0e2cd4-769b-4e5f-9e77-ab11e14d99c7/volumes" May 8 00:14:49.600892 systemd[1]: Started sshd@24-172.31.16.158:22-139.178.68.195:53856.service - OpenSSH per-connection server daemon (139.178.68.195:53856). 
May 8 00:14:49.803112 containerd[1915]: time="2025-05-08T00:14:49.803068224Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:14:49.805088 containerd[1915]: time="2025-05-08T00:14:49.804988021Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 8 00:14:49.806012 containerd[1915]: time="2025-05-08T00:14:49.805952645Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:14:49.811117 sshd[5437]: Accepted publickey for core from 139.178.68.195 port 53856 ssh2: RSA SHA256:KzzWn6O+Z3VZj7W5xu29TBqYrCKq78VLDb+pogeWJHY May 8 00:14:49.811453 containerd[1915]: time="2025-05-08T00:14:49.810848422Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:14:49.812350 containerd[1915]: time="2025-05-08T00:14:49.811542803Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 2.771740208s" May 8 00:14:49.812350 containerd[1915]: time="2025-05-08T00:14:49.811572982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 8 00:14:49.813747 sshd-session[5437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:14:49.830487 containerd[1915]: time="2025-05-08T00:14:49.830293816Z" level=info msg="CreateContainer within sandbox \"ed7491dbaa43ba0e52d42a799f0b8aac5af33da18fe982ef5fd16a4495469a8d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 8 00:14:49.843833 containerd[1915]: time="2025-05-08T00:14:49.843342051Z" level=info msg="CreateContainer within sandbox \"ed7491dbaa43ba0e52d42a799f0b8aac5af33da18fe982ef5fd16a4495469a8d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d0425cc277dc62eac3a9099967a7d9e02a002483538a74cbf02fea6bbad352b8\"" May 8 00:14:49.847845 containerd[1915]: time="2025-05-08T00:14:49.847798596Z" level=info msg="StartContainer for \"d0425cc277dc62eac3a9099967a7d9e02a002483538a74cbf02fea6bbad352b8\"" May 8 00:14:49.853887 systemd-logind[1899]: New session 25 of user core. May 8 00:14:49.859018 systemd[1]: Started session-25.scope - Session 25 of User core. May 8 00:14:49.921179 systemd[1]: Started cri-containerd-d0425cc277dc62eac3a9099967a7d9e02a002483538a74cbf02fea6bbad352b8.scope - libcontainer container d0425cc277dc62eac3a9099967a7d9e02a002483538a74cbf02fea6bbad352b8. 
May 8 00:14:50.034382 containerd[1915]: time="2025-05-08T00:14:50.034336234Z" level=info msg="StartContainer for \"d0425cc277dc62eac3a9099967a7d9e02a002483538a74cbf02fea6bbad352b8\" returns successfully" May 8 00:14:50.652937 sshd[5449]: Connection closed by 139.178.68.195 port 53856 May 8 00:14:50.654910 sshd-session[5437]: pam_unix(sshd:session): session closed for user core May 8 00:14:50.663424 systemd[1]: sshd@24-172.31.16.158:22-139.178.68.195:53856.service: Deactivated successfully. May 8 00:14:50.666036 systemd[1]: session-25.scope: Deactivated successfully. May 8 00:14:50.667460 systemd-logind[1899]: Session 25 logged out. Waiting for processes to exit. May 8 00:14:50.670581 systemd-logind[1899]: Removed session 25. May 8 00:14:50.720304 kubelet[3169]: I0508 00:14:50.720241 3169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-599dbc9d76-xq8l5" podStartSLOduration=1.945705837 podStartE2EDuration="4.72022353s" podCreationTimestamp="2025-05-08 00:14:46 +0000 UTC" firstStartedPulling="2025-05-08 00:14:47.038624016 +0000 UTC m=+100.034847884" lastFinishedPulling="2025-05-08 00:14:49.813141709 +0000 UTC m=+102.809365577" observedRunningTime="2025-05-08 00:14:50.718964489 +0000 UTC m=+103.715188376" watchObservedRunningTime="2025-05-08 00:14:50.72022353 +0000 UTC m=+103.716447416" May 8 00:14:50.831048 systemd[1]: run-containerd-runc-k8s.io-d0425cc277dc62eac3a9099967a7d9e02a002483538a74cbf02fea6bbad352b8-runc.Ng2ln2.mount: Deactivated successfully. May 8 00:14:55.695162 systemd[1]: Started sshd@25-172.31.16.158:22-139.178.68.195:45682.service - OpenSSH per-connection server daemon (139.178.68.195:45682). May 8 00:14:55.899542 sshd[5501]: Accepted publickey for core from 139.178.68.195 port 45682 ssh2: RSA SHA256:KzzWn6O+Z3VZj7W5xu29TBqYrCKq78VLDb+pogeWJHY May 8 00:14:55.925502 sshd-session[5501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:14:55.931308 systemd-logind[1899]: New session 26 of user core. May 8 00:14:55.934020 systemd[1]: Started session-26.scope - Session 26 of User core. May 8 00:14:56.549183 sshd[5508]: Connection closed by 139.178.68.195 port 45682 May 8 00:14:56.550042 sshd-session[5501]: pam_unix(sshd:session): session closed for user core May 8 00:14:56.554039 systemd[1]: sshd@25-172.31.16.158:22-139.178.68.195:45682.service: Deactivated successfully. May 8 00:14:56.556243 systemd[1]: session-26.scope: Deactivated successfully. May 8 00:14:56.558296 systemd-logind[1899]: Session 26 logged out. Waiting for processes to exit. May 8 00:14:56.559422 systemd-logind[1899]: Removed session 26. May 8 00:14:58.671765 systemd[1]: cri-containerd-82c740d1181a9f25424a30a368dcf44baa00a390e30315a7619e962a0ab71590.scope: Deactivated successfully. May 8 00:14:58.672153 systemd[1]: cri-containerd-82c740d1181a9f25424a30a368dcf44baa00a390e30315a7619e962a0ab71590.scope: Consumed 1.027s CPU time, 268.5M memory peak, 292.1M read from disk. May 8 00:14:58.769242 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82c740d1181a9f25424a30a368dcf44baa00a390e30315a7619e962a0ab71590-rootfs.mount: Deactivated successfully. 
May 8 00:14:58.793951 containerd[1915]: time="2025-05-08T00:14:58.785494947Z" level=info msg="shim disconnected" id=82c740d1181a9f25424a30a368dcf44baa00a390e30315a7619e962a0ab71590 namespace=k8s.io May 8 00:14:58.793951 containerd[1915]: time="2025-05-08T00:14:58.785704981Z" level=warning msg="cleaning up after shim disconnected" id=82c740d1181a9f25424a30a368dcf44baa00a390e30315a7619e962a0ab71590 namespace=k8s.io May 8 00:14:58.793951 containerd[1915]: time="2025-05-08T00:14:58.785714964Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:14:59.750890 containerd[1915]: time="2025-05-08T00:14:59.749660574Z" level=info msg="CreateContainer within sandbox \"5b74499d0a0262d957809a0848a9032caa512f42ed3c0bf8e21acb1bfcbcd24d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 8 00:14:59.818723 containerd[1915]: time="2025-05-08T00:14:59.818672890Z" level=info msg="CreateContainer within sandbox \"5b74499d0a0262d957809a0848a9032caa512f42ed3c0bf8e21acb1bfcbcd24d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"47a2e4d428c053dec3b54b573f64a3118f353aa02b1bc00855b98194e864d53a\"" May 8 00:14:59.821050 containerd[1915]: time="2025-05-08T00:14:59.820137577Z" level=info msg="StartContainer for \"47a2e4d428c053dec3b54b573f64a3118f353aa02b1bc00855b98194e864d53a\"" May 8 00:14:59.856991 systemd[1]: Started cri-containerd-47a2e4d428c053dec3b54b573f64a3118f353aa02b1bc00855b98194e864d53a.scope - libcontainer container 47a2e4d428c053dec3b54b573f64a3118f353aa02b1bc00855b98194e864d53a. May 8 00:14:59.897999 containerd[1915]: time="2025-05-08T00:14:59.897957910Z" level=info msg="StartContainer for \"47a2e4d428c053dec3b54b573f64a3118f353aa02b1bc00855b98194e864d53a\" returns successfully" May 8 00:15:07.340004 containerd[1915]: time="2025-05-08T00:15:07.339762895Z" level=info msg="StopPodSandbox for \"025da8a645b63f58f16b3cac5c0286f626854588902dbcb318f5cd202782e842\"" May 8 00:15:07.340004 containerd[1915]: time="2025-05-08T00:15:07.339922855Z" level=info msg="TearDown network for sandbox \"025da8a645b63f58f16b3cac5c0286f626854588902dbcb318f5cd202782e842\" successfully" May 8 00:15:07.340004 containerd[1915]: time="2025-05-08T00:15:07.339936152Z" level=info msg="StopPodSandbox for \"025da8a645b63f58f16b3cac5c0286f626854588902dbcb318f5cd202782e842\" returns successfully" May 8 00:15:07.341063 containerd[1915]: time="2025-05-08T00:15:07.340259253Z" level=info msg="RemovePodSandbox for \"025da8a645b63f58f16b3cac5c0286f626854588902dbcb318f5cd202782e842\"" May 8 00:15:07.349393 containerd[1915]: time="2025-05-08T00:15:07.349315091Z" level=info msg="Forcibly stopping sandbox \"025da8a645b63f58f16b3cac5c0286f626854588902dbcb318f5cd202782e842\"" May 8 00:15:07.349545 containerd[1915]: time="2025-05-08T00:15:07.349456314Z" level=info msg="TearDown network for sandbox \"025da8a645b63f58f16b3cac5c0286f626854588902dbcb318f5cd202782e842\" successfully" May 8 00:15:07.363077 containerd[1915]: time="2025-05-08T00:15:07.363004164Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"025da8a645b63f58f16b3cac5c0286f626854588902dbcb318f5cd202782e842\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:15:07.363077 containerd[1915]: time="2025-05-08T00:15:07.363072832Z" level=info msg="RemovePodSandbox \"025da8a645b63f58f16b3cac5c0286f626854588902dbcb318f5cd202782e842\" returns successfully" May 8 00:15:07.364095 containerd[1915]: time="2025-05-08T00:15:07.363616946Z" level=info msg="StopPodSandbox for \"0d1e3762c5e3c7035a5e265ab340c7de72ec20da3f3d5ff816fd17f4f0909b7c\"" May 8 00:15:07.364095 containerd[1915]: time="2025-05-08T00:15:07.363715248Z" level=info msg="TearDown network for sandbox \"0d1e3762c5e3c7035a5e265ab340c7de72ec20da3f3d5ff816fd17f4f0909b7c\" successfully" May 8 00:15:07.364095 containerd[1915]: time="2025-05-08T00:15:07.363725978Z" level=info msg="StopPodSandbox for \"0d1e3762c5e3c7035a5e265ab340c7de72ec20da3f3d5ff816fd17f4f0909b7c\" returns successfully" May 8 00:15:07.364182 containerd[1915]: time="2025-05-08T00:15:07.364090151Z" level=info msg="RemovePodSandbox for \"0d1e3762c5e3c7035a5e265ab340c7de72ec20da3f3d5ff816fd17f4f0909b7c\"" May 8 00:15:07.364182 containerd[1915]: time="2025-05-08T00:15:07.364114334Z" level=info msg="Forcibly stopping sandbox \"0d1e3762c5e3c7035a5e265ab340c7de72ec20da3f3d5ff816fd17f4f0909b7c\"" May 8 00:15:07.364233 containerd[1915]: time="2025-05-08T00:15:07.364192288Z" level=info msg="TearDown network for sandbox \"0d1e3762c5e3c7035a5e265ab340c7de72ec20da3f3d5ff816fd17f4f0909b7c\" successfully" May 8 00:15:07.369877 containerd[1915]: time="2025-05-08T00:15:07.369829000Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0d1e3762c5e3c7035a5e265ab340c7de72ec20da3f3d5ff816fd17f4f0909b7c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:15:07.370051 containerd[1915]: time="2025-05-08T00:15:07.369889029Z" level=info msg="RemovePodSandbox \"0d1e3762c5e3c7035a5e265ab340c7de72ec20da3f3d5ff816fd17f4f0909b7c\" returns successfully" May 8 00:15:07.370327 containerd[1915]: time="2025-05-08T00:15:07.370301031Z" level=info msg="StopPodSandbox for \"d17b6c67d3d07a85458746b5b5d805908d998c3a3968fa68a82ec5156d14a9fb\"" May 8 00:15:07.370408 containerd[1915]: time="2025-05-08T00:15:07.370395420Z" level=info msg="TearDown network for sandbox \"d17b6c67d3d07a85458746b5b5d805908d998c3a3968fa68a82ec5156d14a9fb\" successfully" May 8 00:15:07.370441 containerd[1915]: time="2025-05-08T00:15:07.370406360Z" level=info msg="StopPodSandbox for \"d17b6c67d3d07a85458746b5b5d805908d998c3a3968fa68a82ec5156d14a9fb\" returns successfully" May 8 00:15:07.370894 containerd[1915]: time="2025-05-08T00:15:07.370863745Z" level=info msg="RemovePodSandbox for \"d17b6c67d3d07a85458746b5b5d805908d998c3a3968fa68a82ec5156d14a9fb\"" May 8 00:15:07.370966 containerd[1915]: time="2025-05-08T00:15:07.370895769Z" level=info msg="Forcibly stopping sandbox \"d17b6c67d3d07a85458746b5b5d805908d998c3a3968fa68a82ec5156d14a9fb\"" May 8 00:15:07.371009 containerd[1915]: time="2025-05-08T00:15:07.370970241Z" level=info msg="TearDown network for sandbox \"d17b6c67d3d07a85458746b5b5d805908d998c3a3968fa68a82ec5156d14a9fb\" successfully" May 8 00:15:07.376215 containerd[1915]: time="2025-05-08T00:15:07.376019789Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d17b6c67d3d07a85458746b5b5d805908d998c3a3968fa68a82ec5156d14a9fb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:15:07.376215 containerd[1915]: time="2025-05-08T00:15:07.376102706Z" level=info msg="RemovePodSandbox \"d17b6c67d3d07a85458746b5b5d805908d998c3a3968fa68a82ec5156d14a9fb\" returns successfully" May 8 00:15:07.376943 containerd[1915]: time="2025-05-08T00:15:07.376586373Z" level=info msg="StopPodSandbox for \"203f1abbcc88a38a71a49d46b70be67c248f5eb98596d9b661e4d568f3e2d868\"" May 8 00:15:07.376943 containerd[1915]: time="2025-05-08T00:15:07.376768360Z" level=info msg="TearDown network for sandbox \"203f1abbcc88a38a71a49d46b70be67c248f5eb98596d9b661e4d568f3e2d868\" successfully" May 8 00:15:07.376943 containerd[1915]: time="2025-05-08T00:15:07.376785866Z" level=info msg="StopPodSandbox for \"203f1abbcc88a38a71a49d46b70be67c248f5eb98596d9b661e4d568f3e2d868\" returns successfully" May 8 00:15:07.377216 containerd[1915]: time="2025-05-08T00:15:07.377187499Z" level=info msg="RemovePodSandbox for \"203f1abbcc88a38a71a49d46b70be67c248f5eb98596d9b661e4d568f3e2d868\"" May 8 00:15:07.377290 containerd[1915]: time="2025-05-08T00:15:07.377217065Z" level=info msg="Forcibly stopping sandbox \"203f1abbcc88a38a71a49d46b70be67c248f5eb98596d9b661e4d568f3e2d868\"" May 8 00:15:07.377441 containerd[1915]: time="2025-05-08T00:15:07.377323939Z" level=info msg="TearDown network for sandbox \"203f1abbcc88a38a71a49d46b70be67c248f5eb98596d9b661e4d568f3e2d868\" successfully" May 8 00:15:07.382344 containerd[1915]: time="2025-05-08T00:15:07.382301077Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"203f1abbcc88a38a71a49d46b70be67c248f5eb98596d9b661e4d568f3e2d868\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:15:07.382470 containerd[1915]: time="2025-05-08T00:15:07.382370045Z" level=info msg="RemovePodSandbox \"203f1abbcc88a38a71a49d46b70be67c248f5eb98596d9b661e4d568f3e2d868\" returns successfully" May 8 00:15:07.382960 containerd[1915]: time="2025-05-08T00:15:07.382929721Z" level=info msg="StopPodSandbox for \"e69de7e8d23ddc780eb48e3c2a2574229691227163f580ac5937ca29ef9e6f0d\"" May 8 00:15:07.393263 containerd[1915]: time="2025-05-08T00:15:07.393209323Z" level=info msg="TearDown network for sandbox \"e69de7e8d23ddc780eb48e3c2a2574229691227163f580ac5937ca29ef9e6f0d\" successfully" May 8 00:15:07.393263 containerd[1915]: time="2025-05-08T00:15:07.393247730Z" level=info msg="StopPodSandbox for \"e69de7e8d23ddc780eb48e3c2a2574229691227163f580ac5937ca29ef9e6f0d\" returns successfully" May 8 00:15:07.394483 containerd[1915]: time="2025-05-08T00:15:07.393833602Z" level=info msg="RemovePodSandbox for \"e69de7e8d23ddc780eb48e3c2a2574229691227163f580ac5937ca29ef9e6f0d\"" May 8 00:15:07.394483 containerd[1915]: time="2025-05-08T00:15:07.393866557Z" level=info msg="Forcibly stopping sandbox \"e69de7e8d23ddc780eb48e3c2a2574229691227163f580ac5937ca29ef9e6f0d\"" May 8 00:15:07.394483 containerd[1915]: time="2025-05-08T00:15:07.393921026Z" level=info msg="TearDown network for sandbox \"e69de7e8d23ddc780eb48e3c2a2574229691227163f580ac5937ca29ef9e6f0d\" successfully" May 8 00:15:07.399938 containerd[1915]: time="2025-05-08T00:15:07.399885860Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e69de7e8d23ddc780eb48e3c2a2574229691227163f580ac5937ca29ef9e6f0d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:15:07.400096 containerd[1915]: time="2025-05-08T00:15:07.399965234Z" level=info msg="RemovePodSandbox \"e69de7e8d23ddc780eb48e3c2a2574229691227163f580ac5937ca29ef9e6f0d\" returns successfully"
May 8 00:15:10.781699 systemd[1]: cri-containerd-ec0813b5f02cea9626e1074ba3c09f8a10ea7cb8488c6ffa13562280ea296be7.scope: Deactivated successfully.
May 8 00:15:10.783026 systemd[1]: cri-containerd-ec0813b5f02cea9626e1074ba3c09f8a10ea7cb8488c6ffa13562280ea296be7.scope: Consumed 3.594s CPU time, 88.5M memory peak, 50.2M read from disk.
May 8 00:15:10.815498 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec0813b5f02cea9626e1074ba3c09f8a10ea7cb8488c6ffa13562280ea296be7-rootfs.mount: Deactivated successfully.
May 8 00:15:10.815926 containerd[1915]: time="2025-05-08T00:15:10.815881393Z" level=info msg="shim disconnected" id=ec0813b5f02cea9626e1074ba3c09f8a10ea7cb8488c6ffa13562280ea296be7 namespace=k8s.io
May 8 00:15:10.816348 containerd[1915]: time="2025-05-08T00:15:10.816302267Z" level=warning msg="cleaning up after shim disconnected" id=ec0813b5f02cea9626e1074ba3c09f8a10ea7cb8488c6ffa13562280ea296be7 namespace=k8s.io
May 8 00:15:10.816419 containerd[1915]: time="2025-05-08T00:15:10.816407391Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:15:10.830481 containerd[1915]: time="2025-05-08T00:15:10.830399514Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:15:10Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 8 00:15:11.786854 kubelet[3169]: I0508 00:15:11.786777 3169 scope.go:117] "RemoveContainer" containerID="ec0813b5f02cea9626e1074ba3c09f8a10ea7cb8488c6ffa13562280ea296be7"
May 8 00:15:11.794767 containerd[1915]: time="2025-05-08T00:15:11.794729379Z" level=info msg="CreateContainer within sandbox \"3c83a83ccb5ed07460f38cd7defde497a1c626a73da9b640346752376f863363\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
May 8 00:15:11.837503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2966604341.mount: Deactivated successfully.
May 8 00:15:11.842865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount326580036.mount: Deactivated successfully.
May 8 00:15:11.853721 containerd[1915]: time="2025-05-08T00:15:11.853654234Z" level=info msg="CreateContainer within sandbox \"3c83a83ccb5ed07460f38cd7defde497a1c626a73da9b640346752376f863363\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"1f6a5e2165b67d4425db550db5b20888fc80b156c5c6843c5303c5fe47445f2a\""
May 8 00:15:11.854259 containerd[1915]: time="2025-05-08T00:15:11.854230429Z" level=info msg="StartContainer for \"1f6a5e2165b67d4425db550db5b20888fc80b156c5c6843c5303c5fe47445f2a\""
May 8 00:15:11.894442 systemd[1]: run-containerd-runc-k8s.io-1f6a5e2165b67d4425db550db5b20888fc80b156c5c6843c5303c5fe47445f2a-runc.PmGwP4.mount: Deactivated successfully.
May 8 00:15:11.907294 systemd[1]: Started cri-containerd-1f6a5e2165b67d4425db550db5b20888fc80b156c5c6843c5303c5fe47445f2a.scope - libcontainer container 1f6a5e2165b67d4425db550db5b20888fc80b156c5c6843c5303c5fe47445f2a.
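When a cri-containerd scope is deactivated, systemd logs a one-line resource summary in the form shown above ("Consumed 3.594s CPU time, 88.5M memory peak, 50.2M read from disk"). A small Python sketch to pull those figures out of an excerpt like this one (illustrative only; the regex and helper name are assumptions, and the "M" suffix is reproduced as printed by systemd rather than converted to bytes):

import re

# Matches the systemd scope-deactivation accounting lines shown above.
SCOPE_RE = re.compile(
    r'cri-containerd-([0-9a-f]+)\.scope: Consumed ([\d.]+)s CPU time, '
    r'([\d.]+)M memory peak, ([\d.]+)M read from disk'
)

def scope_usage(journal_text: str):
    """Yield (container_id, cpu_seconds, mem_peak, disk_read) per deactivated scope."""
    for m in SCOPE_RE.finditer(journal_text):
        cid, cpu, mem, disk = m.groups()
        yield cid, float(cpu), float(mem), float(disk)

For the excerpt above this yields 3.594 s CPU, 88.5M peak memory, and 50.2M read from disk for the exited kube-controller-manager container ec0813b5….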
May 8 00:15:11.960870 containerd[1915]: time="2025-05-08T00:15:11.960703283Z" level=info msg="StartContainer for \"1f6a5e2165b67d4425db550db5b20888fc80b156c5c6843c5303c5fe47445f2a\" returns successfully"
May 8 00:15:12.142485 systemd[1]: cri-containerd-466d0753fe545df3f7298ed0601cebd6a5643e007c39f085e214d56558b3d496.scope: Deactivated successfully.
May 8 00:15:12.142770 systemd[1]: cri-containerd-466d0753fe545df3f7298ed0601cebd6a5643e007c39f085e214d56558b3d496.scope: Consumed 3.216s CPU time, 62.7M memory peak, 26.8M read from disk.
May 8 00:15:12.166230 containerd[1915]: time="2025-05-08T00:15:12.166159196Z" level=info msg="shim disconnected" id=466d0753fe545df3f7298ed0601cebd6a5643e007c39f085e214d56558b3d496 namespace=k8s.io
May 8 00:15:12.166230 containerd[1915]: time="2025-05-08T00:15:12.166222853Z" level=warning msg="cleaning up after shim disconnected" id=466d0753fe545df3f7298ed0601cebd6a5643e007c39f085e214d56558b3d496 namespace=k8s.io
May 8 00:15:12.166230 containerd[1915]: time="2025-05-08T00:15:12.166235410Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:15:12.786383 kubelet[3169]: I0508 00:15:12.786352 3169 scope.go:117] "RemoveContainer" containerID="466d0753fe545df3f7298ed0601cebd6a5643e007c39f085e214d56558b3d496"
May 8 00:15:12.789112 containerd[1915]: time="2025-05-08T00:15:12.789070194Z" level=info msg="CreateContainer within sandbox \"ff56a4c4a62c46d9e36a19904e1c25b9e80b5d07c04fe63eb4769fa56db731ae\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
May 8 00:15:12.808773 containerd[1915]: time="2025-05-08T00:15:12.808552095Z" level=info msg="CreateContainer within sandbox \"ff56a4c4a62c46d9e36a19904e1c25b9e80b5d07c04fe63eb4769fa56db731ae\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"cff0a706a91ef6abd0d4b8ceb78bf2489c43b5615116954abc39e7b130827419\""
May 8 00:15:12.809397 containerd[1915]: time="2025-05-08T00:15:12.809371070Z" level=info msg="StartContainer for \"cff0a706a91ef6abd0d4b8ceb78bf2489c43b5615116954abc39e7b130827419\""
May 8 00:15:12.842738 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-466d0753fe545df3f7298ed0601cebd6a5643e007c39f085e214d56558b3d496-rootfs.mount: Deactivated successfully.
May 8 00:15:12.854995 systemd[1]: Started cri-containerd-cff0a706a91ef6abd0d4b8ceb78bf2489c43b5615116954abc39e7b130827419.scope - libcontainer container cff0a706a91ef6abd0d4b8ceb78bf2489c43b5615116954abc39e7b130827419.
May 8 00:15:12.911173 containerd[1915]: time="2025-05-08T00:15:12.911120077Z" level=info msg="StartContainer for \"cff0a706a91ef6abd0d4b8ceb78bf2489c43b5615116954abc39e7b130827419\" returns successfully"
May 8 00:15:16.922433 systemd[1]: cri-containerd-dd85b75b590de0210dbd0bd99416c4961ca7216d70015234708ac41cdb32b889.scope: Deactivated successfully.
May 8 00:15:16.923223 systemd[1]: cri-containerd-dd85b75b590de0210dbd0bd99416c4961ca7216d70015234708ac41cdb32b889.scope: Consumed 2.464s CPU time, 34.8M memory peak, 24.9M read from disk.
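The same crash-and-replace pattern repeats here for tigera-operator and then kube-scheduler: a scope is deactivated, the shim disconnects, and a replacement container is started roughly one to two seconds later. A Python sketch that measures that gap from the journald time-of-day prefixes above (illustrative only; the regexes, helper names, and the assumption that the next successful StartContainer follows its own deactivation are mine, not the log's):

import re
from datetime import datetime

# Time-of-day prefix as it appears in this journal excerpt (year/month ignored).
TS = r'May\s+8 (\d{2}:\d{2}:\d{2}\.\d+)'
DEACT_RE = re.compile(TS + r' systemd\[1\]: cri-containerd-[0-9a-f]+\.scope: Deactivated successfully\.')
START_RE = re.compile(TS + r' containerd\[\d+\]: .*StartContainer for \\"[0-9a-f]+\\" returns successfully')

def _t(s: str) -> datetime:
    return datetime.strptime(s, "%H:%M:%S.%f")

def restart_gaps(journal_text: str):
    """Yield seconds between each scope deactivation and the next successful StartContainer."""
    deacts = [_t(m.group(1)) for m in DEACT_RE.finditer(journal_text)]
    starts = [_t(m.group(1)) for m in START_RE.finditer(journal_text)]
    for d in deacts:
        nxt = next((s for s in starts if s > d), None)
        if nxt is not None:
            yield (nxt - d).total_seconds()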
May 8 00:15:16.948667 containerd[1915]: time="2025-05-08T00:15:16.948371641Z" level=info msg="shim disconnected" id=dd85b75b590de0210dbd0bd99416c4961ca7216d70015234708ac41cdb32b889 namespace=k8s.io
May 8 00:15:16.948667 containerd[1915]: time="2025-05-08T00:15:16.948514498Z" level=warning msg="cleaning up after shim disconnected" id=dd85b75b590de0210dbd0bd99416c4961ca7216d70015234708ac41cdb32b889 namespace=k8s.io
May 8 00:15:16.949895 containerd[1915]: time="2025-05-08T00:15:16.949253905Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:15:16.953313 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd85b75b590de0210dbd0bd99416c4961ca7216d70015234708ac41cdb32b889-rootfs.mount: Deactivated successfully.
May 8 00:15:17.799465 kubelet[3169]: I0508 00:15:17.799424 3169 scope.go:117] "RemoveContainer" containerID="dd85b75b590de0210dbd0bd99416c4961ca7216d70015234708ac41cdb32b889"
May 8 00:15:17.802161 containerd[1915]: time="2025-05-08T00:15:17.802110616Z" level=info msg="CreateContainer within sandbox \"8fe88a58f00aaca9d12b65cb07730efe96156fe91377347780b73b1574ad80e7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
May 8 00:15:17.828703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2344615824.mount: Deactivated successfully.
May 8 00:15:17.833738 containerd[1915]: time="2025-05-08T00:15:17.833686698Z" level=info msg="CreateContainer within sandbox \"8fe88a58f00aaca9d12b65cb07730efe96156fe91377347780b73b1574ad80e7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"3ef85180f8ab28343d34f539b642510dd9f43a877984839d900afc93581c26c3\""
May 8 00:15:17.834739 containerd[1915]: time="2025-05-08T00:15:17.834706728Z" level=info msg="StartContainer for \"3ef85180f8ab28343d34f539b642510dd9f43a877984839d900afc93581c26c3\""
May 8 00:15:17.879055 systemd[1]: Started cri-containerd-3ef85180f8ab28343d34f539b642510dd9f43a877984839d900afc93581c26c3.scope - libcontainer container 3ef85180f8ab28343d34f539b642510dd9f43a877984839d900afc93581c26c3.
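Across the entries above, the kubelet restarts three control-plane containers in place: each exited container ID is removed ("RemoveContainer"), then containerd creates and starts a replacement in the same pod sandbox with Attempt:1 (kube-controller-manager, tigera-operator, kube-scheduler). A Python sketch that pairs those events from this kind of excerpt (illustrative only; the regexes, function name, and the positional old-to-new pairing are assumptions based on the ordering seen here):

import re

REMOVE_RE = re.compile(r'"RemoveContainer" containerID="([0-9a-f]+)"')
CREATE_RE = re.compile(
    r'CreateContainer within sandbox \\"([0-9a-f]+)\\" for '
    r'&ContainerMetadata\{Name:([\w-]+),Attempt:(\d+),\} returns container id \\"([0-9a-f]+)\\"'
)
START_OK_RE = re.compile(r'StartContainer for \\"([0-9a-f]+)\\" returns successfully')

def restart_cycles(journal_text: str):
    """Return (name, attempt, old_id, new_id, started) per restart, in log order."""
    removed = REMOVE_RE.findall(journal_text)
    started = set(START_OK_RE.findall(journal_text))
    cycles = []
    for i, m in enumerate(CREATE_RE.finditer(journal_text)):
        _sandbox, name, attempt, new_id = m.groups()
        old_id = removed[i] if i < len(removed) else None  # relies on log ordering
        cycles.append((name, int(attempt), old_id, new_id, new_id in started))
    return cycles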
May 8 00:15:17.929890 containerd[1915]: time="2025-05-08T00:15:17.929831852Z" level=info msg="StartContainer for \"3ef85180f8ab28343d34f539b642510dd9f43a877984839d900afc93581c26c3\" returns successfully"
May 8 00:15:19.581605 kubelet[3169]: E0508 00:15:19.581522 3169 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-158?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
May 8 00:15:19.762964 kubelet[3169]: E0508 00:15:19.762836 3169 kubelet_node_status.go:549] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-05-08T00:15:09Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-05-08T00:15:09Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-05-08T00:15:09Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-05-08T00:15:09Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\\\",\\\"ghcr.io/flatcar/calico/node:v3.29.3\\\"],\\\"sizeBytes\\\":144068610},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\\\",\\\"ghcr.io/flatcar/calico/cni:v3.29.3\\\"],\\\"sizeBytes\\\":99286305},{\\\"names\\\":[\\\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\\\",\\\"registry.k8s.io/etcd:3.5.16-0\\\"],\\\"sizeBytes\\\":57680541},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\\\",\\\"ghcr.io/flatcar/calico/apiserver:v3.29.3\\\"],\\\"sizeBytes\\\":44514075},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\\\",\\\"ghcr.io/flatcar/calico/typha:v3.29.3\\\"],\\\"sizeBytes\\\":31919484},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\\\",\\\"registry.k8s.io/kube-proxy:v1.32.4\\\"],\\\"sizeBytes\\\":30916875},{\\\"names\\\":[\\\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\\\",\\\"registry.k8s.io/kube-apiserver:v1.32.4\\\"],\\\"sizeBytes\\\":28679679},{\\\"names\\\":[\\\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\\\",\\\"registry.k8s.io/kube-controller-manager:v1.32.4\\\"],\\\"sizeBytes\\\":26267962},{\\\"names\\\":[\\\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\\\",\\\"quay.io/tigera/operator:v1.36.7\\\"],\\\"sizeBytes\\\":21998657},{\\\"names\\\":[\\\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\\\",\\\"registry.k8s.io/kube-scheduler:v1.32.4\\\"],\\\"sizeBytes\\\":20658329},{\\\"names\\\":[\\\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\\\",\\\"registry.k8s.io/coredns/coredns:v1.11.3\\\"],\\\"sizeBytes\\\":18562039},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\\\",\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\\\"],\\\"sizeBytes\\\":15484347},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\\\",\\\"ghcr.io/flatcar/calico/csi:v3.29.3\\\"],\\\"sizeBytes\\\":9405520},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\\\",\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\\\"],\\\"sizeBytes\\\":6859519},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\\\",\\\"registry.k8s.io/pause:3.10\\\"],\\\"sizeBytes\\\":320368},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":311286}]}}\" for node \"ip-172-31-16-158\": Patch \"https://172.31.16.158:6443/api/v1/nodes/ip-172-31-16-158/status?timeout=10s\": context deadline exceeded"
May 8 00:15:29.583094 kubelet[3169]: E0508 00:15:29.582782 3169 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-158?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
May 8 00:15:29.782407 kubelet[3169]: E0508 00:15:29.782360 3169 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ip-172-31-16-158\": Get \"https://172.31.16.158:6443/api/v1/nodes/ip-172-31-16-158?timeout=10s\": context deadline exceeded"
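From 00:15:19 onward the kubelet can no longer reach the API server at https://172.31.16.158:6443: the lease renewal PUT and the node-status PATCH/GET all fail with the 10-second client timeout or "context deadline exceeded", and the kubelet keeps retrying at roughly ten-second intervals. A Python sketch that tallies these failures from an excerpt like the one above (illustrative only; the regex, the helper name, and the port-based path split are assumptions tied to the host:port shown in this log):

import re
from collections import Counter

# Matches the 'Put/Patch/Get \"https://...\": <error>' fragments inside kubelet err="..." fields.
CALL_RE = re.compile(r'(Put|Patch|Get) \\"(https://[^"\\]+)\\": ([^"\\]+)')

def api_failures(journal_text: str) -> Counter:
    """Count (HTTP verb, API path, error text) triples from kubelet error lines."""
    counts = Counter()
    for verb, url, err in CALL_RE.findall(journal_text):
        path = url.split("6443", 1)[-1]  # keep only the API path after the :6443 port shown here
        counts[(verb, path, err.strip())] += 1
    return counts

For this excerpt, such a tally would show two timed-out lease PUTs against /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-158 and two failed node-status calls against /api/v1/nodes/ip-172-31-16-158.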