May 9 00:15:04.964269 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu May 8 22:21:52 -00 2025 May 9 00:15:04.964410 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8e6c4805303143bfaf51e786bee05d9a5466809f675df313b1f69aaa84c2d4ce May 9 00:15:04.964429 kernel: BIOS-provided physical RAM map: May 9 00:15:04.964440 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable May 9 00:15:04.964450 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable May 9 00:15:04.964461 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved May 9 00:15:04.964472 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data May 9 00:15:04.964483 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS May 9 00:15:04.964497 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable May 9 00:15:04.964516 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved May 9 00:15:04.964526 kernel: NX (Execute Disable) protection: active May 9 00:15:04.964535 kernel: APIC: Static calls initialized May 9 00:15:04.964546 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable May 9 00:15:04.964557 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable May 9 00:15:04.964570 kernel: extended physical RAM map: May 9 00:15:04.964585 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable May 9 00:15:04.964598 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] 
usable May 9 00:15:04.964610 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable May 9 00:15:04.964621 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable May 9 00:15:04.964632 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved May 9 00:15:04.964645 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data May 9 00:15:04.964657 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS May 9 00:15:04.964670 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable May 9 00:15:04.964683 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved May 9 00:15:04.964696 kernel: efi: EFI v2.7 by EDK II May 9 00:15:04.964707 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518 May 9 00:15:04.964724 kernel: secureboot: Secure boot disabled May 9 00:15:04.964738 kernel: SMBIOS 2.7 present. May 9 00:15:04.964752 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 May 9 00:15:04.964765 kernel: Hypervisor detected: KVM May 9 00:15:04.964779 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 9 00:15:04.964793 kernel: kvm-clock: using sched offset of 5108262698 cycles May 9 00:15:04.964808 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 9 00:15:04.964823 kernel: tsc: Detected 2499.994 MHz processor May 9 00:15:04.964837 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 9 00:15:04.964851 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 9 00:15:04.964868 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 May 9 00:15:04.964882 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs May 9 00:15:04.964896 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 9 00:15:04.964911 kernel: Using GB pages for direct mapping May 9 
00:15:04.964931 kernel: ACPI: Early table checksum verification disabled May 9 00:15:04.964946 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) May 9 00:15:04.964962 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) May 9 00:15:04.964981 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) May 9 00:15:04.964996 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) May 9 00:15:04.965010 kernel: ACPI: FACS 0x00000000789D0000 000040 May 9 00:15:04.965026 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) May 9 00:15:04.965041 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) May 9 00:15:04.965056 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) May 9 00:15:04.965072 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) May 9 00:15:04.965091 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) May 9 00:15:04.965105 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) May 9 00:15:04.965119 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) May 9 00:15:04.965132 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) May 9 00:15:04.965145 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] May 9 00:15:04.965160 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] May 9 00:15:04.965467 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] May 9 00:15:04.965483 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] May 9 00:15:04.965497 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] May 9 00:15:04.965517 kernel: ACPI: Reserving APIC table memory at [mem 
0x78959000-0x78959075] May 9 00:15:04.965531 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] May 9 00:15:04.965545 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] May 9 00:15:04.965560 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] May 9 00:15:04.965575 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] May 9 00:15:04.965588 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] May 9 00:15:04.965603 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 May 9 00:15:04.965617 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 May 9 00:15:04.965632 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] May 9 00:15:04.965650 kernel: NUMA: Initialized distance table, cnt=1 May 9 00:15:04.965663 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff] May 9 00:15:04.965677 kernel: Zone ranges: May 9 00:15:04.965692 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 9 00:15:04.965706 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] May 9 00:15:04.965720 kernel: Normal empty May 9 00:15:04.965735 kernel: Movable zone start for each node May 9 00:15:04.965749 kernel: Early memory node ranges May 9 00:15:04.965763 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] May 9 00:15:04.965778 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] May 9 00:15:04.965795 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] May 9 00:15:04.965809 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] May 9 00:15:04.965824 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 9 00:15:04.965838 kernel: On node 0, zone DMA: 96 pages in unavailable ranges May 9 00:15:04.965852 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges May 9 00:15:04.965867 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges May 9 00:15:04.965881 kernel: ACPI: PM-Timer IO Port: 0xb008 May 9 00:15:04.965896 kernel: ACPI: 
LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 9 00:15:04.965910 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 May 9 00:15:04.965927 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 9 00:15:04.965942 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 9 00:15:04.965956 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 9 00:15:04.965970 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 9 00:15:04.965984 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 9 00:15:04.965999 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 9 00:15:04.966013 kernel: TSC deadline timer available May 9 00:15:04.966026 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 9 00:15:04.966040 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 9 00:15:04.966058 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices May 9 00:15:04.966070 kernel: Booting paravirtualized kernel on KVM May 9 00:15:04.966083 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 9 00:15:04.966097 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 May 9 00:15:04.966111 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 May 9 00:15:04.966125 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 May 9 00:15:04.966139 kernel: pcpu-alloc: [0] 0 1 May 9 00:15:04.966155 kernel: kvm-guest: PV spinlocks enabled May 9 00:15:04.966189 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 9 00:15:04.966207 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected 
flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8e6c4805303143bfaf51e786bee05d9a5466809f675df313b1f69aaa84c2d4ce May 9 00:15:04.966221 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 9 00:15:04.966234 kernel: random: crng init done May 9 00:15:04.966246 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 9 00:15:04.966260 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 9 00:15:04.966273 kernel: Fallback order for Node 0: 0 May 9 00:15:04.966286 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318 May 9 00:15:04.966299 kernel: Policy zone: DMA32 May 9 00:15:04.966315 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 9 00:15:04.966329 kernel: Memory: 1874584K/2037804K available (12288K kernel code, 2295K rwdata, 22752K rodata, 43000K init, 2192K bss, 162964K reserved, 0K cma-reserved) May 9 00:15:04.966342 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 9 00:15:04.966355 kernel: Kernel/User page tables isolation: enabled May 9 00:15:04.966369 kernel: ftrace: allocating 37946 entries in 149 pages May 9 00:15:04.966394 kernel: ftrace: allocated 149 pages with 4 groups May 9 00:15:04.966412 kernel: Dynamic Preempt: voluntary May 9 00:15:04.966426 kernel: rcu: Preemptible hierarchical RCU implementation. May 9 00:15:04.966442 kernel: rcu: RCU event tracing is enabled. May 9 00:15:04.966456 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 9 00:15:04.966471 kernel: Trampoline variant of Tasks RCU enabled. May 9 00:15:04.966488 kernel: Rude variant of Tasks RCU enabled. May 9 00:15:04.966503 kernel: Tracing variant of Tasks RCU enabled. May 9 00:15:04.966517 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 9 00:15:04.966532 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 9 00:15:04.966547 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 9 00:15:04.966562 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 9 00:15:04.966580 kernel: Console: colour dummy device 80x25 May 9 00:15:04.966595 kernel: printk: console [tty0] enabled May 9 00:15:04.966610 kernel: printk: console [ttyS0] enabled May 9 00:15:04.966625 kernel: ACPI: Core revision 20230628 May 9 00:15:04.966640 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns May 9 00:15:04.966655 kernel: APIC: Switch to symmetric I/O mode setup May 9 00:15:04.966670 kernel: x2apic enabled May 9 00:15:04.966684 kernel: APIC: Switched APIC routing to: physical x2apic May 9 00:15:04.966700 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns May 9 00:15:04.966718 kernel: Calibrating delay loop (skipped) preset value.. 4999.98 BogoMIPS (lpj=2499994) May 9 00:15:04.966734 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 May 9 00:15:04.966749 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 May 9 00:15:04.966764 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 9 00:15:04.966779 kernel: Spectre V2 : Mitigation: Retpolines May 9 00:15:04.966794 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 9 00:15:04.966809 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
May 9 00:15:04.966825 kernel: RETBleed: Vulnerable May 9 00:15:04.966840 kernel: Speculative Store Bypass: Vulnerable May 9 00:15:04.966855 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode May 9 00:15:04.966873 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode May 9 00:15:04.966888 kernel: GDS: Unknown: Dependent on hypervisor status May 9 00:15:04.966903 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 9 00:15:04.966918 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 9 00:15:04.966933 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 9 00:15:04.966949 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' May 9 00:15:04.966964 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' May 9 00:15:04.966980 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' May 9 00:15:04.966995 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' May 9 00:15:04.967011 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' May 9 00:15:04.967029 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' May 9 00:15:04.967044 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 9 00:15:04.967060 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 May 9 00:15:04.967073 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 May 9 00:15:04.967087 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 May 9 00:15:04.967101 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 May 9 00:15:04.967115 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 May 9 00:15:04.967129 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 May 9 00:15:04.967143 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
May 9 00:15:04.967160 kernel: Freeing SMP alternatives memory: 32K May 9 00:15:04.967198 kernel: pid_max: default: 32768 minimum: 301 May 9 00:15:04.967213 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 9 00:15:04.967231 kernel: landlock: Up and running. May 9 00:15:04.967245 kernel: SELinux: Initializing. May 9 00:15:04.967261 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 9 00:15:04.967275 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 9 00:15:04.967290 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) May 9 00:15:04.967304 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 9 00:15:04.967319 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 9 00:15:04.967334 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 9 00:15:04.967348 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. May 9 00:15:04.967363 kernel: signal: max sigframe size: 3632 May 9 00:15:04.967381 kernel: rcu: Hierarchical SRCU implementation. May 9 00:15:04.967396 kernel: rcu: Max phase no-delay instances is 400. May 9 00:15:04.967412 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 9 00:15:04.967427 kernel: smp: Bringing up secondary CPUs ... May 9 00:15:04.967440 kernel: smpboot: x86: Booting SMP configuration: May 9 00:15:04.967455 kernel: .... node #0, CPUs: #1 May 9 00:15:04.967469 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. May 9 00:15:04.967485 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. May 9 00:15:04.967504 kernel: smp: Brought up 1 node, 2 CPUs May 9 00:15:04.967519 kernel: smpboot: Max logical packages: 1 May 9 00:15:04.967534 kernel: smpboot: Total of 2 processors activated (9999.97 BogoMIPS) May 9 00:15:04.967549 kernel: devtmpfs: initialized May 9 00:15:04.967565 kernel: x86/mm: Memory block size: 128MB May 9 00:15:04.967581 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) May 9 00:15:04.967596 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 9 00:15:04.967611 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 9 00:15:04.967627 kernel: pinctrl core: initialized pinctrl subsystem May 9 00:15:04.967646 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 9 00:15:04.967661 kernel: audit: initializing netlink subsys (disabled) May 9 00:15:04.967676 kernel: audit: type=2000 audit(1746749704.902:1): state=initialized audit_enabled=0 res=1 May 9 00:15:04.967690 kernel: thermal_sys: Registered thermal governor 'step_wise' May 9 00:15:04.967706 kernel: thermal_sys: Registered thermal governor 'user_space' May 9 00:15:04.967722 kernel: cpuidle: using governor menu May 9 00:15:04.967738 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 9 00:15:04.967751 kernel: dca service started, version 1.12.1 May 9 00:15:04.967766 kernel: PCI: Using configuration type 1 for base access May 9 00:15:04.967784 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 9 00:15:04.967798 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 9 00:15:04.967813 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 9 00:15:04.967827 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 9 00:15:04.967843 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 9 00:15:04.967860 kernel: ACPI: Added _OSI(Module Device) May 9 00:15:04.967876 kernel: ACPI: Added _OSI(Processor Device) May 9 00:15:04.967893 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 9 00:15:04.967908 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 9 00:15:04.967926 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded May 9 00:15:04.967939 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 9 00:15:04.967954 kernel: ACPI: Interpreter enabled May 9 00:15:04.967967 kernel: ACPI: PM: (supports S0 S5) May 9 00:15:04.967980 kernel: ACPI: Using IOAPIC for interrupt routing May 9 00:15:04.967995 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 9 00:15:04.968009 kernel: PCI: Using E820 reservations for host bridge windows May 9 00:15:04.968022 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F May 9 00:15:04.968036 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 9 00:15:04.970263 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] May 9 00:15:04.970472 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] May 9 00:15:04.970616 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge May 9 00:15:04.970637 kernel: acpiphp: Slot [3] registered May 9 00:15:04.970652 kernel: acpiphp: Slot [4] registered May 9 00:15:04.970667 kernel: acpiphp: Slot [5] registered May 9 00:15:04.970683 kernel: acpiphp: Slot [6] registered May 9 00:15:04.970698 kernel: acpiphp: Slot [7] 
registered May 9 00:15:04.970719 kernel: acpiphp: Slot [8] registered May 9 00:15:04.970734 kernel: acpiphp: Slot [9] registered May 9 00:15:04.970750 kernel: acpiphp: Slot [10] registered May 9 00:15:04.970766 kernel: acpiphp: Slot [11] registered May 9 00:15:04.970782 kernel: acpiphp: Slot [12] registered May 9 00:15:04.970797 kernel: acpiphp: Slot [13] registered May 9 00:15:04.970812 kernel: acpiphp: Slot [14] registered May 9 00:15:04.970827 kernel: acpiphp: Slot [15] registered May 9 00:15:04.970843 kernel: acpiphp: Slot [16] registered May 9 00:15:04.970862 kernel: acpiphp: Slot [17] registered May 9 00:15:04.970877 kernel: acpiphp: Slot [18] registered May 9 00:15:04.970893 kernel: acpiphp: Slot [19] registered May 9 00:15:04.970908 kernel: acpiphp: Slot [20] registered May 9 00:15:04.970923 kernel: acpiphp: Slot [21] registered May 9 00:15:04.970938 kernel: acpiphp: Slot [22] registered May 9 00:15:04.970953 kernel: acpiphp: Slot [23] registered May 9 00:15:04.970969 kernel: acpiphp: Slot [24] registered May 9 00:15:04.970984 kernel: acpiphp: Slot [25] registered May 9 00:15:04.970999 kernel: acpiphp: Slot [26] registered May 9 00:15:04.971018 kernel: acpiphp: Slot [27] registered May 9 00:15:04.971033 kernel: acpiphp: Slot [28] registered May 9 00:15:04.971048 kernel: acpiphp: Slot [29] registered May 9 00:15:04.971064 kernel: acpiphp: Slot [30] registered May 9 00:15:04.971078 kernel: acpiphp: Slot [31] registered May 9 00:15:04.971094 kernel: PCI host bridge to bus 0000:00 May 9 00:15:04.971251 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 9 00:15:04.971371 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 9 00:15:04.971490 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 9 00:15:04.971657 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] May 9 00:15:04.971781 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] 
May 9 00:15:04.971900 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 9 00:15:04.972065 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 May 9 00:15:04.972229 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 May 9 00:15:04.972394 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 May 9 00:15:04.972532 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI May 9 00:15:04.972667 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff May 9 00:15:04.972803 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff May 9 00:15:04.972939 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff May 9 00:15:04.973076 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff May 9 00:15:04.975360 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff May 9 00:15:04.975535 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff May 9 00:15:04.975683 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 May 9 00:15:04.975822 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref] May 9 00:15:04.975959 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] May 9 00:15:04.976095 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb May 9 00:15:04.977304 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 9 00:15:04.977470 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 May 9 00:15:04.977618 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff] May 9 00:15:04.977765 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 May 9 00:15:04.977901 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff] May 9 00:15:04.977923 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 9 00:15:04.977940 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 9 00:15:04.977956 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 9 00:15:04.977973 kernel: ACPI: PCI: 
Interrupt link LNKD configured for IRQ 11 May 9 00:15:04.977990 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 May 9 00:15:04.978011 kernel: iommu: Default domain type: Translated May 9 00:15:04.978028 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 9 00:15:04.978044 kernel: efivars: Registered efivars operations May 9 00:15:04.978060 kernel: PCI: Using ACPI for IRQ routing May 9 00:15:04.978077 kernel: PCI: pci_cache_line_size set to 64 bytes May 9 00:15:04.978093 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff] May 9 00:15:04.978109 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] May 9 00:15:04.978125 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] May 9 00:15:04.981590 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device May 9 00:15:04.981768 kernel: pci 0000:00:03.0: vgaarb: bridge control possible May 9 00:15:04.981908 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 9 00:15:04.981930 kernel: vgaarb: loaded May 9 00:15:04.981948 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 May 9 00:15:04.981965 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter May 9 00:15:04.981982 kernel: clocksource: Switched to clocksource kvm-clock May 9 00:15:04.981999 kernel: VFS: Disk quotas dquot_6.6.0 May 9 00:15:04.982015 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 9 00:15:04.982036 kernel: pnp: PnP ACPI init May 9 00:15:04.982053 kernel: pnp: PnP ACPI: found 5 devices May 9 00:15:04.982070 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 9 00:15:04.982088 kernel: NET: Registered PF_INET protocol family May 9 00:15:04.982104 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 9 00:15:04.982121 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) May 9 00:15:04.982138 kernel: Table-perturb hash 
table entries: 65536 (order: 6, 262144 bytes, linear) May 9 00:15:04.982155 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 9 00:15:04.982184 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) May 9 00:15:04.982209 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 9 00:15:04.982228 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 9 00:15:04.982248 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) May 9 00:15:04.982267 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 9 00:15:04.982286 kernel: NET: Registered PF_XDP protocol family May 9 00:15:04.982451 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 9 00:15:04.982579 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 9 00:15:04.982697 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 9 00:15:04.982875 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] May 9 00:15:04.983042 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] May 9 00:15:04.983255 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 9 00:15:04.983278 kernel: PCI: CLS 0 bytes, default 64 May 9 00:15:04.983296 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 9 00:15:04.983314 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns May 9 00:15:04.983331 kernel: clocksource: Switched to clocksource tsc May 9 00:15:04.983347 kernel: Initialise system trusted keyrings May 9 00:15:04.983364 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 9 00:15:04.983389 kernel: Key type asymmetric registered May 9 00:15:04.983405 kernel: Asymmetric key parser 'x509' registered May 9 00:15:04.983422 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 9 00:15:04.983440 kernel: io 
scheduler mq-deadline registered May 9 00:15:04.983456 kernel: io scheduler kyber registered May 9 00:15:04.983473 kernel: io scheduler bfq registered May 9 00:15:04.983490 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 9 00:15:04.983506 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 9 00:15:04.983524 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 9 00:15:04.983546 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 9 00:15:04.983563 kernel: i8042: Warning: Keylock active May 9 00:15:04.983580 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 9 00:15:04.983597 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 9 00:15:04.983768 kernel: rtc_cmos 00:00: RTC can wake from S4 May 9 00:15:04.983902 kernel: rtc_cmos 00:00: registered as rtc0 May 9 00:15:04.984028 kernel: rtc_cmos 00:00: setting system clock to 2025-05-09T00:15:04 UTC (1746749704) May 9 00:15:04.984148 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram May 9 00:15:04.984186 kernel: intel_pstate: CPU model not supported May 9 00:15:04.984203 kernel: efifb: probing for efifb May 9 00:15:04.984218 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k May 9 00:15:04.984235 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 May 9 00:15:04.984274 kernel: efifb: scrolling: redraw May 9 00:15:04.984303 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 9 00:15:04.984320 kernel: Console: switching to colour frame buffer device 100x37 May 9 00:15:04.984338 kernel: fb0: EFI VGA frame buffer device May 9 00:15:04.984357 kernel: pstore: Using crash dump compression: deflate May 9 00:15:04.984371 kernel: pstore: Registered efi_pstore as persistent store backend May 9 00:15:04.984385 kernel: NET: Registered PF_INET6 protocol family May 9 00:15:04.984399 kernel: Segment Routing with IPv6 May 9 00:15:04.984414 kernel: In-situ OAM (IOAM) with IPv6 May 9 00:15:04.984431 
kernel: NET: Registered PF_PACKET protocol family May 9 00:15:04.984449 kernel: Key type dns_resolver registered May 9 00:15:04.984466 kernel: IPI shorthand broadcast: enabled May 9 00:15:04.984482 kernel: sched_clock: Marking stable (495002450, 145691604)->(752589782, -111895728) May 9 00:15:04.984499 kernel: registered taskstats version 1 May 9 00:15:04.984515 kernel: Loading compiled-in X.509 certificates May 9 00:15:04.984528 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: eadd5f695247828f81e51397e7264f8efd327b51' May 9 00:15:04.984543 kernel: Key type .fscrypt registered May 9 00:15:04.984557 kernel: Key type fscrypt-provisioning registered May 9 00:15:04.984573 kernel: ima: No TPM chip found, activating TPM-bypass! May 9 00:15:04.984590 kernel: ima: Allocated hash algorithm: sha1 May 9 00:15:04.984606 kernel: ima: No architecture policies found May 9 00:15:04.984624 kernel: clk: Disabling unused clocks May 9 00:15:04.984645 kernel: Freeing unused kernel image (initmem) memory: 43000K May 9 00:15:04.984662 kernel: Write protecting the kernel read-only data: 36864k May 9 00:15:04.984679 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K May 9 00:15:04.984694 kernel: Run /init as init process May 9 00:15:04.984711 kernel: with arguments: May 9 00:15:04.984729 kernel: /init May 9 00:15:04.984746 kernel: with environment: May 9 00:15:04.984763 kernel: HOME=/ May 9 00:15:04.984783 kernel: TERM=linux May 9 00:15:04.984805 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 9 00:15:04.984827 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 9 00:15:04.984848 systemd[1]: Detected virtualization amazon. 
May 9 00:15:04.984866 systemd[1]: Detected architecture x86-64.
May 9 00:15:04.984884 systemd[1]: Running in initrd.
May 9 00:15:04.984902 systemd[1]: No hostname configured, using default hostname.
May 9 00:15:04.984923 systemd[1]: Hostname set to .
May 9 00:15:04.984942 systemd[1]: Initializing machine ID from VM UUID.
May 9 00:15:04.984960 systemd[1]: Queued start job for default target initrd.target.
May 9 00:15:04.984978 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 00:15:04.984996 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 00:15:04.985016 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 9 00:15:04.985037 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 00:15:04.985056 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 9 00:15:04.985072 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 9 00:15:04.985093 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 9 00:15:04.985111 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 9 00:15:04.985130 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 00:15:04.985148 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 00:15:04.986205 systemd[1]: Reached target paths.target - Path Units.
May 9 00:15:04.986231 systemd[1]: Reached target slices.target - Slice Units.
May 9 00:15:04.986250 systemd[1]: Reached target swap.target - Swaps.
May 9 00:15:04.986268 systemd[1]: Reached target timers.target - Timer Units.
May 9 00:15:04.986287 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 9 00:15:04.986305 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 00:15:04.986324 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 9 00:15:04.986343 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 9 00:15:04.986361 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 00:15:04.986385 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 00:15:04.986403 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 00:15:04.986422 systemd[1]: Reached target sockets.target - Socket Units.
May 9 00:15:04.986440 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 9 00:15:04.986459 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 00:15:04.986477 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 9 00:15:04.986495 systemd[1]: Starting systemd-fsck-usr.service...
May 9 00:15:04.986514 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 00:15:04.986536 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 00:15:04.986553 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 00:15:04.986572 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 9 00:15:04.986590 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 00:15:04.986644 systemd-journald[179]: Collecting audit messages is disabled.
May 9 00:15:04.986689 systemd[1]: Finished systemd-fsck-usr.service.
May 9 00:15:04.986710 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 9 00:15:04.986730 systemd-journald[179]: Journal started
May 9 00:15:04.986771 systemd-journald[179]: Runtime Journal (/run/log/journal/ec2793b07ada48082315f78d1504225f) is 4.7M, max 38.2M, 33.4M free.
May 9 00:15:04.964554 systemd-modules-load[180]: Inserted module 'overlay'
May 9 00:15:04.996188 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 00:15:04.997005 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:15:05.008299 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 9 00:15:05.010912 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 00:15:05.021959 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 9 00:15:05.022009 kernel: Bridge firewalling registered
May 9 00:15:05.021040 systemd-modules-load[180]: Inserted module 'br_netfilter'
May 9 00:15:05.024024 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 00:15:05.027414 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 00:15:05.029070 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 00:15:05.040680 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 00:15:05.045129 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 00:15:05.058610 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 00:15:05.069824 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 9 00:15:05.070938 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 00:15:05.074066 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 00:15:05.084480 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 9 00:15:05.089225 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 00:15:05.103927 dracut-cmdline[210]: dracut-dracut-053
May 9 00:15:05.108809 dracut-cmdline[210]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8e6c4805303143bfaf51e786bee05d9a5466809f675df313b1f69aaa84c2d4ce
May 9 00:15:05.127882 systemd-resolved[213]: Positive Trust Anchors:
May 9 00:15:05.127901 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 9 00:15:05.127965 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 9 00:15:05.137925 systemd-resolved[213]: Defaulting to hostname 'linux'.
May 9 00:15:05.139291 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 9 00:15:05.140956 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 9 00:15:05.194200 kernel: SCSI subsystem initialized
May 9 00:15:05.204200 kernel: Loading iSCSI transport class v2.0-870.
May 9 00:15:05.216199 kernel: iscsi: registered transport (tcp)
May 9 00:15:05.238468 kernel: iscsi: registered transport (qla4xxx)
May 9 00:15:05.238555 kernel: QLogic iSCSI HBA Driver
May 9 00:15:05.277102 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 9 00:15:05.281398 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 9 00:15:05.308358 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 9 00:15:05.308438 kernel: device-mapper: uevent: version 1.0.3
May 9 00:15:05.309407 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 9 00:15:05.352200 kernel: raid6: avx512x4 gen() 18087 MB/s
May 9 00:15:05.370194 kernel: raid6: avx512x2 gen() 17689 MB/s
May 9 00:15:05.388194 kernel: raid6: avx512x1 gen() 17878 MB/s
May 9 00:15:05.406193 kernel: raid6: avx2x4 gen() 17772 MB/s
May 9 00:15:05.424190 kernel: raid6: avx2x2 gen() 17945 MB/s
May 9 00:15:05.442338 kernel: raid6: avx2x1 gen() 13866 MB/s
May 9 00:15:05.442394 kernel: raid6: using algorithm avx512x4 gen() 18087 MB/s
May 9 00:15:05.461528 kernel: raid6: .... xor() 7784 MB/s, rmw enabled
May 9 00:15:05.461594 kernel: raid6: using avx512x2 recovery algorithm
May 9 00:15:05.483204 kernel: xor: automatically using best checksumming function avx
May 9 00:15:05.646200 kernel: Btrfs loaded, zoned=no, fsverity=no
May 9 00:15:05.656705 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 9 00:15:05.662376 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 00:15:05.677581 systemd-udevd[397]: Using default interface naming scheme 'v255'.
May 9 00:15:05.682597 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 00:15:05.692660 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 9 00:15:05.709627 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation
May 9 00:15:05.738584 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 9 00:15:05.743408 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 9 00:15:05.794970 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 00:15:05.803474 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 9 00:15:05.834986 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 9 00:15:05.838741 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 9 00:15:05.840461 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 00:15:05.841161 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 9 00:15:05.850613 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 9 00:15:05.879118 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 9 00:15:05.902547 kernel: ena 0000:00:05.0: ENA device version: 0.10
May 9 00:15:05.902826 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
May 9 00:15:05.914034 kernel: cryptd: max_cpu_qlen set to 1000
May 9 00:15:05.914102 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
May 9 00:15:05.924216 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:9a:4e:4d:0b:11
May 9 00:15:05.932818 (udev-worker)[454]: Network interface NamePolicy= disabled on kernel command line.
May 9 00:15:05.936495 kernel: AVX2 version of gcm_enc/dec engaged.
May 9 00:15:05.936560 kernel: AES CTR mode by8 optimization enabled
May 9 00:15:05.938592 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 9 00:15:05.938918 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 00:15:05.942046 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 00:15:05.942599 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 00:15:05.943965 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:15:05.949224 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 9 00:15:05.957739 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 00:15:05.962223 kernel: nvme nvme0: pci function 0000:00:04.0
May 9 00:15:05.962458 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
May 9 00:15:05.971189 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 00:15:05.972022 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:15:05.981499 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 00:15:05.985184 kernel: nvme nvme0: 2/0/0 default/read/poll queues
May 9 00:15:05.994294 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 9 00:15:05.994366 kernel: GPT:9289727 != 16777215
May 9 00:15:05.994386 kernel: GPT:Alternate GPT header not at the end of the disk.
May 9 00:15:05.994406 kernel: GPT:9289727 != 16777215
May 9 00:15:05.994425 kernel: GPT: Use GNU Parted to correct GPT errors.
May 9 00:15:05.994444 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 9 00:15:06.009345 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:15:06.018398 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 00:15:06.034871 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 00:15:06.091979 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by (udev-worker) (454)
May 9 00:15:06.101222 kernel: BTRFS: device fsid cea98156-267a-4592-a459-5921031522cf devid 1 transid 40 /dev/nvme0n1p3 scanned by (udev-worker) (453)
May 9 00:15:06.168128 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
May 9 00:15:06.184894 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
May 9 00:15:06.196333 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
May 9 00:15:06.202307 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
May 9 00:15:06.202895 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
May 9 00:15:06.215456 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 9 00:15:06.223394 disk-uuid[630]: Primary Header is updated.
May 9 00:15:06.223394 disk-uuid[630]: Secondary Entries is updated.
May 9 00:15:06.223394 disk-uuid[630]: Secondary Header is updated.
May 9 00:15:06.229324 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 9 00:15:07.247189 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 9 00:15:07.247432 disk-uuid[631]: The operation has completed successfully.
May 9 00:15:07.379960 systemd[1]: disk-uuid.service: Deactivated successfully.
May 9 00:15:07.380086 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 9 00:15:07.401475 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 9 00:15:07.407293 sh[891]: Success
May 9 00:15:07.432202 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
May 9 00:15:07.542709 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 9 00:15:07.550444 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 9 00:15:07.554476 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 9 00:15:07.593314 kernel: BTRFS info (device dm-0): first mount of filesystem cea98156-267a-4592-a459-5921031522cf
May 9 00:15:07.593385 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 9 00:15:07.596480 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 9 00:15:07.596552 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 9 00:15:07.597856 kernel: BTRFS info (device dm-0): using free space tree
May 9 00:15:07.669203 kernel: BTRFS info (device dm-0): enabling ssd optimizations
May 9 00:15:07.692524 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 9 00:15:07.693927 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 9 00:15:07.700541 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 9 00:15:07.703321 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 9 00:15:07.730653 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 06eb5ada-09bb-4b72-a741-1d4e677346cf
May 9 00:15:07.730726 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
May 9 00:15:07.732591 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 9 00:15:07.741772 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 9 00:15:07.755519 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 06eb5ada-09bb-4b72-a741-1d4e677346cf
May 9 00:15:07.755067 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 9 00:15:07.762729 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 9 00:15:07.773453 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 9 00:15:07.809575 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 9 00:15:07.816467 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 9 00:15:07.851532 systemd-networkd[1083]: lo: Link UP
May 9 00:15:07.851546 systemd-networkd[1083]: lo: Gained carrier
May 9 00:15:07.855391 systemd-networkd[1083]: Enumeration completed
May 9 00:15:07.855536 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 9 00:15:07.856512 systemd-networkd[1083]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 00:15:07.856517 systemd-networkd[1083]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 9 00:15:07.859680 systemd[1]: Reached target network.target - Network.
May 9 00:15:07.861409 systemd-networkd[1083]: eth0: Link UP
May 9 00:15:07.861418 systemd-networkd[1083]: eth0: Gained carrier
May 9 00:15:07.861435 systemd-networkd[1083]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 00:15:07.875282 systemd-networkd[1083]: eth0: DHCPv4 address 172.31.22.98/20, gateway 172.31.16.1 acquired from 172.31.16.1
May 9 00:15:08.070412 ignition[1026]: Ignition 2.20.0
May 9 00:15:08.070423 ignition[1026]: Stage: fetch-offline
May 9 00:15:08.070598 ignition[1026]: no configs at "/usr/lib/ignition/base.d"
May 9 00:15:08.071876 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 9 00:15:08.070606 ignition[1026]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 9 00:15:08.070831 ignition[1026]: Ignition finished successfully
May 9 00:15:08.082483 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 9 00:15:08.096200 ignition[1093]: Ignition 2.20.0
May 9 00:15:08.096214 ignition[1093]: Stage: fetch
May 9 00:15:08.096787 ignition[1093]: no configs at "/usr/lib/ignition/base.d"
May 9 00:15:08.096800 ignition[1093]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 9 00:15:08.096920 ignition[1093]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 9 00:15:08.122767 ignition[1093]: PUT result: OK
May 9 00:15:08.125338 ignition[1093]: parsed url from cmdline: ""
May 9 00:15:08.125347 ignition[1093]: no config URL provided
May 9 00:15:08.125399 ignition[1093]: reading system config file "/usr/lib/ignition/user.ign"
May 9 00:15:08.125424 ignition[1093]: no config at "/usr/lib/ignition/user.ign"
May 9 00:15:08.125452 ignition[1093]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 9 00:15:08.130136 ignition[1093]: PUT result: OK
May 9 00:15:08.130233 ignition[1093]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
May 9 00:15:08.131426 ignition[1093]: GET result: OK
May 9 00:15:08.131614 ignition[1093]: parsing config with SHA512: 6819cdbce7118d7edda5667b8d742537b7ca5a018563213db9502812994f3b140470e3c2ec6ebc247b1a8d3bbc00bdf1dc2c4a6b7fe61d036549a0cb8c075e2d
May 9 00:15:08.136823 unknown[1093]: fetched base config from "system"
May 9 00:15:08.136838 unknown[1093]: fetched base config from "system"
May 9 00:15:08.138198 ignition[1093]: fetch: fetch complete
May 9 00:15:08.136847 unknown[1093]: fetched user config from "aws"
May 9 00:15:08.138212 ignition[1093]: fetch: fetch passed
May 9 00:15:08.140978 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 9 00:15:08.138279 ignition[1093]: Ignition finished successfully
May 9 00:15:08.147409 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 9 00:15:08.162978 ignition[1099]: Ignition 2.20.0
May 9 00:15:08.162992 ignition[1099]: Stage: kargs
May 9 00:15:08.163440 ignition[1099]: no configs at "/usr/lib/ignition/base.d"
May 9 00:15:08.163456 ignition[1099]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 9 00:15:08.163591 ignition[1099]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 9 00:15:08.164517 ignition[1099]: PUT result: OK
May 9 00:15:08.167628 ignition[1099]: kargs: kargs passed
May 9 00:15:08.167705 ignition[1099]: Ignition finished successfully
May 9 00:15:08.169557 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 9 00:15:08.174386 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 9 00:15:08.189919 ignition[1105]: Ignition 2.20.0
May 9 00:15:08.189933 ignition[1105]: Stage: disks
May 9 00:15:08.190390 ignition[1105]: no configs at "/usr/lib/ignition/base.d"
May 9 00:15:08.190404 ignition[1105]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 9 00:15:08.190524 ignition[1105]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 9 00:15:08.191528 ignition[1105]: PUT result: OK
May 9 00:15:08.194308 ignition[1105]: disks: disks passed
May 9 00:15:08.194382 ignition[1105]: Ignition finished successfully
May 9 00:15:08.195608 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 9 00:15:08.196757 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 9 00:15:08.197145 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 9 00:15:08.197712 systemd[1]: Reached target local-fs.target - Local File Systems.
May 9 00:15:08.198263 systemd[1]: Reached target sysinit.target - System Initialization.
May 9 00:15:08.198825 systemd[1]: Reached target basic.target - Basic System.
May 9 00:15:08.210467 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 9 00:15:08.251113 systemd-fsck[1114]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 9 00:15:08.254145 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 9 00:15:08.257340 systemd[1]: Mounting sysroot.mount - /sysroot...
May 9 00:15:08.366192 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 61492938-2ced-4ec2-b593-fc96fa0fefcc r/w with ordered data mode. Quota mode: none.
May 9 00:15:08.367244 systemd[1]: Mounted sysroot.mount - /sysroot.
May 9 00:15:08.368492 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 9 00:15:08.374348 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 00:15:08.377293 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 9 00:15:08.378600 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 9 00:15:08.378648 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 9 00:15:08.378674 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 9 00:15:08.385247 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 9 00:15:08.390389 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 9 00:15:08.396213 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1133)
May 9 00:15:08.400850 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 06eb5ada-09bb-4b72-a741-1d4e677346cf
May 9 00:15:08.400920 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
May 9 00:15:08.400942 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 9 00:15:08.408202 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 9 00:15:08.410120 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 9 00:15:08.705653 initrd-setup-root[1157]: cut: /sysroot/etc/passwd: No such file or directory
May 9 00:15:08.711587 initrd-setup-root[1164]: cut: /sysroot/etc/group: No such file or directory
May 9 00:15:08.716654 initrd-setup-root[1171]: cut: /sysroot/etc/shadow: No such file or directory
May 9 00:15:08.721550 initrd-setup-root[1178]: cut: /sysroot/etc/gshadow: No such file or directory
May 9 00:15:09.019821 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 9 00:15:09.023314 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 9 00:15:09.029451 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 9 00:15:09.039384 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 9 00:15:09.041435 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 06eb5ada-09bb-4b72-a741-1d4e677346cf
May 9 00:15:09.074145 ignition[1251]: INFO : Ignition 2.20.0
May 9 00:15:09.074145 ignition[1251]: INFO : Stage: mount
May 9 00:15:09.074145 ignition[1251]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 00:15:09.074145 ignition[1251]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 9 00:15:09.074145 ignition[1251]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 9 00:15:09.077876 ignition[1251]: INFO : PUT result: OK
May 9 00:15:09.074808 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 9 00:15:09.079593 ignition[1251]: INFO : mount: mount passed
May 9 00:15:09.080104 ignition[1251]: INFO : Ignition finished successfully
May 9 00:15:09.080962 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 9 00:15:09.086336 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 9 00:15:09.104548 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 00:15:09.123210 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/nvme0n1p6 scanned by mount (1263)
May 9 00:15:09.126939 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 06eb5ada-09bb-4b72-a741-1d4e677346cf
May 9 00:15:09.127002 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
May 9 00:15:09.127016 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 9 00:15:09.133187 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 9 00:15:09.135695 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 9 00:15:09.154154 ignition[1279]: INFO : Ignition 2.20.0
May 9 00:15:09.154154 ignition[1279]: INFO : Stage: files
May 9 00:15:09.155191 ignition[1279]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 00:15:09.155191 ignition[1279]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 9 00:15:09.155191 ignition[1279]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 9 00:15:09.156071 ignition[1279]: INFO : PUT result: OK
May 9 00:15:09.158081 ignition[1279]: DEBUG : files: compiled without relabeling support, skipping
May 9 00:15:09.159966 ignition[1279]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 9 00:15:09.159966 ignition[1279]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 9 00:15:09.182136 ignition[1279]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 9 00:15:09.182866 ignition[1279]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 9 00:15:09.182866 ignition[1279]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 9 00:15:09.182635 unknown[1279]: wrote ssh authorized keys file for user: core
May 9 00:15:09.184623 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 9 00:15:09.184623 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 9 00:15:09.293881 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 9 00:15:09.591135 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 9 00:15:09.591135 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 9 00:15:09.592895 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 9 00:15:09.723371 systemd-networkd[1083]: eth0: Gained IPv6LL
May 9 00:15:10.047001 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 9 00:15:10.196395 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 9 00:15:10.197567 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 9 00:15:10.197567 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 9 00:15:10.197567 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 9 00:15:10.197567 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 9 00:15:10.197567 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 00:15:10.197567 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 00:15:10.197567 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 00:15:10.197567 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 00:15:10.197567 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 9 00:15:10.197567 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 9 00:15:10.197567 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 9 00:15:10.197567 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 9 00:15:10.197567 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 9 00:15:10.197567 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
May 9 00:15:10.613549 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 9 00:15:11.158846 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 9 00:15:11.158846 ignition[1279]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 9 00:15:11.161750 ignition[1279]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 00:15:11.162612 ignition[1279]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 00:15:11.162612 ignition[1279]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 9 00:15:11.162612 ignition[1279]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
May 9 00:15:11.162612 ignition[1279]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
May 9 00:15:11.162612 ignition[1279]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
May 9 00:15:11.162612 ignition[1279]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 9 00:15:11.162612 ignition[1279]: INFO : files: files passed
May 9 00:15:11.162612 ignition[1279]: INFO : Ignition finished successfully
May 9 00:15:11.163621 systemd[1]: Finished ignition-files.service - Ignition (files).
May 9 00:15:11.175417 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 9 00:15:11.177867 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 9 00:15:11.180984 systemd[1]: ignition-quench.service: Deactivated successfully.
May 9 00:15:11.181093 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 9 00:15:11.195068 initrd-setup-root-after-ignition[1309]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 00:15:11.195068 initrd-setup-root-after-ignition[1309]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 9 00:15:11.199239 initrd-setup-root-after-ignition[1313]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 00:15:11.199484 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 9 00:15:11.201652 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 9 00:15:11.207421 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 9 00:15:11.241939 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 9 00:15:11.242074 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 9 00:15:11.243325 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 9 00:15:11.244609 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 9 00:15:11.245424 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 9 00:15:11.247195 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 9 00:15:11.273673 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 9 00:15:11.280486 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 9 00:15:11.291904 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 9 00:15:11.292805 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 00:15:11.293827 systemd[1]: Stopped target timers.target - Timer Units.
May 9 00:15:11.294686 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 9 00:15:11.294907 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 9 00:15:11.296075 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 9 00:15:11.297075 systemd[1]: Stopped target basic.target - Basic System.
May 9 00:15:11.297889 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 9 00:15:11.298639 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 9 00:15:11.299399 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 9 00:15:11.300183 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 9 00:15:11.301064 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 9 00:15:11.301861 systemd[1]: Stopped target sysinit.target - System Initialization.
May 9 00:15:11.302998 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 9 00:15:11.303760 systemd[1]: Stopped target swap.target - Swaps.
May 9 00:15:11.304581 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 9 00:15:11.304765 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 9 00:15:11.305844 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 9 00:15:11.306648 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 00:15:11.307339 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 9 00:15:11.307488 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 00:15:11.308100 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 9 00:15:11.308398 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 9 00:15:11.309794 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 9 00:15:11.309981 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 9 00:15:11.310715 systemd[1]: ignition-files.service: Deactivated successfully.
May 9 00:15:11.310867 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 9 00:15:11.317514 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 9 00:15:11.318214 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 9 00:15:11.319074 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 00:15:11.322605 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 9 00:15:11.323949 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 9 00:15:11.324797 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 00:15:11.326619 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 9 00:15:11.327339 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 9 00:15:11.334971 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 9 00:15:11.336274 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 9 00:15:11.343182 ignition[1333]: INFO : Ignition 2.20.0
May 9 00:15:11.343182 ignition[1333]: INFO : Stage: umount
May 9 00:15:11.343182 ignition[1333]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 00:15:11.343182 ignition[1333]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 9 00:15:11.343182 ignition[1333]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 9 00:15:11.346700 ignition[1333]: INFO : PUT result: OK
May 9 00:15:11.350679 ignition[1333]: INFO : umount: umount passed
May 9 00:15:11.351366 ignition[1333]: INFO : Ignition finished successfully
May 9 00:15:11.353119 systemd[1]: ignition-mount.service: Deactivated successfully.
May 9 00:15:11.353921 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 9 00:15:11.355310 systemd[1]: ignition-disks.service: Deactivated successfully.
May 9 00:15:11.355372 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 9 00:15:11.355929 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 9 00:15:11.355990 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 9 00:15:11.358640 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 9 00:15:11.358708 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 9 00:15:11.359234 systemd[1]: Stopped target network.target - Network.
May 9 00:15:11.359669 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 9 00:15:11.359731 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 9 00:15:11.360236 systemd[1]: Stopped target paths.target - Path Units.
May 9 00:15:11.361733 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 9 00:15:11.365236 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 00:15:11.365638 systemd[1]: Stopped target slices.target - Slice Units.
May 9 00:15:11.366543 systemd[1]: Stopped target sockets.target - Socket Units.
May 9 00:15:11.367753 systemd[1]: iscsid.socket: Deactivated successfully.
May 9 00:15:11.367812 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 9 00:15:11.369714 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 9 00:15:11.369784 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 00:15:11.370283 systemd[1]: ignition-setup.service: Deactivated successfully.
May 9 00:15:11.370354 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 9 00:15:11.370835 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 9 00:15:11.370896 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 9 00:15:11.371493 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 9 00:15:11.372086 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 9 00:15:11.374284 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 9 00:15:11.375059 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 9 00:15:11.375209 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 9 00:15:11.376939 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 9 00:15:11.377010 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 9 00:15:11.377234 systemd-networkd[1083]: eth0: DHCPv6 lease lost
May 9 00:15:11.378655 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 9 00:15:11.378803 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 9 00:15:11.381535 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 9 00:15:11.381689 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 9 00:15:11.385097 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 9 00:15:11.385195 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 9 00:15:11.390323 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 9 00:15:11.390914 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 9 00:15:11.391003 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 9 00:15:11.393397 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 9 00:15:11.393473 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 9 00:15:11.394468 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 9 00:15:11.394533 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 9 00:15:11.395141 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 9 00:15:11.395215 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 00:15:11.395926 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 00:15:11.409779 systemd[1]: network-cleanup.service: Deactivated successfully.
May 9 00:15:11.409927 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 9 00:15:11.411539 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 9 00:15:11.411735 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 00:15:11.413690 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 9 00:15:11.413773 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 9 00:15:11.414710 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 9 00:15:11.414761 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 00:15:11.415449 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 9 00:15:11.415516 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 9 00:15:11.416713 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 9 00:15:11.416778 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 9 00:15:11.417867 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 9 00:15:11.417932 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 00:15:11.424553 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 9 00:15:11.425241 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 9 00:15:11.425330 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 00:15:11.425983 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 00:15:11.426045 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:15:11.433413 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 9 00:15:11.433569 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 9 00:15:11.434668 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 9 00:15:11.442462 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 9 00:15:11.451543 systemd[1]: Switching root.
May 9 00:15:11.481874 systemd-journald[179]: Journal stopped
May 9 00:15:13.232052 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
May 9 00:15:13.237118 kernel: SELinux: policy capability network_peer_controls=1
May 9 00:15:13.238212 kernel: SELinux: policy capability open_perms=1
May 9 00:15:13.238264 kernel: SELinux: policy capability extended_socket_class=1
May 9 00:15:13.238289 kernel: SELinux: policy capability always_check_network=0
May 9 00:15:13.238311 kernel: SELinux: policy capability cgroup_seclabel=1
May 9 00:15:13.238334 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 9 00:15:13.238354 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 9 00:15:13.238375 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 9 00:15:13.238398 kernel: audit: type=1403 audit(1746749711.903:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 9 00:15:13.238431 systemd[1]: Successfully loaded SELinux policy in 54.594ms.
May 9 00:15:13.238479 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.466ms.
May 9 00:15:13.238504 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 9 00:15:13.238530 systemd[1]: Detected virtualization amazon.
May 9 00:15:13.238554 systemd[1]: Detected architecture x86-64.
May 9 00:15:13.238579 systemd[1]: Detected first boot.
May 9 00:15:13.238603 systemd[1]: Initializing machine ID from VM UUID.
May 9 00:15:13.238628 zram_generator::config[1375]: No configuration found.
May 9 00:15:13.238659 systemd[1]: Populated /etc with preset unit settings.
May 9 00:15:13.238683 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 9 00:15:13.238707 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 9 00:15:13.238733 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 9 00:15:13.238759 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 9 00:15:13.238784 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 9 00:15:13.238809 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 9 00:15:13.238832 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 9 00:15:13.238858 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 9 00:15:13.238887 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 9 00:15:13.238919 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 9 00:15:13.238944 systemd[1]: Created slice user.slice - User and Session Slice.
May 9 00:15:13.238969 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 00:15:13.238994 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 00:15:13.239018 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 9 00:15:13.239042 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 9 00:15:13.239075 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 9 00:15:13.239104 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 00:15:13.239127 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 9 00:15:13.239151 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 00:15:13.241284 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 9 00:15:13.241325 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 9 00:15:13.241352 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 9 00:15:13.241377 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 9 00:15:13.241403 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 00:15:13.241437 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 9 00:15:13.241461 systemd[1]: Reached target slices.target - Slice Units.
May 9 00:15:13.241486 systemd[1]: Reached target swap.target - Swaps.
May 9 00:15:13.241512 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 9 00:15:13.241536 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 9 00:15:13.241563 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 00:15:13.241587 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 00:15:13.241612 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 00:15:13.241636 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 9 00:15:13.241660 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 9 00:15:13.241690 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 9 00:15:13.241713 systemd[1]: Mounting media.mount - External Media Directory...
May 9 00:15:13.241738 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:15:13.241763 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 9 00:15:13.241789 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 9 00:15:13.241813 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 9 00:15:13.241840 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 9 00:15:13.241865 systemd[1]: Reached target machines.target - Containers.
May 9 00:15:13.241893 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 9 00:15:13.241918 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 00:15:13.241942 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 00:15:13.241967 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 9 00:15:13.241991 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 00:15:13.242024 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 9 00:15:13.242048 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 00:15:13.242073 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 9 00:15:13.242101 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 00:15:13.242125 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 9 00:15:13.242149 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 9 00:15:13.246270 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 9 00:15:13.246320 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 9 00:15:13.246345 systemd[1]: Stopped systemd-fsck-usr.service.
May 9 00:15:13.246371 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 00:15:13.246396 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 00:15:13.246423 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 9 00:15:13.246459 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 9 00:15:13.246484 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 9 00:15:13.246509 systemd[1]: verity-setup.service: Deactivated successfully.
May 9 00:15:13.246535 systemd[1]: Stopped verity-setup.service.
May 9 00:15:13.246563 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:15:13.246588 kernel: loop: module loaded
May 9 00:15:13.246615 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 9 00:15:13.246642 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 9 00:15:13.246664 systemd[1]: Mounted media.mount - External Media Directory.
May 9 00:15:13.246695 kernel: fuse: init (API version 7.39)
May 9 00:15:13.246719 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 9 00:15:13.246746 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 9 00:15:13.246771 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 9 00:15:13.246796 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 00:15:13.246823 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 9 00:15:13.246849 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 9 00:15:13.246874 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 00:15:13.246899 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 00:15:13.246925 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 00:15:13.246951 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 00:15:13.246976 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 9 00:15:13.247005 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 9 00:15:13.247029 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 00:15:13.247055 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 00:15:13.247082 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 9 00:15:13.247105 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 9 00:15:13.254253 systemd-journald[1460]: Collecting audit messages is disabled.
May 9 00:15:13.254355 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 00:15:13.254386 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 9 00:15:13.254411 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 9 00:15:13.254436 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 9 00:15:13.254460 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 9 00:15:13.254486 systemd-journald[1460]: Journal started
May 9 00:15:13.254537 systemd-journald[1460]: Runtime Journal (/run/log/journal/ec2793b07ada48082315f78d1504225f) is 4.7M, max 38.2M, 33.4M free.
May 9 00:15:12.851914 systemd[1]: Queued start job for default target multi-user.target.
May 9 00:15:12.913663 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
May 9 00:15:12.914069 systemd[1]: systemd-journald.service: Deactivated successfully.
May 9 00:15:13.265448 systemd[1]: Reached target local-fs.target - Local File Systems.
May 9 00:15:13.265516 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 9 00:15:13.286836 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 9 00:15:13.288402 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 9 00:15:13.315384 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 00:15:13.320205 kernel: ACPI: bus type drm_connector registered
May 9 00:15:13.327803 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 9 00:15:13.327883 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 9 00:15:13.333365 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 9 00:15:13.333436 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 9 00:15:13.346199 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 00:15:13.356220 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 9 00:15:13.372959 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 00:15:13.367199 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 9 00:15:13.370696 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 9 00:15:13.370879 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 9 00:15:13.371979 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 00:15:13.373629 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 9 00:15:13.376528 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 9 00:15:13.377504 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 9 00:15:13.395909 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 9 00:15:13.405463 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 9 00:15:13.414501 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 9 00:15:13.417439 kernel: loop0: detected capacity change from 0 to 138184
May 9 00:15:13.423345 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 9 00:15:13.432388 systemd-journald[1460]: Time spent on flushing to /var/log/journal/ec2793b07ada48082315f78d1504225f is 99.654ms for 1001 entries.
May 9 00:15:13.432388 systemd-journald[1460]: System Journal (/var/log/journal/ec2793b07ada48082315f78d1504225f) is 8.0M, max 195.6M, 187.6M free.
May 9 00:15:13.555997 systemd-journald[1460]: Received client request to flush runtime journal.
May 9 00:15:13.556097 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 9 00:15:13.433889 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 9 00:15:13.436351 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 9 00:15:13.438239 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 00:15:13.496264 udevadm[1515]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 9 00:15:13.560697 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 9 00:15:13.564574 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 9 00:15:13.567098 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 9 00:15:13.568286 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 9 00:15:13.571844 kernel: loop1: detected capacity change from 0 to 140992
May 9 00:15:13.580387 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 00:15:13.617497 systemd-tmpfiles[1524]: ACLs are not supported, ignoring.
May 9 00:15:13.617852 systemd-tmpfiles[1524]: ACLs are not supported, ignoring.
May 9 00:15:13.623579 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 00:15:13.667198 kernel: loop2: detected capacity change from 0 to 218376
May 9 00:15:13.798204 kernel: loop3: detected capacity change from 0 to 62848
May 9 00:15:13.905240 kernel: loop4: detected capacity change from 0 to 138184
May 9 00:15:13.949949 kernel: loop5: detected capacity change from 0 to 140992
May 9 00:15:13.982234 kernel: loop6: detected capacity change from 0 to 218376
May 9 00:15:14.021496 kernel: loop7: detected capacity change from 0 to 62848
May 9 00:15:14.030199 (sd-merge)[1530]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
May 9 00:15:14.030885 (sd-merge)[1530]: Merged extensions into '/usr'.
May 9 00:15:14.040188 systemd[1]: Reloading requested from client PID 1486 ('systemd-sysext') (unit systemd-sysext.service)...
May 9 00:15:14.040373 systemd[1]: Reloading...
May 9 00:15:14.152331 zram_generator::config[1552]: No configuration found.
May 9 00:15:14.371431 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 00:15:14.471993 systemd[1]: Reloading finished in 430 ms.
May 9 00:15:14.512742 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 9 00:15:14.513682 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 9 00:15:14.522375 systemd[1]: Starting ensure-sysext.service...
May 9 00:15:14.524799 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 00:15:14.528366 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 00:15:14.550360 systemd[1]: Reloading requested from client PID 1608 ('systemctl') (unit ensure-sysext.service)...
May 9 00:15:14.550521 systemd[1]: Reloading...
May 9 00:15:14.579791 systemd-tmpfiles[1609]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 9 00:15:14.582744 systemd-tmpfiles[1609]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 9 00:15:14.586938 systemd-tmpfiles[1609]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 9 00:15:14.588739 systemd-tmpfiles[1609]: ACLs are not supported, ignoring.
May 9 00:15:14.588837 systemd-tmpfiles[1609]: ACLs are not supported, ignoring.
May 9 00:15:14.593811 systemd-udevd[1610]: Using default interface naming scheme 'v255'.
May 9 00:15:14.608863 systemd-tmpfiles[1609]: Detected autofs mount point /boot during canonicalization of boot.
May 9 00:15:14.608884 systemd-tmpfiles[1609]: Skipping /boot
May 9 00:15:14.632930 systemd-tmpfiles[1609]: Detected autofs mount point /boot during canonicalization of boot.
May 9 00:15:14.632946 systemd-tmpfiles[1609]: Skipping /boot May 9 00:15:14.720212 zram_generator::config[1642]: No configuration found. May 9 00:15:14.773784 (udev-worker)[1650]: Network interface NamePolicy= disabled on kernel command line. May 9 00:15:14.947205 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 9 00:15:14.961262 kernel: ACPI: button: Power Button [PWRF] May 9 00:15:14.964211 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 May 9 00:15:14.966190 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 May 9 00:15:14.966280 kernel: ACPI: button: Sleep Button [SLPF] May 9 00:15:14.981196 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr May 9 00:15:15.044187 kernel: mousedev: PS/2 mouse device common for all mice May 9 00:15:15.063589 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:15:15.074816 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1646) May 9 00:15:15.074921 ldconfig[1482]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 9 00:15:15.224722 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 9 00:15:15.224810 systemd[1]: Reloading finished in 673 ms. May 9 00:15:15.242127 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 9 00:15:15.242920 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 9 00:15:15.247737 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 9 00:15:15.270808 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
May 9 00:15:15.280087 systemd[1]: Finished ensure-sysext.service. May 9 00:15:15.300110 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. May 9 00:15:15.300859 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 9 00:15:15.305369 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 9 00:15:15.311396 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 9 00:15:15.314227 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 00:15:15.316901 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 9 00:15:15.320003 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 00:15:15.329425 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 9 00:15:15.338437 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 00:15:15.342415 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 00:15:15.343780 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 00:15:15.346386 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 9 00:15:15.362408 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 9 00:15:15.374686 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 9 00:15:15.375645 lvm[1805]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 9 00:15:15.384469 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 9 00:15:15.385554 systemd[1]: Reached target time-set.target - System Time Set. 
May 9 00:15:15.397472 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 9 00:15:15.417321 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 9 00:15:15.417989 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 9 00:15:15.421023 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 00:15:15.422420 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 00:15:15.423676 systemd[1]: modprobe@drm.service: Deactivated successfully. May 9 00:15:15.425250 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 9 00:15:15.426683 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 00:15:15.427279 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 00:15:15.429464 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 00:15:15.429659 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 00:15:15.431038 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 9 00:15:15.436229 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 9 00:15:15.456825 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 9 00:15:15.467622 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 9 00:15:15.468783 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 9 00:15:15.469007 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 9 00:15:15.479724 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
May 9 00:15:15.480213 augenrules[1844]: No rules May 9 00:15:15.483560 systemd[1]: audit-rules.service: Deactivated successfully. May 9 00:15:15.483819 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 9 00:15:15.494535 lvm[1840]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 9 00:15:15.499313 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 9 00:15:15.519407 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 9 00:15:15.529248 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 9 00:15:15.530663 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 9 00:15:15.560327 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 9 00:15:15.561100 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 9 00:15:15.573232 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 9 00:15:15.575259 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 9 00:15:15.635704 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:15:15.669043 systemd-networkd[1817]: lo: Link UP May 9 00:15:15.669061 systemd-networkd[1817]: lo: Gained carrier May 9 00:15:15.670977 systemd-networkd[1817]: Enumeration completed May 9 00:15:15.671497 systemd-networkd[1817]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 00:15:15.671510 systemd-networkd[1817]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 9 00:15:15.672608 systemd[1]: Started systemd-networkd.service - Network Configuration. 
May 9 00:15:15.674468 systemd-resolved[1819]: Positive Trust Anchors: May 9 00:15:15.674801 systemd-resolved[1819]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 9 00:15:15.674857 systemd-resolved[1819]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 9 00:15:15.678375 systemd-networkd[1817]: eth0: Link UP May 9 00:15:15.678611 systemd-networkd[1817]: eth0: Gained carrier May 9 00:15:15.678640 systemd-networkd[1817]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 00:15:15.680905 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 9 00:15:15.694307 systemd-resolved[1819]: Defaulting to hostname 'linux'. May 9 00:15:15.696113 systemd-networkd[1817]: eth0: DHCPv4 address 172.31.22.98/20, gateway 172.31.16.1 acquired from 172.31.16.1 May 9 00:15:15.696489 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 9 00:15:15.696996 systemd[1]: Reached target network.target - Network. May 9 00:15:15.697862 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 9 00:15:15.698276 systemd[1]: Reached target sysinit.target - System Initialization. May 9 00:15:15.698745 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
May 9 00:15:15.699119 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 9 00:15:15.699641 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 9 00:15:15.700049 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 9 00:15:15.700416 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 9 00:15:15.700739 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 9 00:15:15.700767 systemd[1]: Reached target paths.target - Path Units. May 9 00:15:15.701061 systemd[1]: Reached target timers.target - Timer Units. May 9 00:15:15.703614 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 9 00:15:15.705567 systemd[1]: Starting docker.socket - Docker Socket for the API... May 9 00:15:15.712391 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 9 00:15:15.713549 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 9 00:15:15.714085 systemd[1]: Reached target sockets.target - Socket Units. May 9 00:15:15.714518 systemd[1]: Reached target basic.target - Basic System. May 9 00:15:15.714933 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 9 00:15:15.714975 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 9 00:15:15.716107 systemd[1]: Starting containerd.service - containerd container runtime... May 9 00:15:15.720373 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 9 00:15:15.724720 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 9 00:15:15.734329 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
May 9 00:15:15.737367 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 9 00:15:15.740266 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 9 00:15:15.743414 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 9 00:15:15.748272 jq[1873]: false May 9 00:15:15.752526 systemd[1]: Started ntpd.service - Network Time Service. May 9 00:15:15.762228 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 9 00:15:15.768873 systemd[1]: Starting setup-oem.service - Setup OEM... May 9 00:15:15.778828 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 9 00:15:15.783398 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 9 00:15:15.816369 systemd[1]: Starting systemd-logind.service - User Login Management... May 9 00:15:15.817536 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 9 00:15:15.818211 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 9 00:15:15.822449 systemd[1]: Starting update-engine.service - Update Engine... May 9 00:15:15.826309 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 9 00:15:15.830645 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 9 00:15:15.830874 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 9 00:15:15.835861 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 9 00:15:15.836091 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
May 9 00:15:15.869293 extend-filesystems[1874]: Found loop4 May 9 00:15:15.870877 extend-filesystems[1874]: Found loop5 May 9 00:15:15.872251 extend-filesystems[1874]: Found loop6 May 9 00:15:15.872251 extend-filesystems[1874]: Found loop7 May 9 00:15:15.872251 extend-filesystems[1874]: Found nvme0n1 May 9 00:15:15.872251 extend-filesystems[1874]: Found nvme0n1p1 May 9 00:15:15.872251 extend-filesystems[1874]: Found nvme0n1p2 May 9 00:15:15.872251 extend-filesystems[1874]: Found nvme0n1p3 May 9 00:15:15.872251 extend-filesystems[1874]: Found usr May 9 00:15:15.872251 extend-filesystems[1874]: Found nvme0n1p4 May 9 00:15:15.872251 extend-filesystems[1874]: Found nvme0n1p6 May 9 00:15:15.872251 extend-filesystems[1874]: Found nvme0n1p7 May 9 00:15:15.872251 extend-filesystems[1874]: Found nvme0n1p9 May 9 00:15:15.872251 extend-filesystems[1874]: Checking size of /dev/nvme0n1p9 May 9 00:15:15.887029 systemd[1]: motdgen.service: Deactivated successfully. May 9 00:15:15.887696 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 9 00:15:15.897028 (ntainerd)[1903]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 9 00:15:15.904122 dbus-daemon[1872]: [system] SELinux support is enabled May 9 00:15:15.906445 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 9 00:15:15.914270 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 9 00:15:15.914327 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 9 00:15:15.914857 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
May 9 00:15:15.914881 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 9 00:15:15.925885 jq[1889]: true May 9 00:15:15.952580 update_engine[1887]: I20250509 00:15:15.949134 1887 main.cc:92] Flatcar Update Engine starting May 9 00:15:15.952486 dbus-daemon[1872]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1817 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 9 00:15:15.961530 extend-filesystems[1874]: Resized partition /dev/nvme0n1p9 May 9 00:15:15.973022 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... May 9 00:15:15.968281 ntpd[1876]: ntpd 4.2.8p17@1.4004-o Thu May 8 21:41:51 UTC 2025 (1): Starting May 9 00:15:15.974768 ntpd[1876]: 9 May 00:15:15 ntpd[1876]: ntpd 4.2.8p17@1.4004-o Thu May 8 21:41:51 UTC 2025 (1): Starting May 9 00:15:15.974768 ntpd[1876]: 9 May 00:15:15 ntpd[1876]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp May 9 00:15:15.974768 ntpd[1876]: 9 May 00:15:15 ntpd[1876]: ---------------------------------------------------- May 9 00:15:15.974768 ntpd[1876]: 9 May 00:15:15 ntpd[1876]: ntp-4 is maintained by Network Time Foundation, May 9 00:15:15.974768 ntpd[1876]: 9 May 00:15:15 ntpd[1876]: Inc. (NTF), a non-profit 501(c)(3) public-benefit May 9 00:15:15.974768 ntpd[1876]: 9 May 00:15:15 ntpd[1876]: corporation. 
Support and training for ntp-4 are May 9 00:15:15.974768 ntpd[1876]: 9 May 00:15:15 ntpd[1876]: available at https://www.nwtime.org/support May 9 00:15:15.974768 ntpd[1876]: 9 May 00:15:15 ntpd[1876]: ---------------------------------------------------- May 9 00:15:15.974768 ntpd[1876]: 9 May 00:15:15 ntpd[1876]: proto: precision = 0.092 usec (-23) May 9 00:15:15.974768 ntpd[1876]: 9 May 00:15:15 ntpd[1876]: basedate set to 2025-04-26 May 9 00:15:15.974768 ntpd[1876]: 9 May 00:15:15 ntpd[1876]: gps base set to 2025-04-27 (week 2364) May 9 00:15:15.968307 ntpd[1876]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp May 9 00:15:15.998470 ntpd[1876]: 9 May 00:15:15 ntpd[1876]: Listen and drop on 0 v6wildcard [::]:123 May 9 00:15:15.998470 ntpd[1876]: 9 May 00:15:15 ntpd[1876]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 9 00:15:15.998470 ntpd[1876]: 9 May 00:15:15 ntpd[1876]: Listen normally on 2 lo 127.0.0.1:123 May 9 00:15:15.998470 ntpd[1876]: 9 May 00:15:15 ntpd[1876]: Listen normally on 3 eth0 172.31.22.98:123 May 9 00:15:15.998470 ntpd[1876]: 9 May 00:15:15 ntpd[1876]: Listen normally on 4 lo [::1]:123 May 9 00:15:15.998470 ntpd[1876]: 9 May 00:15:15 ntpd[1876]: bind(21) AF_INET6 fe80::49a:4eff:fe4d:b11%2#123 flags 0x11 failed: Cannot assign requested address May 9 00:15:15.998470 ntpd[1876]: 9 May 00:15:15 ntpd[1876]: unable to create socket on eth0 (5) for fe80::49a:4eff:fe4d:b11%2#123 May 9 00:15:15.998470 ntpd[1876]: 9 May 00:15:15 ntpd[1876]: failed to init interface for address fe80::49a:4eff:fe4d:b11%2 May 9 00:15:15.998470 ntpd[1876]: 9 May 00:15:15 ntpd[1876]: Listening on routing socket on fd #21 for interface updates May 9 00:15:15.998880 update_engine[1887]: I20250509 00:15:15.987478 1887 update_check_scheduler.cc:74] Next update check in 10m14s May 9 00:15:15.998928 extend-filesystems[1921]: resize2fs 1.47.1 (20-May-2024) May 9 00:15:15.984631 systemd[1]: Started update-engine.service - Update Engine. 
May 9 00:15:16.013342 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks May 9 00:15:15.968319 ntpd[1876]: ---------------------------------------------------- May 9 00:15:16.013470 ntpd[1876]: 9 May 00:15:15 ntpd[1876]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 9 00:15:16.013470 ntpd[1876]: 9 May 00:15:15 ntpd[1876]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 9 00:15:15.996540 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 9 00:15:16.013675 tar[1895]: linux-amd64/LICENSE May 9 00:15:16.013675 tar[1895]: linux-amd64/helm May 9 00:15:15.968331 ntpd[1876]: ntp-4 is maintained by Network Time Foundation, May 9 00:15:15.997495 systemd[1]: Finished setup-oem.service - Setup OEM. May 9 00:15:15.968340 ntpd[1876]: Inc. (NTF), a non-profit 501(c)(3) public-benefit May 9 00:15:15.968350 ntpd[1876]: corporation. Support and training for ntp-4 are May 9 00:15:15.968360 ntpd[1876]: available at https://www.nwtime.org/support May 9 00:15:15.968370 ntpd[1876]: ---------------------------------------------------- May 9 00:15:15.972603 ntpd[1876]: proto: precision = 0.092 usec (-23) May 9 00:15:15.972927 ntpd[1876]: basedate set to 2025-04-26 May 9 00:15:15.972943 ntpd[1876]: gps base set to 2025-04-27 (week 2364) May 9 00:15:15.977681 ntpd[1876]: Listen and drop on 0 v6wildcard [::]:123 May 9 00:15:15.977733 ntpd[1876]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 9 00:15:15.978458 ntpd[1876]: Listen normally on 2 lo 127.0.0.1:123 May 9 00:15:15.978502 ntpd[1876]: Listen normally on 3 eth0 172.31.22.98:123 May 9 00:15:15.978549 ntpd[1876]: Listen normally on 4 lo [::1]:123 May 9 00:15:15.978598 ntpd[1876]: bind(21) AF_INET6 fe80::49a:4eff:fe4d:b11%2#123 flags 0x11 failed: Cannot assign requested address May 9 00:15:15.978621 ntpd[1876]: unable to create socket on eth0 (5) for fe80::49a:4eff:fe4d:b11%2#123 May 9 00:15:15.978638 ntpd[1876]: failed to init interface for address 
fe80::49a:4eff:fe4d:b11%2 May 9 00:15:15.978672 ntpd[1876]: Listening on routing socket on fd #21 for interface updates May 9 00:15:15.998488 ntpd[1876]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 9 00:15:15.998522 ntpd[1876]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 9 00:15:16.044859 jq[1913]: true May 9 00:15:16.110189 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1653) May 9 00:15:16.158833 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 May 9 00:15:16.175621 extend-filesystems[1921]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required May 9 00:15:16.175621 extend-filesystems[1921]: old_desc_blocks = 1, new_desc_blocks = 1 May 9 00:15:16.175621 extend-filesystems[1921]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. May 9 00:15:16.185752 extend-filesystems[1874]: Resized filesystem in /dev/nvme0n1p9 May 9 00:15:16.180336 systemd[1]: extend-filesystems.service: Deactivated successfully. May 9 00:15:16.180574 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 9 00:15:16.337467 coreos-metadata[1871]: May 09 00:15:16.330 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 9 00:15:16.331553 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 9 00:15:16.344621 bash[1977]: Updated "/home/core/.ssh/authorized_keys" May 9 00:15:16.340513 systemd[1]: Starting sshkeys.service... 
May 9 00:15:16.354542 systemd-logind[1885]: Watching system buttons on /dev/input/event1 (Power Button) May 9 00:15:16.355991 coreos-metadata[1871]: May 09 00:15:16.354 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 May 9 00:15:16.354575 systemd-logind[1885]: Watching system buttons on /dev/input/event3 (Sleep Button) May 9 00:15:16.354599 systemd-logind[1885]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 9 00:15:16.360478 systemd-logind[1885]: New seat seat0. May 9 00:15:16.366466 coreos-metadata[1871]: May 09 00:15:16.363 INFO Fetch successful May 9 00:15:16.366466 coreos-metadata[1871]: May 09 00:15:16.363 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 May 9 00:15:16.364583 systemd[1]: Started systemd-logind.service - User Login Management. May 9 00:15:16.370266 coreos-metadata[1871]: May 09 00:15:16.370 INFO Fetch successful May 9 00:15:16.371209 coreos-metadata[1871]: May 09 00:15:16.370 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 May 9 00:15:16.376583 dbus-daemon[1872]: [system] Successfully activated service 'org.freedesktop.hostname1' May 9 00:15:16.376898 coreos-metadata[1871]: May 09 00:15:16.376 INFO Fetch successful May 9 00:15:16.379791 coreos-metadata[1871]: May 09 00:15:16.377 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 May 9 00:15:16.379219 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
May 9 00:15:16.388214 coreos-metadata[1871]: May 09 00:15:16.387 INFO Fetch successful May 9 00:15:16.388214 coreos-metadata[1871]: May 09 00:15:16.387 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 May 9 00:15:16.387668 dbus-daemon[1872]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1920 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 9 00:15:16.394195 coreos-metadata[1871]: May 09 00:15:16.390 INFO Fetch failed with 404: resource not found May 9 00:15:16.394195 coreos-metadata[1871]: May 09 00:15:16.390 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 May 9 00:15:16.394195 coreos-metadata[1871]: May 09 00:15:16.391 INFO Fetch successful May 9 00:15:16.394195 coreos-metadata[1871]: May 09 00:15:16.392 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 May 9 00:15:16.397514 systemd[1]: Starting polkit.service - Authorization Manager... May 9 00:15:16.412202 coreos-metadata[1871]: May 09 00:15:16.411 INFO Fetch successful May 9 00:15:16.412202 coreos-metadata[1871]: May 09 00:15:16.411 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 May 9 00:15:16.417578 coreos-metadata[1871]: May 09 00:15:16.414 INFO Fetch successful May 9 00:15:16.417578 coreos-metadata[1871]: May 09 00:15:16.414 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 May 9 00:15:16.419026 coreos-metadata[1871]: May 09 00:15:16.418 INFO Fetch successful May 9 00:15:16.419026 coreos-metadata[1871]: May 09 00:15:16.418 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 May 9 00:15:16.422698 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
May 9 00:15:16.429957 coreos-metadata[1871]: May 09 00:15:16.426 INFO Fetch successful May 9 00:15:16.432715 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 9 00:15:16.475406 polkitd[2004]: Started polkitd version 121 May 9 00:15:16.510406 polkitd[2004]: Loading rules from directory /etc/polkit-1/rules.d May 9 00:15:16.510496 polkitd[2004]: Loading rules from directory /usr/share/polkit-1/rules.d May 9 00:15:16.522226 polkitd[2004]: Finished loading, compiling and executing 2 rules May 9 00:15:16.526643 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 9 00:15:16.527926 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 9 00:15:16.532150 dbus-daemon[1872]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 9 00:15:16.532396 systemd[1]: Started polkit.service - Authorization Manager. May 9 00:15:16.535242 polkitd[2004]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 9 00:15:16.565295 systemd-hostnamed[1920]: Hostname set to (transient) May 9 00:15:16.565410 systemd-resolved[1819]: System hostname changed to 'ip-172-31-22-98'. 
May 9 00:15:16.611495 coreos-metadata[2008]: May 09 00:15:16.610 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 9 00:15:16.613019 coreos-metadata[2008]: May 09 00:15:16.612 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 May 9 00:15:16.613848 coreos-metadata[2008]: May 09 00:15:16.613 INFO Fetch successful May 9 00:15:16.613848 coreos-metadata[2008]: May 09 00:15:16.613 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 May 9 00:15:16.615427 coreos-metadata[2008]: May 09 00:15:16.614 INFO Fetch successful May 9 00:15:16.616715 unknown[2008]: wrote ssh authorized keys file for user: core May 9 00:15:16.642324 locksmithd[1930]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 9 00:15:16.673387 update-ssh-keys[2055]: Updated "/home/core/.ssh/authorized_keys" May 9 00:15:16.671142 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 9 00:15:16.681776 systemd[1]: Finished sshkeys.service. 
May 9 00:15:16.922304 containerd[1903]: time="2025-05-09T00:15:16.919841549Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 9 00:15:16.968777 ntpd[1876]: bind(24) AF_INET6 fe80::49a:4eff:fe4d:b11%2#123 flags 0x11 failed: Cannot assign requested address May 9 00:15:16.968827 ntpd[1876]: unable to create socket on eth0 (6) for fe80::49a:4eff:fe4d:b11%2#123 May 9 00:15:16.969251 ntpd[1876]: 9 May 00:15:16 ntpd[1876]: bind(24) AF_INET6 fe80::49a:4eff:fe4d:b11%2#123 flags 0x11 failed: Cannot assign requested address May 9 00:15:16.969251 ntpd[1876]: 9 May 00:15:16 ntpd[1876]: unable to create socket on eth0 (6) for fe80::49a:4eff:fe4d:b11%2#123 May 9 00:15:16.969251 ntpd[1876]: 9 May 00:15:16 ntpd[1876]: failed to init interface for address fe80::49a:4eff:fe4d:b11%2 May 9 00:15:16.968842 ntpd[1876]: failed to init interface for address fe80::49a:4eff:fe4d:b11%2 May 9 00:15:17.024150 containerd[1903]: time="2025-05-09T00:15:17.024086751Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 9 00:15:17.030286 containerd[1903]: time="2025-05-09T00:15:17.030230909Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 9 00:15:17.030286 containerd[1903]: time="2025-05-09T00:15:17.030283556Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 9 00:15:17.030440 containerd[1903]: time="2025-05-09T00:15:17.030304993Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 9 00:15:17.030508 containerd[1903]: time="2025-05-09T00:15:17.030489235Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 May 9 00:15:17.030579 containerd[1903]: time="2025-05-09T00:15:17.030519094Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 9 00:15:17.030618 containerd[1903]: time="2025-05-09T00:15:17.030597016Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 9 00:15:17.030655 containerd[1903]: time="2025-05-09T00:15:17.030615905Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 9 00:15:17.030897 containerd[1903]: time="2025-05-09T00:15:17.030829686Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 9 00:15:17.030897 containerd[1903]: time="2025-05-09T00:15:17.030854877Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 9 00:15:17.030897 containerd[1903]: time="2025-05-09T00:15:17.030876728Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 9 00:15:17.030897 containerd[1903]: time="2025-05-09T00:15:17.030892566Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 9 00:15:17.031379 containerd[1903]: time="2025-05-09T00:15:17.031004850Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 9 00:15:17.031379 containerd[1903]: time="2025-05-09T00:15:17.031286278Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 May 9 00:15:17.031858 containerd[1903]: time="2025-05-09T00:15:17.031447241Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 9 00:15:17.031858 containerd[1903]: time="2025-05-09T00:15:17.031468476Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 9 00:15:17.031858 containerd[1903]: time="2025-05-09T00:15:17.031565542Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 9 00:15:17.031858 containerd[1903]: time="2025-05-09T00:15:17.031623516Z" level=info msg="metadata content store policy set" policy=shared May 9 00:15:17.042927 containerd[1903]: time="2025-05-09T00:15:17.042750883Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 9 00:15:17.042927 containerd[1903]: time="2025-05-09T00:15:17.042827860Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 9 00:15:17.042927 containerd[1903]: time="2025-05-09T00:15:17.042852134Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 9 00:15:17.042927 containerd[1903]: time="2025-05-09T00:15:17.042875601Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 9 00:15:17.042927 containerd[1903]: time="2025-05-09T00:15:17.042895677Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 9 00:15:17.043200 containerd[1903]: time="2025-05-09T00:15:17.043083624Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 May 9 00:15:17.045135 containerd[1903]: time="2025-05-09T00:15:17.043498732Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 9 00:15:17.045241 containerd[1903]: time="2025-05-09T00:15:17.045211003Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 9 00:15:17.045300 containerd[1903]: time="2025-05-09T00:15:17.045242230Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 9 00:15:17.045300 containerd[1903]: time="2025-05-09T00:15:17.045283283Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 9 00:15:17.045374 containerd[1903]: time="2025-05-09T00:15:17.045309563Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 9 00:15:17.045374 containerd[1903]: time="2025-05-09T00:15:17.045346105Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 9 00:15:17.045374 containerd[1903]: time="2025-05-09T00:15:17.045366056Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 9 00:15:17.045476 containerd[1903]: time="2025-05-09T00:15:17.045387088Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 9 00:15:17.045476 containerd[1903]: time="2025-05-09T00:15:17.045424359Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 9 00:15:17.045476 containerd[1903]: time="2025-05-09T00:15:17.045444533Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 May 9 00:15:17.045476 containerd[1903]: time="2025-05-09T00:15:17.045463824Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 9 00:15:17.045619 containerd[1903]: time="2025-05-09T00:15:17.045498000Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 9 00:15:17.045619 containerd[1903]: time="2025-05-09T00:15:17.045538246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 9 00:15:17.045619 containerd[1903]: time="2025-05-09T00:15:17.045573945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 9 00:15:17.045619 containerd[1903]: time="2025-05-09T00:15:17.045595274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 9 00:15:17.045619 containerd[1903]: time="2025-05-09T00:15:17.045615346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 9 00:15:17.045806 containerd[1903]: time="2025-05-09T00:15:17.045634883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 9 00:15:17.045806 containerd[1903]: time="2025-05-09T00:15:17.045671506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 9 00:15:17.045806 containerd[1903]: time="2025-05-09T00:15:17.045689188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 9 00:15:17.045806 containerd[1903]: time="2025-05-09T00:15:17.045709944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 9 00:15:17.045806 containerd[1903]: time="2025-05-09T00:15:17.045746230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 May 9 00:15:17.045806 containerd[1903]: time="2025-05-09T00:15:17.045769905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 9 00:15:17.046012 containerd[1903]: time="2025-05-09T00:15:17.045788254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 9 00:15:17.046012 containerd[1903]: time="2025-05-09T00:15:17.045914204Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 9 00:15:17.046012 containerd[1903]: time="2025-05-09T00:15:17.045936535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 9 00:15:17.046012 containerd[1903]: time="2025-05-09T00:15:17.045981127Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 9 00:15:17.046156 containerd[1903]: time="2025-05-09T00:15:17.046019123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 9 00:15:17.046156 containerd[1903]: time="2025-05-09T00:15:17.046055355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 9 00:15:17.046156 containerd[1903]: time="2025-05-09T00:15:17.046073548Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 9 00:15:17.046475 containerd[1903]: time="2025-05-09T00:15:17.046446021Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 9 00:15:17.048037 containerd[1903]: time="2025-05-09T00:15:17.048009492Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 9 00:15:17.052103 containerd[1903]: time="2025-05-09T00:15:17.050189619Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 9 00:15:17.052103 containerd[1903]: time="2025-05-09T00:15:17.050228682Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 9 00:15:17.052103 containerd[1903]: time="2025-05-09T00:15:17.050244016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 9 00:15:17.052103 containerd[1903]: time="2025-05-09T00:15:17.050268225Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 9 00:15:17.052103 containerd[1903]: time="2025-05-09T00:15:17.050282717Z" level=info msg="NRI interface is disabled by configuration." May 9 00:15:17.052103 containerd[1903]: time="2025-05-09T00:15:17.050299124Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 9 00:15:17.052405 containerd[1903]: time="2025-05-09T00:15:17.050797631Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 9 00:15:17.052405 containerd[1903]: time="2025-05-09T00:15:17.050868832Z" level=info msg="Connect containerd service" May 9 00:15:17.052405 containerd[1903]: time="2025-05-09T00:15:17.050921960Z" level=info msg="using legacy CRI server" May 9 00:15:17.052405 containerd[1903]: time="2025-05-09T00:15:17.050931718Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 9 00:15:17.052405 containerd[1903]: time="2025-05-09T00:15:17.051111875Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 9 00:15:17.052405 containerd[1903]: time="2025-05-09T00:15:17.051865072Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 9 00:15:17.052858 containerd[1903]: time="2025-05-09T00:15:17.052817214Z" level=info msg="Start subscribing containerd event" May 9 00:15:17.052939 containerd[1903]: time="2025-05-09T00:15:17.052926502Z" level=info msg="Start recovering state" May 9 00:15:17.053070 containerd[1903]: time="2025-05-09T00:15:17.053058643Z" level=info msg="Start event monitor" May 9 00:15:17.053139 containerd[1903]: time="2025-05-09T00:15:17.053128729Z" level=info msg="Start snapshots 
syncer" May 9 00:15:17.053219 containerd[1903]: time="2025-05-09T00:15:17.053206961Z" level=info msg="Start cni network conf syncer for default" May 9 00:15:17.053279 containerd[1903]: time="2025-05-09T00:15:17.053269287Z" level=info msg="Start streaming server" May 9 00:15:17.057203 containerd[1903]: time="2025-05-09T00:15:17.056228317Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 9 00:15:17.057203 containerd[1903]: time="2025-05-09T00:15:17.056387144Z" level=info msg=serving... address=/run/containerd/containerd.sock May 9 00:15:17.056585 systemd[1]: Started containerd.service - containerd container runtime. May 9 00:15:17.057886 containerd[1903]: time="2025-05-09T00:15:17.057863218Z" level=info msg="containerd successfully booted in 0.141230s" May 9 00:15:17.083347 systemd-networkd[1817]: eth0: Gained IPv6LL May 9 00:15:17.087763 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 9 00:15:17.089052 systemd[1]: Reached target network-online.target - Network is Online. May 9 00:15:17.102550 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. May 9 00:15:17.105854 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:15:17.116524 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 9 00:15:17.197538 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 9 00:15:17.208139 sshd_keygen[1905]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 9 00:15:17.247816 amazon-ssm-agent[2076]: Initializing new seelog logger May 9 00:15:17.248688 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 9 00:15:17.250320 amazon-ssm-agent[2076]: New Seelog Logger Creation Complete May 9 00:15:17.250320 amazon-ssm-agent[2076]: 2025/05/09 00:15:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
May 9 00:15:17.250320 amazon-ssm-agent[2076]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 9 00:15:17.250320 amazon-ssm-agent[2076]: 2025/05/09 00:15:17 processing appconfig overrides May 9 00:15:17.252360 amazon-ssm-agent[2076]: 2025/05/09 00:15:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 9 00:15:17.252459 amazon-ssm-agent[2076]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 9 00:15:17.253749 amazon-ssm-agent[2076]: 2025/05/09 00:15:17 processing appconfig overrides May 9 00:15:17.253749 amazon-ssm-agent[2076]: 2025/05/09 00:15:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 9 00:15:17.253749 amazon-ssm-agent[2076]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 9 00:15:17.253749 amazon-ssm-agent[2076]: 2025/05/09 00:15:17 processing appconfig overrides May 9 00:15:17.253749 amazon-ssm-agent[2076]: 2025-05-09 00:15:17 INFO Proxy environment variables: May 9 00:15:17.261604 amazon-ssm-agent[2076]: 2025/05/09 00:15:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 9 00:15:17.261604 amazon-ssm-agent[2076]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 9 00:15:17.261604 amazon-ssm-agent[2076]: 2025/05/09 00:15:17 processing appconfig overrides May 9 00:15:17.259487 systemd[1]: Starting issuegen.service - Generate /run/issue... May 9 00:15:17.293045 systemd[1]: issuegen.service: Deactivated successfully. May 9 00:15:17.293351 systemd[1]: Finished issuegen.service - Generate /run/issue. May 9 00:15:17.307838 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 9 00:15:17.330748 tar[1895]: linux-amd64/README.md May 9 00:15:17.341849 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 9 00:15:17.347691 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
May 9 00:15:17.353921 amazon-ssm-agent[2076]: 2025-05-09 00:15:17 INFO no_proxy: May 9 00:15:17.359618 systemd[1]: Started getty@tty1.service - Getty on tty1. May 9 00:15:17.363367 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 9 00:15:17.364284 systemd[1]: Reached target getty.target - Login Prompts. May 9 00:15:17.452884 amazon-ssm-agent[2076]: 2025-05-09 00:15:17 INFO https_proxy: May 9 00:15:17.551747 amazon-ssm-agent[2076]: 2025-05-09 00:15:17 INFO http_proxy: May 9 00:15:17.650923 amazon-ssm-agent[2076]: 2025-05-09 00:15:17 INFO Checking if agent identity type OnPrem can be assumed May 9 00:15:17.749337 amazon-ssm-agent[2076]: 2025-05-09 00:15:17 INFO Checking if agent identity type EC2 can be assumed May 9 00:15:17.779712 amazon-ssm-agent[2076]: 2025-05-09 00:15:17 INFO Agent will take identity from EC2 May 9 00:15:17.779712 amazon-ssm-agent[2076]: 2025-05-09 00:15:17 INFO [amazon-ssm-agent] using named pipe channel for IPC May 9 00:15:17.779712 amazon-ssm-agent[2076]: 2025-05-09 00:15:17 INFO [amazon-ssm-agent] using named pipe channel for IPC May 9 00:15:17.779712 amazon-ssm-agent[2076]: 2025-05-09 00:15:17 INFO [amazon-ssm-agent] using named pipe channel for IPC May 9 00:15:17.779712 amazon-ssm-agent[2076]: 2025-05-09 00:15:17 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 May 9 00:15:17.779986 amazon-ssm-agent[2076]: 2025-05-09 00:15:17 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 May 9 00:15:17.779986 amazon-ssm-agent[2076]: 2025-05-09 00:15:17 INFO [amazon-ssm-agent] Starting Core Agent May 9 00:15:17.779986 amazon-ssm-agent[2076]: 2025-05-09 00:15:17 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration May 9 00:15:17.779986 amazon-ssm-agent[2076]: 2025-05-09 00:15:17 INFO [Registrar] Starting registrar module May 9 00:15:17.779986 amazon-ssm-agent[2076]: 2025-05-09 00:15:17 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration May 9 00:15:17.779986 amazon-ssm-agent[2076]: 2025-05-09 00:15:17 INFO [EC2Identity] EC2 registration was successful. May 9 00:15:17.779986 amazon-ssm-agent[2076]: 2025-05-09 00:15:17 INFO [CredentialRefresher] credentialRefresher has started May 9 00:15:17.779986 amazon-ssm-agent[2076]: 2025-05-09 00:15:17 INFO [CredentialRefresher] Starting credentials refresher loop May 9 00:15:17.779986 amazon-ssm-agent[2076]: 2025-05-09 00:15:17 INFO EC2RoleProvider Successfully connected with instance profile role credentials May 9 00:15:17.847808 amazon-ssm-agent[2076]: 2025-05-09 00:15:17 INFO [CredentialRefresher] Next credential rotation will be in 32.133326211383334 minutes May 9 00:15:18.102728 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 9 00:15:18.109807 systemd[1]: Started sshd@0-172.31.22.98:22-139.178.68.195:39942.service - OpenSSH per-connection server daemon (139.178.68.195:39942). May 9 00:15:18.313621 sshd[2114]: Accepted publickey for core from 139.178.68.195 port 39942 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM May 9 00:15:18.315630 sshd-session[2114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:15:18.326344 systemd-logind[1885]: New session 1 of user core. May 9 00:15:18.330266 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 9 00:15:18.341552 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 9 00:15:18.356775 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 9 00:15:18.361625 systemd[1]: Starting user@500.service - User Manager for UID 500... 
May 9 00:15:18.370942 (systemd)[2118]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 9 00:15:18.494681 systemd[2118]: Queued start job for default target default.target. May 9 00:15:18.505410 systemd[2118]: Created slice app.slice - User Application Slice. May 9 00:15:18.505443 systemd[2118]: Reached target paths.target - Paths. May 9 00:15:18.505457 systemd[2118]: Reached target timers.target - Timers. May 9 00:15:18.506769 systemd[2118]: Starting dbus.socket - D-Bus User Message Bus Socket... May 9 00:15:18.519564 systemd[2118]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 9 00:15:18.519742 systemd[2118]: Reached target sockets.target - Sockets. May 9 00:15:18.519767 systemd[2118]: Reached target basic.target - Basic System. May 9 00:15:18.519811 systemd[2118]: Reached target default.target - Main User Target. May 9 00:15:18.519842 systemd[2118]: Startup finished in 140ms. May 9 00:15:18.520143 systemd[1]: Started user@500.service - User Manager for UID 500. May 9 00:15:18.528672 systemd[1]: Started session-1.scope - Session 1 of User core. May 9 00:15:18.678525 systemd[1]: Started sshd@1-172.31.22.98:22-139.178.68.195:39956.service - OpenSSH per-connection server daemon (139.178.68.195:39956). May 9 00:15:18.792763 amazon-ssm-agent[2076]: 2025-05-09 00:15:18 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process May 9 00:15:18.846966 sshd[2129]: Accepted publickey for core from 139.178.68.195 port 39956 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM May 9 00:15:18.847985 sshd-session[2129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:15:18.854093 systemd-logind[1885]: New session 2 of user core. May 9 00:15:18.861665 systemd[1]: Started session-2.scope - Session 2 of User core. 
May 9 00:15:18.893342 amazon-ssm-agent[2076]: 2025-05-09 00:15:18 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2132) started May 9 00:15:18.987912 sshd[2138]: Connection closed by 139.178.68.195 port 39956 May 9 00:15:18.989018 sshd-session[2129]: pam_unix(sshd:session): session closed for user core May 9 00:15:18.992039 systemd-logind[1885]: Session 2 logged out. Waiting for processes to exit. May 9 00:15:18.992468 systemd[1]: sshd@1-172.31.22.98:22-139.178.68.195:39956.service: Deactivated successfully. May 9 00:15:18.994144 amazon-ssm-agent[2076]: 2025-05-09 00:15:18 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds May 9 00:15:18.994633 systemd[1]: session-2.scope: Deactivated successfully. May 9 00:15:18.996250 systemd-logind[1885]: Removed session 2. May 9 00:15:19.025016 systemd[1]: Started sshd@2-172.31.22.98:22-139.178.68.195:39966.service - OpenSSH per-connection server daemon (139.178.68.195:39966). May 9 00:15:19.188880 sshd[2148]: Accepted publickey for core from 139.178.68.195 port 39966 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM May 9 00:15:19.190198 sshd-session[2148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:15:19.195790 systemd-logind[1885]: New session 3 of user core. May 9 00:15:19.202372 systemd[1]: Started session-3.scope - Session 3 of User core. May 9 00:15:19.323703 sshd[2150]: Connection closed by 139.178.68.195 port 39966 May 9 00:15:19.324730 sshd-session[2148]: pam_unix(sshd:session): session closed for user core May 9 00:15:19.328978 systemd[1]: sshd@2-172.31.22.98:22-139.178.68.195:39966.service: Deactivated successfully. May 9 00:15:19.330554 systemd[1]: session-3.scope: Deactivated successfully. May 9 00:15:19.331106 systemd-logind[1885]: Session 3 logged out. Waiting for processes to exit. May 9 00:15:19.332045 systemd-logind[1885]: Removed session 3. 
May 9 00:15:19.486558 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:15:19.488904 systemd[1]: Reached target multi-user.target - Multi-User System. May 9 00:15:19.490348 systemd[1]: Startup finished in 625ms (kernel) + 7.190s (initrd) + 7.641s (userspace) = 15.457s. May 9 00:15:19.495613 (kubelet)[2159]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 00:15:19.968781 ntpd[1876]: Listen normally on 7 eth0 [fe80::49a:4eff:fe4d:b11%2]:123 May 9 00:15:19.969359 ntpd[1876]: 9 May 00:15:19 ntpd[1876]: Listen normally on 7 eth0 [fe80::49a:4eff:fe4d:b11%2]:123 May 9 00:15:20.675219 kubelet[2159]: E0509 00:15:20.675108 2159 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 00:15:20.677416 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 00:15:20.677569 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 00:15:20.677979 systemd[1]: kubelet.service: Consumed 1.053s CPU time. May 9 00:15:24.047628 systemd-resolved[1819]: Clock change detected. Flushing caches. May 9 00:15:30.434676 systemd[1]: Started sshd@3-172.31.22.98:22-139.178.68.195:38654.service - OpenSSH per-connection server daemon (139.178.68.195:38654). May 9 00:15:30.597392 sshd[2171]: Accepted publickey for core from 139.178.68.195 port 38654 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM May 9 00:15:30.599091 sshd-session[2171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:15:30.603658 systemd-logind[1885]: New session 4 of user core. 
May 9 00:15:30.610113 systemd[1]: Started session-4.scope - Session 4 of User core. May 9 00:15:30.729476 sshd[2173]: Connection closed by 139.178.68.195 port 38654 May 9 00:15:30.730218 sshd-session[2171]: pam_unix(sshd:session): session closed for user core May 9 00:15:30.733314 systemd[1]: sshd@3-172.31.22.98:22-139.178.68.195:38654.service: Deactivated successfully. May 9 00:15:30.735044 systemd[1]: session-4.scope: Deactivated successfully. May 9 00:15:30.736183 systemd-logind[1885]: Session 4 logged out. Waiting for processes to exit. May 9 00:15:30.737120 systemd-logind[1885]: Removed session 4. May 9 00:15:30.766182 systemd[1]: Started sshd@4-172.31.22.98:22-139.178.68.195:38658.service - OpenSSH per-connection server daemon (139.178.68.195:38658). May 9 00:15:30.925581 sshd[2178]: Accepted publickey for core from 139.178.68.195 port 38658 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM May 9 00:15:30.927037 sshd-session[2178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:15:30.931177 systemd-logind[1885]: New session 5 of user core. May 9 00:15:30.940159 systemd[1]: Started session-5.scope - Session 5 of User core. May 9 00:15:31.053259 sshd[2180]: Connection closed by 139.178.68.195 port 38658 May 9 00:15:31.054164 sshd-session[2178]: pam_unix(sshd:session): session closed for user core May 9 00:15:31.058034 systemd[1]: sshd@4-172.31.22.98:22-139.178.68.195:38658.service: Deactivated successfully. May 9 00:15:31.059717 systemd[1]: session-5.scope: Deactivated successfully. May 9 00:15:31.060363 systemd-logind[1885]: Session 5 logged out. Waiting for processes to exit. May 9 00:15:31.061638 systemd-logind[1885]: Removed session 5. May 9 00:15:31.084825 systemd[1]: Started sshd@5-172.31.22.98:22-139.178.68.195:38670.service - OpenSSH per-connection server daemon (139.178.68.195:38670). 
May 9 00:15:31.253179 sshd[2185]: Accepted publickey for core from 139.178.68.195 port 38670 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM May 9 00:15:31.254563 sshd-session[2185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:15:31.259153 systemd-logind[1885]: New session 6 of user core. May 9 00:15:31.266028 systemd[1]: Started session-6.scope - Session 6 of User core. May 9 00:15:31.389429 sshd[2187]: Connection closed by 139.178.68.195 port 38670 May 9 00:15:31.390623 sshd-session[2185]: pam_unix(sshd:session): session closed for user core May 9 00:15:31.393627 systemd[1]: sshd@5-172.31.22.98:22-139.178.68.195:38670.service: Deactivated successfully. May 9 00:15:31.395680 systemd[1]: session-6.scope: Deactivated successfully. May 9 00:15:31.397166 systemd-logind[1885]: Session 6 logged out. Waiting for processes to exit. May 9 00:15:31.398328 systemd-logind[1885]: Removed session 6. May 9 00:15:31.419625 systemd[1]: Started sshd@6-172.31.22.98:22-139.178.68.195:38674.service - OpenSSH per-connection server daemon (139.178.68.195:38674). May 9 00:15:31.588262 sshd[2192]: Accepted publickey for core from 139.178.68.195 port 38674 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM May 9 00:15:31.589623 sshd-session[2192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:15:31.594698 systemd-logind[1885]: New session 7 of user core. May 9 00:15:31.609022 systemd[1]: Started session-7.scope - Session 7 of User core. 
May 9 00:15:31.721291 sudo[2195]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 9 00:15:31.721578 sudo[2195]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:15:31.733337 sudo[2195]: pam_unix(sudo:session): session closed for user root May 9 00:15:31.756279 sshd[2194]: Connection closed by 139.178.68.195 port 38674 May 9 00:15:31.757296 sshd-session[2192]: pam_unix(sshd:session): session closed for user core May 9 00:15:31.760876 systemd[1]: sshd@6-172.31.22.98:22-139.178.68.195:38674.service: Deactivated successfully. May 9 00:15:31.762914 systemd[1]: session-7.scope: Deactivated successfully. May 9 00:15:31.764100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 9 00:15:31.765493 systemd-logind[1885]: Session 7 logged out. Waiting for processes to exit. May 9 00:15:31.770075 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:15:31.771349 systemd-logind[1885]: Removed session 7. May 9 00:15:31.790248 systemd[1]: Started sshd@7-172.31.22.98:22-139.178.68.195:38686.service - OpenSSH per-connection server daemon (139.178.68.195:38686). May 9 00:15:31.964848 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:15:31.969442 sshd[2203]: Accepted publickey for core from 139.178.68.195 port 38686 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM May 9 00:15:31.969268 sshd-session[2203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:15:31.970066 (kubelet)[2210]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 00:15:31.974837 systemd-logind[1885]: New session 8 of user core. May 9 00:15:31.980142 systemd[1]: Started session-8.scope - Session 8 of User core. 
May 9 00:15:32.034292 kubelet[2210]: E0509 00:15:32.034253 2210 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 00:15:32.038337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 00:15:32.038544 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 00:15:32.082700 sudo[2219]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 9 00:15:32.083133 sudo[2219]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:15:32.086897 sudo[2219]: pam_unix(sudo:session): session closed for user root May 9 00:15:32.092393 sudo[2218]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 9 00:15:32.092797 sudo[2218]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:15:32.107341 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 9 00:15:32.138761 augenrules[2241]: No rules May 9 00:15:32.140266 systemd[1]: audit-rules.service: Deactivated successfully. May 9 00:15:32.140502 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 9 00:15:32.141741 sudo[2218]: pam_unix(sudo:session): session closed for user root May 9 00:15:32.164909 sshd[2215]: Connection closed by 139.178.68.195 port 38686 May 9 00:15:32.165461 sshd-session[2203]: pam_unix(sshd:session): session closed for user core May 9 00:15:32.168752 systemd[1]: sshd@7-172.31.22.98:22-139.178.68.195:38686.service: Deactivated successfully. May 9 00:15:32.170681 systemd[1]: session-8.scope: Deactivated successfully. May 9 00:15:32.172360 systemd-logind[1885]: Session 8 logged out. 
Waiting for processes to exit. May 9 00:15:32.173511 systemd-logind[1885]: Removed session 8. May 9 00:15:32.205151 systemd[1]: Started sshd@8-172.31.22.98:22-139.178.68.195:38696.service - OpenSSH per-connection server daemon (139.178.68.195:38696). May 9 00:15:32.368986 sshd[2249]: Accepted publickey for core from 139.178.68.195 port 38696 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM May 9 00:15:32.369771 sshd-session[2249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:15:32.375335 systemd-logind[1885]: New session 9 of user core. May 9 00:15:32.386064 systemd[1]: Started session-9.scope - Session 9 of User core. May 9 00:15:32.482637 sudo[2252]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 9 00:15:32.482986 sudo[2252]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:15:33.111233 systemd[1]: Starting docker.service - Docker Application Container Engine... May 9 00:15:33.115069 (dockerd)[2270]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 9 00:15:33.651621 dockerd[2270]: time="2025-05-09T00:15:33.651553694Z" level=info msg="Starting up" May 9 00:15:33.828007 dockerd[2270]: time="2025-05-09T00:15:33.827762685Z" level=info msg="Loading containers: start." May 9 00:15:34.040846 kernel: Initializing XFRM netlink socket May 9 00:15:34.094559 (udev-worker)[2294]: Network interface NamePolicy= disabled on kernel command line. May 9 00:15:34.155574 systemd-networkd[1817]: docker0: Link UP May 9 00:15:34.183542 dockerd[2270]: time="2025-05-09T00:15:34.183502345Z" level=info msg="Loading containers: done." 
May 9 00:15:34.203892 dockerd[2270]: time="2025-05-09T00:15:34.203836313Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 9 00:15:34.204084 dockerd[2270]: time="2025-05-09T00:15:34.203957055Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 May 9 00:15:34.204133 dockerd[2270]: time="2025-05-09T00:15:34.204111599Z" level=info msg="Daemon has completed initialization" May 9 00:15:34.248672 dockerd[2270]: time="2025-05-09T00:15:34.248547489Z" level=info msg="API listen on /run/docker.sock" May 9 00:15:34.248804 systemd[1]: Started docker.service - Docker Application Container Engine. May 9 00:15:35.618200 containerd[1903]: time="2025-05-09T00:15:35.618162376Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 9 00:15:36.220085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3511370776.mount: Deactivated successfully. 
May 9 00:15:37.509134 containerd[1903]: time="2025-05-09T00:15:37.509081361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:15:37.510716 containerd[1903]: time="2025-05-09T00:15:37.510669124Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682879" May 9 00:15:37.514865 containerd[1903]: time="2025-05-09T00:15:37.512927911Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:15:37.517945 containerd[1903]: time="2025-05-09T00:15:37.516940686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:15:37.517945 containerd[1903]: time="2025-05-09T00:15:37.517763415Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 1.899565469s" May 9 00:15:37.517945 containerd[1903]: time="2025-05-09T00:15:37.517809626Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" May 9 00:15:37.518501 containerd[1903]: time="2025-05-09T00:15:37.518482971Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 9 00:15:39.013944 containerd[1903]: time="2025-05-09T00:15:39.013886340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:15:39.016351 containerd[1903]: time="2025-05-09T00:15:39.016229702Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779589" May 9 00:15:39.020048 containerd[1903]: time="2025-05-09T00:15:39.018736308Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:15:39.023271 containerd[1903]: time="2025-05-09T00:15:39.023230754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:15:39.024059 containerd[1903]: time="2025-05-09T00:15:39.024021745Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 1.505323644s" May 9 00:15:39.024059 containerd[1903]: time="2025-05-09T00:15:39.024061347Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" May 9 00:15:39.024982 containerd[1903]: time="2025-05-09T00:15:39.024948162Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 9 00:15:40.320036 containerd[1903]: time="2025-05-09T00:15:40.319972722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:15:40.322383 containerd[1903]: time="2025-05-09T00:15:40.322316357Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169938" May 9 00:15:40.324903 containerd[1903]: time="2025-05-09T00:15:40.324829081Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:15:40.328940 containerd[1903]: time="2025-05-09T00:15:40.328874790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:15:40.329951 containerd[1903]: time="2025-05-09T00:15:40.329812590Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 1.304830022s" May 9 00:15:40.329951 containerd[1903]: time="2025-05-09T00:15:40.329855758Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" May 9 00:15:40.330596 containerd[1903]: time="2025-05-09T00:15:40.330541783Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 9 00:15:41.481156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount240963202.mount: Deactivated successfully. 
May 9 00:15:42.043939 containerd[1903]: time="2025-05-09T00:15:42.043884305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:15:42.046009 containerd[1903]: time="2025-05-09T00:15:42.045950050Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917856" May 9 00:15:42.048438 containerd[1903]: time="2025-05-09T00:15:42.048374093Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:15:42.051950 containerd[1903]: time="2025-05-09T00:15:42.051870624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:15:42.053905 containerd[1903]: time="2025-05-09T00:15:42.053852239Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 1.723266334s" May 9 00:15:42.053905 containerd[1903]: time="2025-05-09T00:15:42.053902544Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" May 9 00:15:42.054551 containerd[1903]: time="2025-05-09T00:15:42.054520617Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 9 00:15:42.288968 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 9 00:15:42.294139 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 9 00:15:42.499810 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:15:42.512284 (kubelet)[2540]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 00:15:42.559737 kubelet[2540]: E0509 00:15:42.559623 2540 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 00:15:42.562184 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 00:15:42.562386 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 00:15:42.654679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2856266483.mount: Deactivated successfully. May 9 00:15:43.662574 containerd[1903]: time="2025-05-09T00:15:43.662517092Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:15:43.664446 containerd[1903]: time="2025-05-09T00:15:43.664380463Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 9 00:15:43.667237 containerd[1903]: time="2025-05-09T00:15:43.666647179Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:15:43.671719 containerd[1903]: time="2025-05-09T00:15:43.671670612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:15:43.672873 containerd[1903]: time="2025-05-09T00:15:43.672830780Z" 
level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.618269074s" May 9 00:15:43.672873 containerd[1903]: time="2025-05-09T00:15:43.672873487Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 9 00:15:43.674378 containerd[1903]: time="2025-05-09T00:15:43.674258428Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 9 00:15:44.181316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3382285979.mount: Deactivated successfully. May 9 00:15:44.190521 containerd[1903]: time="2025-05-09T00:15:44.190440894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:15:44.192358 containerd[1903]: time="2025-05-09T00:15:44.192170839Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 9 00:15:44.195070 containerd[1903]: time="2025-05-09T00:15:44.194973667Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:15:44.206042 containerd[1903]: time="2025-05-09T00:15:44.205959469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:15:44.206972 containerd[1903]: time="2025-05-09T00:15:44.206696013Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 532.39725ms" May 9 00:15:44.206972 containerd[1903]: time="2025-05-09T00:15:44.206839176Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 9 00:15:44.207766 containerd[1903]: time="2025-05-09T00:15:44.207546452Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 9 00:15:44.818182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2599039501.mount: Deactivated successfully. May 9 00:15:47.501761 containerd[1903]: time="2025-05-09T00:15:47.501698389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:15:47.506929 containerd[1903]: time="2025-05-09T00:15:47.506708013Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" May 9 00:15:47.509525 containerd[1903]: time="2025-05-09T00:15:47.509436972Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:15:47.516180 containerd[1903]: time="2025-05-09T00:15:47.516105309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:15:47.517083 containerd[1903]: time="2025-05-09T00:15:47.516919819Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest 
\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.309343276s" May 9 00:15:47.517083 containerd[1903]: time="2025-05-09T00:15:47.516955656Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 9 00:15:47.671768 systemd[1]: systemd-hostnamed.service: Deactivated successfully. May 9 00:15:50.049054 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:15:50.055199 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:15:50.091160 systemd[1]: Reloading requested from client PID 2687 ('systemctl') (unit session-9.scope)... May 9 00:15:50.091179 systemd[1]: Reloading... May 9 00:15:50.213811 zram_generator::config[2724]: No configuration found. May 9 00:15:50.381687 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:15:50.469162 systemd[1]: Reloading finished in 377 ms. May 9 00:15:50.515984 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 9 00:15:50.516081 systemd[1]: kubelet.service: Failed with result 'signal'. May 9 00:15:50.516374 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:15:50.518329 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:15:50.731618 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:15:50.745260 (kubelet)[2790]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 00:15:50.794632 kubelet[2790]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 00:15:50.797930 kubelet[2790]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 9 00:15:50.797930 kubelet[2790]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 00:15:50.798878 kubelet[2790]: I0509 00:15:50.798077 2790 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 00:15:51.154430 kubelet[2790]: I0509 00:15:51.154300 2790 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 9 00:15:51.154430 kubelet[2790]: I0509 00:15:51.154336 2790 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 00:15:51.155204 kubelet[2790]: I0509 00:15:51.154700 2790 server.go:954] "Client rotation is on, will bootstrap in background" May 9 00:15:51.209266 kubelet[2790]: E0509 00:15:51.209182 2790 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.22.98:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.22.98:6443: connect: connection refused" logger="UnhandledError" May 9 00:15:51.209559 kubelet[2790]: I0509 00:15:51.209431 2790 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 00:15:51.232896 kubelet[2790]: E0509 00:15:51.232851 2790 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service 
runtime.v1.RuntimeService" May 9 00:15:51.233020 kubelet[2790]: I0509 00:15:51.232972 2790 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 9 00:15:51.236992 kubelet[2790]: I0509 00:15:51.236948 2790 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 9 00:15:51.242970 kubelet[2790]: I0509 00:15:51.242893 2790 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 00:15:51.243198 kubelet[2790]: I0509 00:15:51.242968 2790 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-22-98","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerSco
pe":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 9 00:15:51.245219 kubelet[2790]: I0509 00:15:51.245183 2790 topology_manager.go:138] "Creating topology manager with none policy" May 9 00:15:51.245219 kubelet[2790]: I0509 00:15:51.245221 2790 container_manager_linux.go:304] "Creating device plugin manager" May 9 00:15:51.245415 kubelet[2790]: I0509 00:15:51.245393 2790 state_mem.go:36] "Initialized new in-memory state store" May 9 00:15:51.251481 kubelet[2790]: I0509 00:15:51.251443 2790 kubelet.go:446] "Attempting to sync node with API server" May 9 00:15:51.251481 kubelet[2790]: I0509 00:15:51.251483 2790 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 00:15:51.251638 kubelet[2790]: I0509 00:15:51.251526 2790 kubelet.go:352] "Adding apiserver pod source" May 9 00:15:51.251638 kubelet[2790]: I0509 00:15:51.251544 2790 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 00:15:51.258878 kubelet[2790]: W0509 00:15:51.258100 2790 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.22.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.22.98:6443: connect: connection refused May 9 00:15:51.258878 kubelet[2790]: E0509 00:15:51.258157 2790 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.22.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.22.98:6443: connect: connection refused" logger="UnhandledError" May 9 00:15:51.258878 kubelet[2790]: W0509 00:15:51.258221 2790 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.22.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-98&limit=500&resourceVersion=0": dial tcp 172.31.22.98:6443: connect: connection refused May 9 00:15:51.258878 kubelet[2790]: E0509 00:15:51.258246 2790 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.22.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-98&limit=500&resourceVersion=0\": dial tcp 172.31.22.98:6443: connect: connection refused" logger="UnhandledError" May 9 00:15:51.258878 kubelet[2790]: I0509 00:15:51.258650 2790 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 9 00:15:51.262376 kubelet[2790]: I0509 00:15:51.262268 2790 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 00:15:51.263201 kubelet[2790]: W0509 00:15:51.263176 2790 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 9 00:15:51.265644 kubelet[2790]: I0509 00:15:51.265555 2790 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 9 00:15:51.265644 kubelet[2790]: I0509 00:15:51.265588 2790 server.go:1287] "Started kubelet" May 9 00:15:51.271417 kubelet[2790]: I0509 00:15:51.270993 2790 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 9 00:15:51.271417 kubelet[2790]: I0509 00:15:51.270987 2790 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 00:15:51.271417 kubelet[2790]: I0509 00:15:51.271345 2790 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 00:15:51.273160 kubelet[2790]: I0509 00:15:51.273134 2790 server.go:490] "Adding debug handlers to kubelet server" May 9 00:15:51.274182 kubelet[2790]: E0509 00:15:51.272645 2790 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.22.98:6443/api/v1/namespaces/default/events\": dial tcp 172.31.22.98:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-22-98.183db3a4e3afae7e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-22-98,UID:ip-172-31-22-98,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-22-98,},FirstTimestamp:2025-05-09 00:15:51.26557043 +0000 UTC m=+0.516671854,LastTimestamp:2025-05-09 00:15:51.26557043 +0000 UTC m=+0.516671854,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-22-98,}" May 9 00:15:51.279302 kubelet[2790]: I0509 00:15:51.279073 2790 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 00:15:51.280737 kubelet[2790]: I0509 00:15:51.279739 2790 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 9 00:15:51.283059 kubelet[2790]: E0509 00:15:51.282116 2790 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-22-98\" not found" May 9 00:15:51.283059 kubelet[2790]: I0509 00:15:51.282153 2790 volume_manager.go:297] "Starting Kubelet Volume Manager" May 9 00:15:51.285856 kubelet[2790]: I0509 00:15:51.285340 2790 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 9 00:15:51.285856 kubelet[2790]: I0509 00:15:51.285392 2790 reconciler.go:26] "Reconciler: start to sync state" May 9 00:15:51.286049 kubelet[2790]: W0509 00:15:51.286010 2790 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.22.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.98:6443: connect: connection refused May 9 00:15:51.286082 kubelet[2790]: E0509 00:15:51.286063 2790 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.22.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.22.98:6443: connect: connection refused" logger="UnhandledError" May 9 00:15:51.286276 kubelet[2790]: I0509 00:15:51.286260 2790 factory.go:221] Registration of the systemd container factory successfully May 9 00:15:51.286355 kubelet[2790]: I0509 00:15:51.286340 2790 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 00:15:51.288991 kubelet[2790]: E0509 00:15:51.288959 2790 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.22.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-98?timeout=10s\": dial tcp 172.31.22.98:6443: connect: connection refused" interval="200ms" May 9 00:15:51.289362 kubelet[2790]: I0509 00:15:51.289339 2790 factory.go:221] Registration of the containerd container factory successfully May 9 00:15:51.306185 kubelet[2790]: I0509 00:15:51.306096 2790 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 00:15:51.307741 kubelet[2790]: I0509 00:15:51.307393 2790 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 9 00:15:51.307741 kubelet[2790]: I0509 00:15:51.307421 2790 status_manager.go:227] "Starting to sync pod status with apiserver" May 9 00:15:51.307741 kubelet[2790]: I0509 00:15:51.307441 2790 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 9 00:15:51.307741 kubelet[2790]: I0509 00:15:51.307448 2790 kubelet.go:2388] "Starting kubelet main sync loop" May 9 00:15:51.307741 kubelet[2790]: E0509 00:15:51.307499 2790 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 00:15:51.312354 kubelet[2790]: E0509 00:15:51.312321 2790 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 9 00:15:51.317871 kubelet[2790]: W0509 00:15:51.317425 2790 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.22.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.98:6443: connect: connection refused May 9 00:15:51.317871 kubelet[2790]: E0509 00:15:51.317484 2790 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.22.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.22.98:6443: connect: connection refused" logger="UnhandledError" May 9 00:15:51.319335 kubelet[2790]: I0509 00:15:51.319315 2790 cpu_manager.go:221] "Starting CPU manager" policy="none" May 9 00:15:51.319335 kubelet[2790]: I0509 00:15:51.319331 2790 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 9 00:15:51.319458 kubelet[2790]: I0509 00:15:51.319347 2790 state_mem.go:36] "Initialized new in-memory state store" May 9 00:15:51.325122 kubelet[2790]: I0509 00:15:51.325072 2790 policy_none.go:49] "None policy: Start" May 9 00:15:51.325122 kubelet[2790]: I0509 00:15:51.325104 2790 memory_manager.go:186] "Starting memorymanager" policy="None" May 9 00:15:51.325122 kubelet[2790]: I0509 00:15:51.325118 2790 state_mem.go:35] "Initializing new in-memory state store" May 9 00:15:51.334178 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 9 00:15:51.346257 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 9 00:15:51.352198 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 9 00:15:51.356682 kubelet[2790]: I0509 00:15:51.356649 2790 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 00:15:51.357039 kubelet[2790]: I0509 00:15:51.357024 2790 eviction_manager.go:189] "Eviction manager: starting control loop" May 9 00:15:51.358546 kubelet[2790]: I0509 00:15:51.357036 2790 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 00:15:51.358546 kubelet[2790]: I0509 00:15:51.357278 2790 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 00:15:51.359532 kubelet[2790]: E0509 00:15:51.359370 2790 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 9 00:15:51.359532 kubelet[2790]: E0509 00:15:51.359408 2790 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-22-98\" not found" May 9 00:15:51.418536 systemd[1]: Created slice kubepods-burstable-pod3341dc156c580456a14d1f910220fcf6.slice - libcontainer container kubepods-burstable-pod3341dc156c580456a14d1f910220fcf6.slice. May 9 00:15:51.429994 kubelet[2790]: E0509 00:15:51.429801 2790 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-98\" not found" node="ip-172-31-22-98" May 9 00:15:51.432992 systemd[1]: Created slice kubepods-burstable-pod627d4bdbf53c294544ac0cc91cec2c2b.slice - libcontainer container kubepods-burstable-pod627d4bdbf53c294544ac0cc91cec2c2b.slice. 
May 9 00:15:51.440201 kubelet[2790]: E0509 00:15:51.440164 2790 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-98\" not found" node="ip-172-31-22-98" May 9 00:15:51.443097 systemd[1]: Created slice kubepods-burstable-pod7270f48ed7b07fc7e8f15c73c2dbb6f7.slice - libcontainer container kubepods-burstable-pod7270f48ed7b07fc7e8f15c73c2dbb6f7.slice. May 9 00:15:51.445296 kubelet[2790]: E0509 00:15:51.445268 2790 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-98\" not found" node="ip-172-31-22-98" May 9 00:15:51.459123 kubelet[2790]: I0509 00:15:51.458822 2790 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-22-98" May 9 00:15:51.459236 kubelet[2790]: E0509 00:15:51.459189 2790 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.22.98:6443/api/v1/nodes\": dial tcp 172.31.22.98:6443: connect: connection refused" node="ip-172-31-22-98" May 9 00:15:51.485890 kubelet[2790]: I0509 00:15:51.485854 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/627d4bdbf53c294544ac0cc91cec2c2b-ca-certs\") pod \"kube-apiserver-ip-172-31-22-98\" (UID: \"627d4bdbf53c294544ac0cc91cec2c2b\") " pod="kube-system/kube-apiserver-ip-172-31-22-98" May 9 00:15:51.486086 kubelet[2790]: I0509 00:15:51.486058 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/627d4bdbf53c294544ac0cc91cec2c2b-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-98\" (UID: \"627d4bdbf53c294544ac0cc91cec2c2b\") " pod="kube-system/kube-apiserver-ip-172-31-22-98" May 9 00:15:51.486086 kubelet[2790]: I0509 00:15:51.486085 2790 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7270f48ed7b07fc7e8f15c73c2dbb6f7-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-98\" (UID: \"7270f48ed7b07fc7e8f15c73c2dbb6f7\") " pod="kube-system/kube-controller-manager-ip-172-31-22-98" May 9 00:15:51.486351 kubelet[2790]: I0509 00:15:51.486102 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7270f48ed7b07fc7e8f15c73c2dbb6f7-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-98\" (UID: \"7270f48ed7b07fc7e8f15c73c2dbb6f7\") " pod="kube-system/kube-controller-manager-ip-172-31-22-98" May 9 00:15:51.486351 kubelet[2790]: I0509 00:15:51.486119 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3341dc156c580456a14d1f910220fcf6-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-98\" (UID: \"3341dc156c580456a14d1f910220fcf6\") " pod="kube-system/kube-scheduler-ip-172-31-22-98" May 9 00:15:51.486351 kubelet[2790]: I0509 00:15:51.486134 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/627d4bdbf53c294544ac0cc91cec2c2b-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-98\" (UID: \"627d4bdbf53c294544ac0cc91cec2c2b\") " pod="kube-system/kube-apiserver-ip-172-31-22-98" May 9 00:15:51.486351 kubelet[2790]: I0509 00:15:51.486148 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7270f48ed7b07fc7e8f15c73c2dbb6f7-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-98\" (UID: \"7270f48ed7b07fc7e8f15c73c2dbb6f7\") " pod="kube-system/kube-controller-manager-ip-172-31-22-98" May 9 00:15:51.486351 kubelet[2790]: I0509 00:15:51.486162 2790 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7270f48ed7b07fc7e8f15c73c2dbb6f7-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-98\" (UID: \"7270f48ed7b07fc7e8f15c73c2dbb6f7\") " pod="kube-system/kube-controller-manager-ip-172-31-22-98" May 9 00:15:51.486485 kubelet[2790]: I0509 00:15:51.486176 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7270f48ed7b07fc7e8f15c73c2dbb6f7-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-98\" (UID: \"7270f48ed7b07fc7e8f15c73c2dbb6f7\") " pod="kube-system/kube-controller-manager-ip-172-31-22-98" May 9 00:15:51.489451 kubelet[2790]: E0509 00:15:51.489394 2790 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-98?timeout=10s\": dial tcp 172.31.22.98:6443: connect: connection refused" interval="400ms" May 9 00:15:51.661058 kubelet[2790]: I0509 00:15:51.661027 2790 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-22-98" May 9 00:15:51.661419 kubelet[2790]: E0509 00:15:51.661385 2790 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.22.98:6443/api/v1/nodes\": dial tcp 172.31.22.98:6443: connect: connection refused" node="ip-172-31-22-98" May 9 00:15:51.731151 containerd[1903]: time="2025-05-09T00:15:51.731008667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-98,Uid:3341dc156c580456a14d1f910220fcf6,Namespace:kube-system,Attempt:0,}" May 9 00:15:51.741921 containerd[1903]: time="2025-05-09T00:15:51.741856104Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-98,Uid:627d4bdbf53c294544ac0cc91cec2c2b,Namespace:kube-system,Attempt:0,}" May 9 00:15:51.747168 containerd[1903]: time="2025-05-09T00:15:51.747122519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-98,Uid:7270f48ed7b07fc7e8f15c73c2dbb6f7,Namespace:kube-system,Attempt:0,}" May 9 00:15:51.890636 kubelet[2790]: E0509 00:15:51.890591 2790 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-98?timeout=10s\": dial tcp 172.31.22.98:6443: connect: connection refused" interval="800ms" May 9 00:15:52.063289 kubelet[2790]: I0509 00:15:52.063165 2790 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-22-98" May 9 00:15:52.063652 kubelet[2790]: E0509 00:15:52.063550 2790 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.22.98:6443/api/v1/nodes\": dial tcp 172.31.22.98:6443: connect: connection refused" node="ip-172-31-22-98" May 9 00:15:52.264060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3961486839.mount: Deactivated successfully. 
May 9 00:15:52.277688 containerd[1903]: time="2025-05-09T00:15:52.277633991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:15:52.286296 containerd[1903]: time="2025-05-09T00:15:52.286233462Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 9 00:15:52.288168 containerd[1903]: time="2025-05-09T00:15:52.288123515Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:15:52.290559 containerd[1903]: time="2025-05-09T00:15:52.290520024Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:15:52.294388 containerd[1903]: time="2025-05-09T00:15:52.294091155Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 9 00:15:52.296653 containerd[1903]: time="2025-05-09T00:15:52.296586975Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:15:52.299364 containerd[1903]: time="2025-05-09T00:15:52.299311414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:15:52.300350 containerd[1903]: time="2025-05-09T00:15:52.300287761Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 569.165056ms" May 9 00:15:52.301333 containerd[1903]: time="2025-05-09T00:15:52.300972332Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 9 00:15:52.303821 containerd[1903]: time="2025-05-09T00:15:52.303657871Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 561.68883ms" May 9 00:15:52.314639 containerd[1903]: time="2025-05-09T00:15:52.314510716Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 567.302093ms" May 9 00:15:52.319743 kubelet[2790]: W0509 00:15:52.319679 2790 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.22.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-98&limit=500&resourceVersion=0": dial tcp 172.31.22.98:6443: connect: connection refused May 9 00:15:52.319910 kubelet[2790]: E0509 00:15:52.319759 2790 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.22.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-98&limit=500&resourceVersion=0\": dial tcp 172.31.22.98:6443: connect: connection refused" logger="UnhandledError" May 9 
00:15:52.413289 kubelet[2790]: W0509 00:15:52.413203 2790 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.22.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.98:6443: connect: connection refused May 9 00:15:52.413289 kubelet[2790]: E0509 00:15:52.413249 2790 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.22.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.22.98:6443: connect: connection refused" logger="UnhandledError" May 9 00:15:52.551540 kubelet[2790]: W0509 00:15:52.551480 2790 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.22.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.22.98:6443: connect: connection refused May 9 00:15:52.551762 kubelet[2790]: E0509 00:15:52.551554 2790 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.22.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.22.98:6443: connect: connection refused" logger="UnhandledError" May 9 00:15:52.598511 containerd[1903]: time="2025-05-09T00:15:52.597631749Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:15:52.598511 containerd[1903]: time="2025-05-09T00:15:52.597715801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:15:52.598511 containerd[1903]: time="2025-05-09T00:15:52.597741590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:15:52.603836 containerd[1903]: time="2025-05-09T00:15:52.602071520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:15:52.611330 containerd[1903]: time="2025-05-09T00:15:52.610942885Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:15:52.611330 containerd[1903]: time="2025-05-09T00:15:52.611016249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:15:52.611330 containerd[1903]: time="2025-05-09T00:15:52.611040669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:15:52.611330 containerd[1903]: time="2025-05-09T00:15:52.611144036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:15:52.613696 containerd[1903]: time="2025-05-09T00:15:52.613525631Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:15:52.613696 containerd[1903]: time="2025-05-09T00:15:52.613586066Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:15:52.613696 containerd[1903]: time="2025-05-09T00:15:52.613605202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:15:52.615861 containerd[1903]: time="2025-05-09T00:15:52.615453068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:15:52.644133 systemd[1]: Started cri-containerd-dde6e6930183e53b51bd0cf2447ec865c546d988af36e5028335ec1d35397b85.scope - libcontainer container dde6e6930183e53b51bd0cf2447ec865c546d988af36e5028335ec1d35397b85. May 9 00:15:52.655645 systemd[1]: Started cri-containerd-bef59b4353af318c3e6db11ddec1bcac6d28a064330e4c926ee91814bc191458.scope - libcontainer container bef59b4353af318c3e6db11ddec1bcac6d28a064330e4c926ee91814bc191458. May 9 00:15:52.659767 systemd[1]: Started cri-containerd-234bd1edb338557eda32f99ba1bd68adc9a2218ef35ff29c1cb8fb14e34d3fb8.scope - libcontainer container 234bd1edb338557eda32f99ba1bd68adc9a2218ef35ff29c1cb8fb14e34d3fb8. May 9 00:15:52.692616 kubelet[2790]: E0509 00:15:52.692559 2790 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-98?timeout=10s\": dial tcp 172.31.22.98:6443: connect: connection refused" interval="1.6s" May 9 00:15:52.710652 containerd[1903]: time="2025-05-09T00:15:52.710600541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-98,Uid:3341dc156c580456a14d1f910220fcf6,Namespace:kube-system,Attempt:0,} returns sandbox id \"234bd1edb338557eda32f99ba1bd68adc9a2218ef35ff29c1cb8fb14e34d3fb8\"" May 9 00:15:52.716269 containerd[1903]: time="2025-05-09T00:15:52.716227961Z" level=info msg="CreateContainer within sandbox \"234bd1edb338557eda32f99ba1bd68adc9a2218ef35ff29c1cb8fb14e34d3fb8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 9 00:15:52.746074 containerd[1903]: time="2025-05-09T00:15:52.745933334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-98,Uid:627d4bdbf53c294544ac0cc91cec2c2b,Namespace:kube-system,Attempt:0,} returns sandbox id \"bef59b4353af318c3e6db11ddec1bcac6d28a064330e4c926ee91814bc191458\"" May 9 00:15:52.752638 
containerd[1903]: time="2025-05-09T00:15:52.751348317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-98,Uid:7270f48ed7b07fc7e8f15c73c2dbb6f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"dde6e6930183e53b51bd0cf2447ec865c546d988af36e5028335ec1d35397b85\"" May 9 00:15:52.755865 containerd[1903]: time="2025-05-09T00:15:52.755769727Z" level=info msg="CreateContainer within sandbox \"bef59b4353af318c3e6db11ddec1bcac6d28a064330e4c926ee91814bc191458\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 9 00:15:52.758573 containerd[1903]: time="2025-05-09T00:15:52.758530481Z" level=info msg="CreateContainer within sandbox \"234bd1edb338557eda32f99ba1bd68adc9a2218ef35ff29c1cb8fb14e34d3fb8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"89133a316d56804da0ffd9b5f6a9f96906b9f04e63f1dc200fc741064ae0f583\"" May 9 00:15:52.759321 containerd[1903]: time="2025-05-09T00:15:52.759281316Z" level=info msg="StartContainer for \"89133a316d56804da0ffd9b5f6a9f96906b9f04e63f1dc200fc741064ae0f583\"" May 9 00:15:52.761272 containerd[1903]: time="2025-05-09T00:15:52.761113024Z" level=info msg="CreateContainer within sandbox \"dde6e6930183e53b51bd0cf2447ec865c546d988af36e5028335ec1d35397b85\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 9 00:15:52.770035 kubelet[2790]: W0509 00:15:52.769884 2790 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.22.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.98:6443: connect: connection refused May 9 00:15:52.770035 kubelet[2790]: E0509 00:15:52.769990 2790 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.22.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
172.31.22.98:6443: connect: connection refused" logger="UnhandledError" May 9 00:15:52.794110 systemd[1]: Started cri-containerd-89133a316d56804da0ffd9b5f6a9f96906b9f04e63f1dc200fc741064ae0f583.scope - libcontainer container 89133a316d56804da0ffd9b5f6a9f96906b9f04e63f1dc200fc741064ae0f583. May 9 00:15:52.795769 containerd[1903]: time="2025-05-09T00:15:52.795697358Z" level=info msg="CreateContainer within sandbox \"bef59b4353af318c3e6db11ddec1bcac6d28a064330e4c926ee91814bc191458\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0a8ec2912a562127df8f4aab587336d1a1b8f940ccd634a95304dd30e0ebace1\"" May 9 00:15:52.796814 containerd[1903]: time="2025-05-09T00:15:52.796493776Z" level=info msg="StartContainer for \"0a8ec2912a562127df8f4aab587336d1a1b8f940ccd634a95304dd30e0ebace1\"" May 9 00:15:52.803936 containerd[1903]: time="2025-05-09T00:15:52.803884043Z" level=info msg="CreateContainer within sandbox \"dde6e6930183e53b51bd0cf2447ec865c546d988af36e5028335ec1d35397b85\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f52eb0c73a2a3c34c2285e246b98e9b2d28a8a4dfd559416e7803d9d339ccd8a\"" May 9 00:15:52.806292 containerd[1903]: time="2025-05-09T00:15:52.806258314Z" level=info msg="StartContainer for \"f52eb0c73a2a3c34c2285e246b98e9b2d28a8a4dfd559416e7803d9d339ccd8a\"" May 9 00:15:52.853050 systemd[1]: Started cri-containerd-0a8ec2912a562127df8f4aab587336d1a1b8f940ccd634a95304dd30e0ebace1.scope - libcontainer container 0a8ec2912a562127df8f4aab587336d1a1b8f940ccd634a95304dd30e0ebace1. 
May 9 00:15:52.870377 kubelet[2790]: I0509 00:15:52.869918 2790 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-22-98" May 9 00:15:52.870377 kubelet[2790]: E0509 00:15:52.870326 2790 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.22.98:6443/api/v1/nodes\": dial tcp 172.31.22.98:6443: connect: connection refused" node="ip-172-31-22-98" May 9 00:15:52.890440 systemd[1]: Started cri-containerd-f52eb0c73a2a3c34c2285e246b98e9b2d28a8a4dfd559416e7803d9d339ccd8a.scope - libcontainer container f52eb0c73a2a3c34c2285e246b98e9b2d28a8a4dfd559416e7803d9d339ccd8a. May 9 00:15:52.901575 containerd[1903]: time="2025-05-09T00:15:52.900695039Z" level=info msg="StartContainer for \"89133a316d56804da0ffd9b5f6a9f96906b9f04e63f1dc200fc741064ae0f583\" returns successfully" May 9 00:15:52.948772 containerd[1903]: time="2025-05-09T00:15:52.948716935Z" level=info msg="StartContainer for \"0a8ec2912a562127df8f4aab587336d1a1b8f940ccd634a95304dd30e0ebace1\" returns successfully" May 9 00:15:52.984668 containerd[1903]: time="2025-05-09T00:15:52.984612320Z" level=info msg="StartContainer for \"f52eb0c73a2a3c34c2285e246b98e9b2d28a8a4dfd559416e7803d9d339ccd8a\" returns successfully" May 9 00:15:53.326728 kubelet[2790]: E0509 00:15:53.326584 2790 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.22.98:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.22.98:6443: connect: connection refused" logger="UnhandledError" May 9 00:15:53.353000 kubelet[2790]: E0509 00:15:53.352968 2790 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-98\" not found" node="ip-172-31-22-98" May 9 00:15:53.354656 kubelet[2790]: E0509 00:15:53.354625 2790 kubelet.go:3196] "No 
need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-98\" not found" node="ip-172-31-22-98" May 9 00:15:53.358633 kubelet[2790]: E0509 00:15:53.358606 2790 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-98\" not found" node="ip-172-31-22-98" May 9 00:15:53.930833 kubelet[2790]: W0509 00:15:53.930667 2790 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.22.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-98&limit=500&resourceVersion=0": dial tcp 172.31.22.98:6443: connect: connection refused May 9 00:15:53.930833 kubelet[2790]: E0509 00:15:53.930762 2790 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.22.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-98&limit=500&resourceVersion=0\": dial tcp 172.31.22.98:6443: connect: connection refused" logger="UnhandledError" May 9 00:15:54.359901 kubelet[2790]: E0509 00:15:54.359648 2790 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-98\" not found" node="ip-172-31-22-98" May 9 00:15:54.360457 kubelet[2790]: E0509 00:15:54.360417 2790 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-98\" not found" node="ip-172-31-22-98" May 9 00:15:54.472467 kubelet[2790]: I0509 00:15:54.472150 2790 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-22-98" May 9 00:15:55.363899 kubelet[2790]: E0509 00:15:55.361192 2790 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-98\" not found" node="ip-172-31-22-98" May 9 00:15:55.788307 kubelet[2790]: E0509 00:15:55.788140 2790 nodelease.go:49] 
"Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-22-98\" not found" node="ip-172-31-22-98" May 9 00:15:55.973312 kubelet[2790]: I0509 00:15:55.973272 2790 kubelet_node_status.go:79] "Successfully registered node" node="ip-172-31-22-98" May 9 00:15:55.973312 kubelet[2790]: E0509 00:15:55.973314 2790 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ip-172-31-22-98\": node \"ip-172-31-22-98\" not found" May 9 00:15:55.976992 kubelet[2790]: E0509 00:15:55.976902 2790 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-22-98\" not found" May 9 00:15:56.077197 kubelet[2790]: E0509 00:15:56.077064 2790 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-22-98\" not found" May 9 00:15:56.177204 kubelet[2790]: E0509 00:15:56.177158 2790 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-22-98\" not found" May 9 00:15:56.278054 kubelet[2790]: E0509 00:15:56.278015 2790 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-22-98\" not found" May 9 00:15:56.379103 kubelet[2790]: E0509 00:15:56.378977 2790 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-22-98\" not found" May 9 00:15:56.465645 kubelet[2790]: E0509 00:15:56.465618 2790 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-98\" not found" node="ip-172-31-22-98" May 9 00:15:56.479839 kubelet[2790]: E0509 00:15:56.479767 2790 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-22-98\" not found" May 9 00:15:56.580750 kubelet[2790]: E0509 00:15:56.580700 2790 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-22-98\" not found" May 9 00:15:56.681817 
kubelet[2790]: E0509 00:15:56.681676 2790 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-22-98\" not found" May 9 00:15:56.782070 kubelet[2790]: E0509 00:15:56.782032 2790 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-22-98\" not found" May 9 00:15:56.883239 kubelet[2790]: E0509 00:15:56.883197 2790 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-22-98\" not found" May 9 00:15:56.984176 kubelet[2790]: E0509 00:15:56.984011 2790 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-22-98\" not found" May 9 00:15:57.085135 kubelet[2790]: E0509 00:15:57.085095 2790 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-22-98\" not found" May 9 00:15:57.189522 kubelet[2790]: I0509 00:15:57.189480 2790 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-22-98" May 9 00:15:57.203307 kubelet[2790]: I0509 00:15:57.203204 2790 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-22-98" May 9 00:15:57.211560 kubelet[2790]: I0509 00:15:57.211524 2790 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-22-98" May 9 00:15:57.260231 kubelet[2790]: I0509 00:15:57.260129 2790 apiserver.go:52] "Watching apiserver" May 9 00:15:57.286401 kubelet[2790]: I0509 00:15:57.286356 2790 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 9 00:15:58.203272 systemd[1]: Reloading requested from client PID 3067 ('systemctl') (unit session-9.scope)... May 9 00:15:58.203291 systemd[1]: Reloading... May 9 00:15:58.301819 zram_generator::config[3110]: No configuration found. 
May 9 00:15:58.428302 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:15:58.535163 systemd[1]: Reloading finished in 331 ms. May 9 00:15:58.578459 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:15:58.602322 systemd[1]: kubelet.service: Deactivated successfully. May 9 00:15:58.602555 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:15:58.606279 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:15:58.903349 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:15:58.916248 (kubelet)[3167]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 00:15:58.986124 kubelet[3167]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 00:15:58.986124 kubelet[3167]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 9 00:15:58.986124 kubelet[3167]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 9 00:15:58.987917 kubelet[3167]: I0509 00:15:58.986487 3167 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 00:15:58.998213 kubelet[3167]: I0509 00:15:58.998176 3167 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 9 00:15:58.998502 kubelet[3167]: I0509 00:15:58.998361 3167 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 00:15:58.999393 kubelet[3167]: I0509 00:15:58.999371 3167 server.go:954] "Client rotation is on, will bootstrap in background" May 9 00:15:59.000656 kubelet[3167]: I0509 00:15:59.000636 3167 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 9 00:15:59.004250 kubelet[3167]: I0509 00:15:59.004084 3167 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 00:15:59.007355 kubelet[3167]: E0509 00:15:59.007304 3167 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 9 00:15:59.007355 kubelet[3167]: I0509 00:15:59.007350 3167 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 9 00:15:59.009623 kubelet[3167]: I0509 00:15:59.009597 3167 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 9 00:15:59.009833 kubelet[3167]: I0509 00:15:59.009808 3167 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 00:15:59.009992 kubelet[3167]: I0509 00:15:59.009831 3167 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-22-98","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 9 00:15:59.009992 kubelet[3167]: I0509 00:15:59.009987 3167 topology_manager.go:138] "Creating topology manager with none 
policy" May 9 00:15:59.010117 kubelet[3167]: I0509 00:15:59.009996 3167 container_manager_linux.go:304] "Creating device plugin manager" May 9 00:15:59.010117 kubelet[3167]: I0509 00:15:59.010032 3167 state_mem.go:36] "Initialized new in-memory state store" May 9 00:15:59.010195 kubelet[3167]: I0509 00:15:59.010152 3167 kubelet.go:446] "Attempting to sync node with API server" May 9 00:15:59.010195 kubelet[3167]: I0509 00:15:59.010163 3167 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 00:15:59.010195 kubelet[3167]: I0509 00:15:59.010178 3167 kubelet.go:352] "Adding apiserver pod source" May 9 00:15:59.010195 kubelet[3167]: I0509 00:15:59.010188 3167 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 00:15:59.011898 kubelet[3167]: I0509 00:15:59.011882 3167 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 9 00:15:59.012761 kubelet[3167]: I0509 00:15:59.012747 3167 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 00:15:59.013806 kubelet[3167]: I0509 00:15:59.013770 3167 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 9 00:15:59.013915 kubelet[3167]: I0509 00:15:59.013899 3167 server.go:1287] "Started kubelet" May 9 00:15:59.030227 kubelet[3167]: I0509 00:15:59.030164 3167 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 00:15:59.033157 kubelet[3167]: I0509 00:15:59.033114 3167 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 9 00:15:59.038014 kubelet[3167]: I0509 00:15:59.037985 3167 server.go:490] "Adding debug handlers to kubelet server" May 9 00:15:59.040449 kubelet[3167]: I0509 00:15:59.040372 3167 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 00:15:59.040674 kubelet[3167]: I0509 00:15:59.040653 3167 server.go:243] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 00:15:59.040978 kubelet[3167]: I0509 00:15:59.040958 3167 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 9 00:15:59.041255 kubelet[3167]: I0509 00:15:59.041236 3167 volume_manager.go:297] "Starting Kubelet Volume Manager" May 9 00:15:59.041508 kubelet[3167]: E0509 00:15:59.041486 3167 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-22-98\" not found" May 9 00:15:59.042429 kubelet[3167]: I0509 00:15:59.042395 3167 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 9 00:15:59.042557 kubelet[3167]: I0509 00:15:59.042543 3167 reconciler.go:26] "Reconciler: start to sync state" May 9 00:15:59.049808 kubelet[3167]: I0509 00:15:59.047999 3167 factory.go:221] Registration of the systemd container factory successfully May 9 00:15:59.049808 kubelet[3167]: I0509 00:15:59.048122 3167 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 00:15:59.051099 kubelet[3167]: E0509 00:15:59.051079 3167 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 9 00:15:59.052099 kubelet[3167]: I0509 00:15:59.052084 3167 factory.go:221] Registration of the containerd container factory successfully May 9 00:15:59.060714 kubelet[3167]: I0509 00:15:59.060666 3167 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 00:15:59.064811 kubelet[3167]: I0509 00:15:59.062022 3167 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 9 00:15:59.064811 kubelet[3167]: I0509 00:15:59.062060 3167 status_manager.go:227] "Starting to sync pod status with apiserver" May 9 00:15:59.064811 kubelet[3167]: I0509 00:15:59.062083 3167 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 9 00:15:59.064811 kubelet[3167]: I0509 00:15:59.062092 3167 kubelet.go:2388] "Starting kubelet main sync loop" May 9 00:15:59.064811 kubelet[3167]: E0509 00:15:59.062141 3167 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 00:15:59.126569 kubelet[3167]: I0509 00:15:59.126540 3167 cpu_manager.go:221] "Starting CPU manager" policy="none" May 9 00:15:59.126569 kubelet[3167]: I0509 00:15:59.126557 3167 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 9 00:15:59.126569 kubelet[3167]: I0509 00:15:59.126579 3167 state_mem.go:36] "Initialized new in-memory state store" May 9 00:15:59.126988 kubelet[3167]: I0509 00:15:59.126962 3167 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 9 00:15:59.127124 kubelet[3167]: I0509 00:15:59.126982 3167 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 9 00:15:59.127124 kubelet[3167]: I0509 00:15:59.127010 3167 policy_none.go:49] "None policy: Start" May 9 00:15:59.127124 kubelet[3167]: I0509 00:15:59.127023 3167 memory_manager.go:186] "Starting memorymanager" policy="None" May 9 00:15:59.127124 kubelet[3167]: I0509 00:15:59.127037 3167 state_mem.go:35] "Initializing new in-memory state store" May 9 00:15:59.127499 kubelet[3167]: I0509 00:15:59.127473 3167 state_mem.go:75] "Updated machine memory state" May 9 00:15:59.137107 kubelet[3167]: I0509 00:15:59.136260 3167 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 00:15:59.137107 kubelet[3167]: I0509 00:15:59.136452 
3167 eviction_manager.go:189] "Eviction manager: starting control loop" May 9 00:15:59.137107 kubelet[3167]: I0509 00:15:59.136465 3167 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 00:15:59.137107 kubelet[3167]: I0509 00:15:59.136893 3167 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 00:15:59.139064 kubelet[3167]: E0509 00:15:59.139045 3167 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 9 00:15:59.164712 kubelet[3167]: I0509 00:15:59.164601 3167 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-22-98" May 9 00:15:59.167649 kubelet[3167]: I0509 00:15:59.166970 3167 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-22-98" May 9 00:15:59.168717 kubelet[3167]: I0509 00:15:59.168541 3167 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-22-98" May 9 00:15:59.177340 kubelet[3167]: E0509 00:15:59.177306 3167 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-22-98\" already exists" pod="kube-system/kube-apiserver-ip-172-31-22-98" May 9 00:15:59.177851 kubelet[3167]: E0509 00:15:59.177773 3167 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-22-98\" already exists" pod="kube-system/kube-scheduler-ip-172-31-22-98" May 9 00:15:59.178028 kubelet[3167]: E0509 00:15:59.178013 3167 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-22-98\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-22-98" May 9 00:15:59.216445 sudo[3198]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 9 00:15:59.216753 sudo[3198]: pam_unix(sudo:session): session opened for user 
root(uid=0) by core(uid=0) May 9 00:15:59.241614 kubelet[3167]: I0509 00:15:59.239437 3167 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-22-98" May 9 00:15:59.244413 kubelet[3167]: I0509 00:15:59.244319 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/627d4bdbf53c294544ac0cc91cec2c2b-ca-certs\") pod \"kube-apiserver-ip-172-31-22-98\" (UID: \"627d4bdbf53c294544ac0cc91cec2c2b\") " pod="kube-system/kube-apiserver-ip-172-31-22-98" May 9 00:15:59.244680 kubelet[3167]: I0509 00:15:59.244609 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/627d4bdbf53c294544ac0cc91cec2c2b-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-98\" (UID: \"627d4bdbf53c294544ac0cc91cec2c2b\") " pod="kube-system/kube-apiserver-ip-172-31-22-98" May 9 00:15:59.244815 kubelet[3167]: I0509 00:15:59.244771 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7270f48ed7b07fc7e8f15c73c2dbb6f7-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-98\" (UID: \"7270f48ed7b07fc7e8f15c73c2dbb6f7\") " pod="kube-system/kube-controller-manager-ip-172-31-22-98" May 9 00:15:59.245066 kubelet[3167]: I0509 00:15:59.244942 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7270f48ed7b07fc7e8f15c73c2dbb6f7-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-98\" (UID: \"7270f48ed7b07fc7e8f15c73c2dbb6f7\") " pod="kube-system/kube-controller-manager-ip-172-31-22-98" May 9 00:15:59.245066 kubelet[3167]: I0509 00:15:59.245024 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/7270f48ed7b07fc7e8f15c73c2dbb6f7-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-98\" (UID: \"7270f48ed7b07fc7e8f15c73c2dbb6f7\") " pod="kube-system/kube-controller-manager-ip-172-31-22-98" May 9 00:15:59.245424 kubelet[3167]: I0509 00:15:59.245228 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7270f48ed7b07fc7e8f15c73c2dbb6f7-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-98\" (UID: \"7270f48ed7b07fc7e8f15c73c2dbb6f7\") " pod="kube-system/kube-controller-manager-ip-172-31-22-98" May 9 00:15:59.245424 kubelet[3167]: I0509 00:15:59.245298 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3341dc156c580456a14d1f910220fcf6-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-98\" (UID: \"3341dc156c580456a14d1f910220fcf6\") " pod="kube-system/kube-scheduler-ip-172-31-22-98" May 9 00:15:59.245424 kubelet[3167]: I0509 00:15:59.245324 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/627d4bdbf53c294544ac0cc91cec2c2b-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-98\" (UID: \"627d4bdbf53c294544ac0cc91cec2c2b\") " pod="kube-system/kube-apiserver-ip-172-31-22-98" May 9 00:15:59.245424 kubelet[3167]: I0509 00:15:59.245378 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7270f48ed7b07fc7e8f15c73c2dbb6f7-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-98\" (UID: \"7270f48ed7b07fc7e8f15c73c2dbb6f7\") " pod="kube-system/kube-controller-manager-ip-172-31-22-98" May 9 00:15:59.251431 kubelet[3167]: I0509 00:15:59.251209 3167 kubelet_node_status.go:125] "Node was previously registered" 
node="ip-172-31-22-98" May 9 00:15:59.251431 kubelet[3167]: I0509 00:15:59.251306 3167 kubelet_node_status.go:79] "Successfully registered node" node="ip-172-31-22-98" May 9 00:16:00.012134 kubelet[3167]: I0509 00:16:00.011529 3167 apiserver.go:52] "Watching apiserver" May 9 00:16:00.043591 kubelet[3167]: I0509 00:16:00.042929 3167 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 9 00:16:00.089621 kubelet[3167]: I0509 00:16:00.089525 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-22-98" podStartSLOduration=3.089480377 podStartE2EDuration="3.089480377s" podCreationTimestamp="2025-05-09 00:15:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:16:00.086113853 +0000 UTC m=+1.160878250" watchObservedRunningTime="2025-05-09 00:16:00.089480377 +0000 UTC m=+1.164244769" May 9 00:16:00.090192 sudo[3198]: pam_unix(sudo:session): session closed for user root May 9 00:16:00.111632 kubelet[3167]: I0509 00:16:00.111592 3167 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-22-98" May 9 00:16:00.112140 kubelet[3167]: I0509 00:16:00.112110 3167 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-22-98" May 9 00:16:00.126212 kubelet[3167]: E0509 00:16:00.126165 3167 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-22-98\" already exists" pod="kube-system/kube-scheduler-ip-172-31-22-98" May 9 00:16:00.128052 kubelet[3167]: E0509 00:16:00.128000 3167 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-22-98\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-22-98" May 9 00:16:00.135223 kubelet[3167]: I0509 00:16:00.135159 3167 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-22-98" podStartSLOduration=3.135136152 podStartE2EDuration="3.135136152s" podCreationTimestamp="2025-05-09 00:15:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:16:00.10908333 +0000 UTC m=+1.183847801" watchObservedRunningTime="2025-05-09 00:16:00.135136152 +0000 UTC m=+1.209900570" May 9 00:16:00.157020 kubelet[3167]: I0509 00:16:00.156563 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-22-98" podStartSLOduration=3.156541478 podStartE2EDuration="3.156541478s" podCreationTimestamp="2025-05-09 00:15:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:16:00.136469661 +0000 UTC m=+1.211234068" watchObservedRunningTime="2025-05-09 00:16:00.156541478 +0000 UTC m=+1.231305876" May 9 00:16:02.124395 update_engine[1887]: I20250509 00:16:02.117276 1887 update_attempter.cc:509] Updating boot flags... May 9 00:16:02.919319 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3234) May 9 00:16:03.249811 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3237) May 9 00:16:03.504815 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3237) May 9 00:16:04.164519 sudo[2252]: pam_unix(sudo:session): session closed for user root May 9 00:16:04.188342 sshd[2251]: Connection closed by 139.178.68.195 port 38696 May 9 00:16:04.189716 sshd-session[2249]: pam_unix(sshd:session): session closed for user core May 9 00:16:04.197471 systemd[1]: sshd@8-172.31.22.98:22-139.178.68.195:38696.service: Deactivated successfully. May 9 00:16:04.202219 systemd[1]: session-9.scope: Deactivated successfully. 
May 9 00:16:04.202435 systemd[1]: session-9.scope: Consumed 4.928s CPU time, 135.5M memory peak, 0B memory swap peak. May 9 00:16:04.204569 systemd-logind[1885]: Session 9 logged out. Waiting for processes to exit. May 9 00:16:04.206353 systemd-logind[1885]: Removed session 9. May 9 00:16:04.257880 kubelet[3167]: I0509 00:16:04.257843 3167 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 9 00:16:04.259150 containerd[1903]: time="2025-05-09T00:16:04.258641616Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 9 00:16:04.259552 kubelet[3167]: I0509 00:16:04.258940 3167 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 9 00:16:05.333757 systemd[1]: Created slice kubepods-besteffort-podb3e558bf_2a9d_4dae_adf9_ab1b5be8d4dd.slice - libcontainer container kubepods-besteffort-podb3e558bf_2a9d_4dae_adf9_ab1b5be8d4dd.slice. May 9 00:16:05.346342 systemd[1]: Created slice kubepods-burstable-pod6b9eb4bb_3b6d_40c9_ac68_c052729f1705.slice - libcontainer container kubepods-burstable-pod6b9eb4bb_3b6d_40c9_ac68_c052729f1705.slice. 
May 9 00:16:05.366979 kubelet[3167]: I0509 00:16:05.366501 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-etc-cni-netd\") pod \"cilium-wmbg9\" (UID: \"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\") " pod="kube-system/cilium-wmbg9" May 9 00:16:05.366979 kubelet[3167]: I0509 00:16:05.366547 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-cilium-run\") pod \"cilium-wmbg9\" (UID: \"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\") " pod="kube-system/cilium-wmbg9" May 9 00:16:05.366979 kubelet[3167]: I0509 00:16:05.366570 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-clustermesh-secrets\") pod \"cilium-wmbg9\" (UID: \"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\") " pod="kube-system/cilium-wmbg9" May 9 00:16:05.366979 kubelet[3167]: I0509 00:16:05.366589 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-277wr\" (UniqueName: \"kubernetes.io/projected/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-kube-api-access-277wr\") pod \"cilium-wmbg9\" (UID: \"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\") " pod="kube-system/cilium-wmbg9" May 9 00:16:05.366979 kubelet[3167]: I0509 00:16:05.366607 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-hostproc\") pod \"cilium-wmbg9\" (UID: \"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\") " pod="kube-system/cilium-wmbg9" May 9 00:16:05.366979 kubelet[3167]: I0509 00:16:05.366622 3167 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-lib-modules\") pod \"cilium-wmbg9\" (UID: \"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\") " pod="kube-system/cilium-wmbg9" May 9 00:16:05.368315 kubelet[3167]: I0509 00:16:05.366644 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-xtables-lock\") pod \"cilium-wmbg9\" (UID: \"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\") " pod="kube-system/cilium-wmbg9" May 9 00:16:05.368315 kubelet[3167]: I0509 00:16:05.366771 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-cilium-config-path\") pod \"cilium-wmbg9\" (UID: \"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\") " pod="kube-system/cilium-wmbg9" May 9 00:16:05.368315 kubelet[3167]: I0509 00:16:05.366912 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b3e558bf-2a9d-4dae-adf9-ab1b5be8d4dd-kube-proxy\") pod \"kube-proxy-86znc\" (UID: \"b3e558bf-2a9d-4dae-adf9-ab1b5be8d4dd\") " pod="kube-system/kube-proxy-86znc" May 9 00:16:05.368315 kubelet[3167]: I0509 00:16:05.367775 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5clp\" (UniqueName: \"kubernetes.io/projected/b3e558bf-2a9d-4dae-adf9-ab1b5be8d4dd-kube-api-access-l5clp\") pod \"kube-proxy-86znc\" (UID: \"b3e558bf-2a9d-4dae-adf9-ab1b5be8d4dd\") " pod="kube-system/kube-proxy-86znc" May 9 00:16:05.368315 kubelet[3167]: I0509 00:16:05.367845 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-bpf-maps\") pod \"cilium-wmbg9\" (UID: \"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\") " pod="kube-system/cilium-wmbg9" May 9 00:16:05.368315 kubelet[3167]: I0509 00:16:05.367886 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-cni-path\") pod \"cilium-wmbg9\" (UID: \"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\") " pod="kube-system/cilium-wmbg9" May 9 00:16:05.368561 kubelet[3167]: I0509 00:16:05.367914 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-host-proc-sys-net\") pod \"cilium-wmbg9\" (UID: \"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\") " pod="kube-system/cilium-wmbg9" May 9 00:16:05.368561 kubelet[3167]: I0509 00:16:05.367958 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-host-proc-sys-kernel\") pod \"cilium-wmbg9\" (UID: \"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\") " pod="kube-system/cilium-wmbg9" May 9 00:16:05.368561 kubelet[3167]: I0509 00:16:05.367984 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b3e558bf-2a9d-4dae-adf9-ab1b5be8d4dd-xtables-lock\") pod \"kube-proxy-86znc\" (UID: \"b3e558bf-2a9d-4dae-adf9-ab1b5be8d4dd\") " pod="kube-system/kube-proxy-86znc" May 9 00:16:05.368561 kubelet[3167]: I0509 00:16:05.368006 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-cilium-cgroup\") pod \"cilium-wmbg9\" (UID: 
\"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\") " pod="kube-system/cilium-wmbg9" May 9 00:16:05.368561 kubelet[3167]: I0509 00:16:05.368042 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-hubble-tls\") pod \"cilium-wmbg9\" (UID: \"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\") " pod="kube-system/cilium-wmbg9" May 9 00:16:05.368561 kubelet[3167]: I0509 00:16:05.368063 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b3e558bf-2a9d-4dae-adf9-ab1b5be8d4dd-lib-modules\") pod \"kube-proxy-86znc\" (UID: \"b3e558bf-2a9d-4dae-adf9-ab1b5be8d4dd\") " pod="kube-system/kube-proxy-86znc" May 9 00:16:05.447323 systemd[1]: Created slice kubepods-besteffort-pod1f69f19e_9981_46c3_98c4_f7093b58bb75.slice - libcontainer container kubepods-besteffort-pod1f69f19e_9981_46c3_98c4_f7093b58bb75.slice. 
May 9 00:16:05.569732 kubelet[3167]: I0509 00:16:05.569675 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1f69f19e-9981-46c3-98c4-f7093b58bb75-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-k9xv8\" (UID: \"1f69f19e-9981-46c3-98c4-f7093b58bb75\") " pod="kube-system/cilium-operator-6c4d7847fc-k9xv8" May 9 00:16:05.569732 kubelet[3167]: I0509 00:16:05.569729 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brh4g\" (UniqueName: \"kubernetes.io/projected/1f69f19e-9981-46c3-98c4-f7093b58bb75-kube-api-access-brh4g\") pod \"cilium-operator-6c4d7847fc-k9xv8\" (UID: \"1f69f19e-9981-46c3-98c4-f7093b58bb75\") " pod="kube-system/cilium-operator-6c4d7847fc-k9xv8" May 9 00:16:05.643839 containerd[1903]: time="2025-05-09T00:16:05.643773861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-86znc,Uid:b3e558bf-2a9d-4dae-adf9-ab1b5be8d4dd,Namespace:kube-system,Attempt:0,}" May 9 00:16:05.654160 containerd[1903]: time="2025-05-09T00:16:05.654091466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wmbg9,Uid:6b9eb4bb-3b6d-40c9-ac68-c052729f1705,Namespace:kube-system,Attempt:0,}" May 9 00:16:05.696294 containerd[1903]: time="2025-05-09T00:16:05.695574011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:16:05.698309 containerd[1903]: time="2025-05-09T00:16:05.698173092Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:16:05.698644 containerd[1903]: time="2025-05-09T00:16:05.698288693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:16:05.699824 containerd[1903]: time="2025-05-09T00:16:05.698897348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:16:05.700286 containerd[1903]: time="2025-05-09T00:16:05.700182278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:16:05.700286 containerd[1903]: time="2025-05-09T00:16:05.700243824Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:16:05.700532 containerd[1903]: time="2025-05-09T00:16:05.700268410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:16:05.700532 containerd[1903]: time="2025-05-09T00:16:05.700375023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:16:05.737206 systemd[1]: Started cri-containerd-77836ba27b4d3474b85fdb89b0f795d4ad526904cd600ebcae41f6c65d7f520e.scope - libcontainer container 77836ba27b4d3474b85fdb89b0f795d4ad526904cd600ebcae41f6c65d7f520e. May 9 00:16:05.740740 systemd[1]: Started cri-containerd-f78eacdbe7f383f1c9830d0b89b68c53632563a9ed7a02268df69606927b4d1c.scope - libcontainer container f78eacdbe7f383f1c9830d0b89b68c53632563a9ed7a02268df69606927b4d1c. 
May 9 00:16:05.752838 containerd[1903]: time="2025-05-09T00:16:05.752581370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-k9xv8,Uid:1f69f19e-9981-46c3-98c4-f7093b58bb75,Namespace:kube-system,Attempt:0,}" May 9 00:16:05.792845 containerd[1903]: time="2025-05-09T00:16:05.792732196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-86znc,Uid:b3e558bf-2a9d-4dae-adf9-ab1b5be8d4dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"f78eacdbe7f383f1c9830d0b89b68c53632563a9ed7a02268df69606927b4d1c\"" May 9 00:16:05.799224 containerd[1903]: time="2025-05-09T00:16:05.799073780Z" level=info msg="CreateContainer within sandbox \"f78eacdbe7f383f1c9830d0b89b68c53632563a9ed7a02268df69606927b4d1c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 9 00:16:05.813443 containerd[1903]: time="2025-05-09T00:16:05.813308330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:16:05.813793 containerd[1903]: time="2025-05-09T00:16:05.813650172Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:16:05.813793 containerd[1903]: time="2025-05-09T00:16:05.813718177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:16:05.814206 containerd[1903]: time="2025-05-09T00:16:05.814100602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:16:05.817056 containerd[1903]: time="2025-05-09T00:16:05.817019722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wmbg9,Uid:6b9eb4bb-3b6d-40c9-ac68-c052729f1705,Namespace:kube-system,Attempt:0,} returns sandbox id \"77836ba27b4d3474b85fdb89b0f795d4ad526904cd600ebcae41f6c65d7f520e\"" May 9 00:16:05.821565 containerd[1903]: time="2025-05-09T00:16:05.821252950Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 9 00:16:05.831950 containerd[1903]: time="2025-05-09T00:16:05.831768216Z" level=info msg="CreateContainer within sandbox \"f78eacdbe7f383f1c9830d0b89b68c53632563a9ed7a02268df69606927b4d1c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"722f1e66cfbfd8ccc6d253f8afdcdc5eb385f5b7f737910295d5ed2de84a4126\"" May 9 00:16:05.833341 containerd[1903]: time="2025-05-09T00:16:05.833309291Z" level=info msg="StartContainer for \"722f1e66cfbfd8ccc6d253f8afdcdc5eb385f5b7f737910295d5ed2de84a4126\"" May 9 00:16:05.844048 systemd[1]: Started cri-containerd-b3a11354a8f108affb9abce8fd73e985bd58e7e71e3bb0377f9c20d0bfe508e4.scope - libcontainer container b3a11354a8f108affb9abce8fd73e985bd58e7e71e3bb0377f9c20d0bfe508e4. May 9 00:16:05.885054 systemd[1]: Started cri-containerd-722f1e66cfbfd8ccc6d253f8afdcdc5eb385f5b7f737910295d5ed2de84a4126.scope - libcontainer container 722f1e66cfbfd8ccc6d253f8afdcdc5eb385f5b7f737910295d5ed2de84a4126. 
May 9 00:16:05.926109 containerd[1903]: time="2025-05-09T00:16:05.925589716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-k9xv8,Uid:1f69f19e-9981-46c3-98c4-f7093b58bb75,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3a11354a8f108affb9abce8fd73e985bd58e7e71e3bb0377f9c20d0bfe508e4\"" May 9 00:16:05.943814 containerd[1903]: time="2025-05-09T00:16:05.943752486Z" level=info msg="StartContainer for \"722f1e66cfbfd8ccc6d253f8afdcdc5eb385f5b7f737910295d5ed2de84a4126\" returns successfully" May 9 00:16:07.125351 kubelet[3167]: I0509 00:16:07.124511 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-86znc" podStartSLOduration=2.124448319 podStartE2EDuration="2.124448319s" podCreationTimestamp="2025-05-09 00:16:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:16:06.201779592 +0000 UTC m=+7.276543991" watchObservedRunningTime="2025-05-09 00:16:07.124448319 +0000 UTC m=+8.199212719" May 9 00:16:12.844974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3742326896.mount: Deactivated successfully. 
May 9 00:16:15.437526 containerd[1903]: time="2025-05-09T00:16:15.437345301Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:16:15.439238 containerd[1903]: time="2025-05-09T00:16:15.439158617Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 9 00:16:15.447394 containerd[1903]: time="2025-05-09T00:16:15.447327326Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:16:15.449299 containerd[1903]: time="2025-05-09T00:16:15.448694574Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.627397921s" May 9 00:16:15.449299 containerd[1903]: time="2025-05-09T00:16:15.448731428Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 9 00:16:15.450481 containerd[1903]: time="2025-05-09T00:16:15.450446407Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 9 00:16:15.458005 containerd[1903]: time="2025-05-09T00:16:15.453754543Z" level=info msg="CreateContainer within sandbox \"77836ba27b4d3474b85fdb89b0f795d4ad526904cd600ebcae41f6c65d7f520e\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 9 00:16:15.534747 containerd[1903]: time="2025-05-09T00:16:15.534563550Z" level=info msg="CreateContainer within sandbox \"77836ba27b4d3474b85fdb89b0f795d4ad526904cd600ebcae41f6c65d7f520e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"670dfc9d6903321ba8c04e3035e0f3d94c6097316bfc32cfc12631dbd730df18\"" May 9 00:16:15.537658 containerd[1903]: time="2025-05-09T00:16:15.537614161Z" level=info msg="StartContainer for \"670dfc9d6903321ba8c04e3035e0f3d94c6097316bfc32cfc12631dbd730df18\"" May 9 00:16:15.873013 systemd[1]: Started cri-containerd-670dfc9d6903321ba8c04e3035e0f3d94c6097316bfc32cfc12631dbd730df18.scope - libcontainer container 670dfc9d6903321ba8c04e3035e0f3d94c6097316bfc32cfc12631dbd730df18. May 9 00:16:15.915673 containerd[1903]: time="2025-05-09T00:16:15.915623222Z" level=info msg="StartContainer for \"670dfc9d6903321ba8c04e3035e0f3d94c6097316bfc32cfc12631dbd730df18\" returns successfully" May 9 00:16:15.929258 systemd[1]: cri-containerd-670dfc9d6903321ba8c04e3035e0f3d94c6097316bfc32cfc12631dbd730df18.scope: Deactivated successfully. 
May 9 00:16:16.181382 containerd[1903]: time="2025-05-09T00:16:16.156767096Z" level=info msg="shim disconnected" id=670dfc9d6903321ba8c04e3035e0f3d94c6097316bfc32cfc12631dbd730df18 namespace=k8s.io May 9 00:16:16.181382 containerd[1903]: time="2025-05-09T00:16:16.181055041Z" level=warning msg="cleaning up after shim disconnected" id=670dfc9d6903321ba8c04e3035e0f3d94c6097316bfc32cfc12631dbd730df18 namespace=k8s.io May 9 00:16:16.181382 containerd[1903]: time="2025-05-09T00:16:16.181075440Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:16:16.226862 containerd[1903]: time="2025-05-09T00:16:16.226814699Z" level=info msg="CreateContainer within sandbox \"77836ba27b4d3474b85fdb89b0f795d4ad526904cd600ebcae41f6c65d7f520e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 9 00:16:16.242032 containerd[1903]: time="2025-05-09T00:16:16.241976005Z" level=info msg="CreateContainer within sandbox \"77836ba27b4d3474b85fdb89b0f795d4ad526904cd600ebcae41f6c65d7f520e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"80659a58384ab8cd4ad482c738c6d8e23eb1cdd29fb99998b8e1525288218751\"" May 9 00:16:16.243214 containerd[1903]: time="2025-05-09T00:16:16.243157267Z" level=info msg="StartContainer for \"80659a58384ab8cd4ad482c738c6d8e23eb1cdd29fb99998b8e1525288218751\"" May 9 00:16:16.287020 systemd[1]: Started cri-containerd-80659a58384ab8cd4ad482c738c6d8e23eb1cdd29fb99998b8e1525288218751.scope - libcontainer container 80659a58384ab8cd4ad482c738c6d8e23eb1cdd29fb99998b8e1525288218751. May 9 00:16:16.340399 containerd[1903]: time="2025-05-09T00:16:16.340206616Z" level=info msg="StartContainer for \"80659a58384ab8cd4ad482c738c6d8e23eb1cdd29fb99998b8e1525288218751\" returns successfully" May 9 00:16:16.351857 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 9 00:16:16.352310 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
May 9 00:16:16.352391 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 9 00:16:16.359125 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 9 00:16:16.359422 systemd[1]: cri-containerd-80659a58384ab8cd4ad482c738c6d8e23eb1cdd29fb99998b8e1525288218751.scope: Deactivated successfully. May 9 00:16:16.405274 containerd[1903]: time="2025-05-09T00:16:16.405223068Z" level=info msg="shim disconnected" id=80659a58384ab8cd4ad482c738c6d8e23eb1cdd29fb99998b8e1525288218751 namespace=k8s.io May 9 00:16:16.405274 containerd[1903]: time="2025-05-09T00:16:16.405270229Z" level=warning msg="cleaning up after shim disconnected" id=80659a58384ab8cd4ad482c738c6d8e23eb1cdd29fb99998b8e1525288218751 namespace=k8s.io May 9 00:16:16.405274 containerd[1903]: time="2025-05-09T00:16:16.405278453Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:16:16.410421 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 9 00:16:16.520104 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-670dfc9d6903321ba8c04e3035e0f3d94c6097316bfc32cfc12631dbd730df18-rootfs.mount: Deactivated successfully. May 9 00:16:16.778975 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3700386800.mount: Deactivated successfully. May 9 00:16:17.235696 containerd[1903]: time="2025-05-09T00:16:17.235531957Z" level=info msg="CreateContainer within sandbox \"77836ba27b4d3474b85fdb89b0f795d4ad526904cd600ebcae41f6c65d7f520e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 9 00:16:17.281604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2824945324.mount: Deactivated successfully. 
May 9 00:16:17.290216 containerd[1903]: time="2025-05-09T00:16:17.289563345Z" level=info msg="CreateContainer within sandbox \"77836ba27b4d3474b85fdb89b0f795d4ad526904cd600ebcae41f6c65d7f520e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"675365a0cf93cff5de89e8315107ca85b184de82f1edef62fc72617dd0323c52\"" May 9 00:16:17.291836 containerd[1903]: time="2025-05-09T00:16:17.290464190Z" level=info msg="StartContainer for \"675365a0cf93cff5de89e8315107ca85b184de82f1edef62fc72617dd0323c52\"" May 9 00:16:17.342220 systemd[1]: Started cri-containerd-675365a0cf93cff5de89e8315107ca85b184de82f1edef62fc72617dd0323c52.scope - libcontainer container 675365a0cf93cff5de89e8315107ca85b184de82f1edef62fc72617dd0323c52. May 9 00:16:17.397849 containerd[1903]: time="2025-05-09T00:16:17.397778351Z" level=info msg="StartContainer for \"675365a0cf93cff5de89e8315107ca85b184de82f1edef62fc72617dd0323c52\" returns successfully" May 9 00:16:17.409509 systemd[1]: cri-containerd-675365a0cf93cff5de89e8315107ca85b184de82f1edef62fc72617dd0323c52.scope: Deactivated successfully. 
May 9 00:16:17.485973 containerd[1903]: time="2025-05-09T00:16:17.485630438Z" level=info msg="shim disconnected" id=675365a0cf93cff5de89e8315107ca85b184de82f1edef62fc72617dd0323c52 namespace=k8s.io May 9 00:16:17.485973 containerd[1903]: time="2025-05-09T00:16:17.485694346Z" level=warning msg="cleaning up after shim disconnected" id=675365a0cf93cff5de89e8315107ca85b184de82f1edef62fc72617dd0323c52 namespace=k8s.io May 9 00:16:17.485973 containerd[1903]: time="2025-05-09T00:16:17.485708825Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:16:17.947010 containerd[1903]: time="2025-05-09T00:16:17.946955531Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:16:17.949092 containerd[1903]: time="2025-05-09T00:16:17.948894232Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 9 00:16:17.952805 containerd[1903]: time="2025-05-09T00:16:17.951211802Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:16:17.954473 containerd[1903]: time="2025-05-09T00:16:17.954442182Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.503955132s" May 9 00:16:17.954585 containerd[1903]: time="2025-05-09T00:16:17.954568763Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 9 00:16:17.959866 containerd[1903]: time="2025-05-09T00:16:17.959835371Z" level=info msg="CreateContainer within sandbox \"b3a11354a8f108affb9abce8fd73e985bd58e7e71e3bb0377f9c20d0bfe508e4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 9 00:16:17.985625 containerd[1903]: time="2025-05-09T00:16:17.985576976Z" level=info msg="CreateContainer within sandbox \"b3a11354a8f108affb9abce8fd73e985bd58e7e71e3bb0377f9c20d0bfe508e4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7964b74481fe5a6723f91cb99200d4bd96bd0c484c3b471ec9e7788536b11ebd\"" May 9 00:16:17.986468 containerd[1903]: time="2025-05-09T00:16:17.986299254Z" level=info msg="StartContainer for \"7964b74481fe5a6723f91cb99200d4bd96bd0c484c3b471ec9e7788536b11ebd\"" May 9 00:16:18.027299 systemd[1]: Started cri-containerd-7964b74481fe5a6723f91cb99200d4bd96bd0c484c3b471ec9e7788536b11ebd.scope - libcontainer container 7964b74481fe5a6723f91cb99200d4bd96bd0c484c3b471ec9e7788536b11ebd. 
May 9 00:16:18.061117 containerd[1903]: time="2025-05-09T00:16:18.061071654Z" level=info msg="StartContainer for \"7964b74481fe5a6723f91cb99200d4bd96bd0c484c3b471ec9e7788536b11ebd\" returns successfully" May 9 00:16:18.240176 containerd[1903]: time="2025-05-09T00:16:18.240065905Z" level=info msg="CreateContainer within sandbox \"77836ba27b4d3474b85fdb89b0f795d4ad526904cd600ebcae41f6c65d7f520e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 9 00:16:18.318912 containerd[1903]: time="2025-05-09T00:16:18.318857975Z" level=info msg="CreateContainer within sandbox \"77836ba27b4d3474b85fdb89b0f795d4ad526904cd600ebcae41f6c65d7f520e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"85ce5e7f84a32018a6df47c3fe4c6294c2999c66c1fc13798cd1fd1eb545fc9d\"" May 9 00:16:18.319649 containerd[1903]: time="2025-05-09T00:16:18.319619012Z" level=info msg="StartContainer for \"85ce5e7f84a32018a6df47c3fe4c6294c2999c66c1fc13798cd1fd1eb545fc9d\"" May 9 00:16:18.394218 systemd[1]: Started cri-containerd-85ce5e7f84a32018a6df47c3fe4c6294c2999c66c1fc13798cd1fd1eb545fc9d.scope - libcontainer container 85ce5e7f84a32018a6df47c3fe4c6294c2999c66c1fc13798cd1fd1eb545fc9d. May 9 00:16:18.453223 systemd[1]: cri-containerd-85ce5e7f84a32018a6df47c3fe4c6294c2999c66c1fc13798cd1fd1eb545fc9d.scope: Deactivated successfully. 
May 9 00:16:18.454110 containerd[1903]: time="2025-05-09T00:16:18.454069736Z" level=info msg="StartContainer for \"85ce5e7f84a32018a6df47c3fe4c6294c2999c66c1fc13798cd1fd1eb545fc9d\" returns successfully" May 9 00:16:18.570136 kubelet[3167]: I0509 00:16:18.569776 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-k9xv8" podStartSLOduration=1.542717616 podStartE2EDuration="13.569744076s" podCreationTimestamp="2025-05-09 00:16:05 +0000 UTC" firstStartedPulling="2025-05-09 00:16:05.929780349 +0000 UTC m=+7.004544741" lastFinishedPulling="2025-05-09 00:16:17.956806822 +0000 UTC m=+19.031571201" observedRunningTime="2025-05-09 00:16:18.410659659 +0000 UTC m=+19.485424060" watchObservedRunningTime="2025-05-09 00:16:18.569744076 +0000 UTC m=+19.644508474" May 9 00:16:18.622212 containerd[1903]: time="2025-05-09T00:16:18.622143800Z" level=info msg="shim disconnected" id=85ce5e7f84a32018a6df47c3fe4c6294c2999c66c1fc13798cd1fd1eb545fc9d namespace=k8s.io May 9 00:16:18.622212 containerd[1903]: time="2025-05-09T00:16:18.622214139Z" level=warning msg="cleaning up after shim disconnected" id=85ce5e7f84a32018a6df47c3fe4c6294c2999c66c1fc13798cd1fd1eb545fc9d namespace=k8s.io May 9 00:16:18.622586 containerd[1903]: time="2025-05-09T00:16:18.622224782Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:16:19.243457 containerd[1903]: time="2025-05-09T00:16:19.243402892Z" level=info msg="CreateContainer within sandbox \"77836ba27b4d3474b85fdb89b0f795d4ad526904cd600ebcae41f6c65d7f520e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 9 00:16:19.273287 containerd[1903]: time="2025-05-09T00:16:19.273238403Z" level=info msg="CreateContainer within sandbox \"77836ba27b4d3474b85fdb89b0f795d4ad526904cd600ebcae41f6c65d7f520e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8f5441740e8c4f2d56851bc4eddfef59d78b2a43be7cc8631ea8a9f182f28d6f\"" May 9 00:16:19.274034 
containerd[1903]: time="2025-05-09T00:16:19.273996631Z" level=info msg="StartContainer for \"8f5441740e8c4f2d56851bc4eddfef59d78b2a43be7cc8631ea8a9f182f28d6f\"" May 9 00:16:19.309008 systemd[1]: Started cri-containerd-8f5441740e8c4f2d56851bc4eddfef59d78b2a43be7cc8631ea8a9f182f28d6f.scope - libcontainer container 8f5441740e8c4f2d56851bc4eddfef59d78b2a43be7cc8631ea8a9f182f28d6f. May 9 00:16:19.348400 containerd[1903]: time="2025-05-09T00:16:19.347873550Z" level=info msg="StartContainer for \"8f5441740e8c4f2d56851bc4eddfef59d78b2a43be7cc8631ea8a9f182f28d6f\" returns successfully" May 9 00:16:19.520464 systemd[1]: run-containerd-runc-k8s.io-8f5441740e8c4f2d56851bc4eddfef59d78b2a43be7cc8631ea8a9f182f28d6f-runc.z1iKKw.mount: Deactivated successfully. May 9 00:16:19.656632 kubelet[3167]: I0509 00:16:19.656603 3167 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 9 00:16:19.707927 systemd[1]: Created slice kubepods-burstable-pod907e0a3d_7890_47c9_a4f3_47a6fe077d97.slice - libcontainer container kubepods-burstable-pod907e0a3d_7890_47c9_a4f3_47a6fe077d97.slice. May 9 00:16:19.716911 systemd[1]: Created slice kubepods-burstable-pod7bfaa4b4_6609_4f43_a7c5_89467ccb1638.slice - libcontainer container kubepods-burstable-pod7bfaa4b4_6609_4f43_a7c5_89467ccb1638.slice. 
May 9 00:16:19.783259 kubelet[3167]: I0509 00:16:19.783141 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/907e0a3d-7890-47c9-a4f3-47a6fe077d97-config-volume\") pod \"coredns-668d6bf9bc-nftzh\" (UID: \"907e0a3d-7890-47c9-a4f3-47a6fe077d97\") " pod="kube-system/coredns-668d6bf9bc-nftzh" May 9 00:16:19.783553 kubelet[3167]: I0509 00:16:19.783461 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7bfaa4b4-6609-4f43-a7c5-89467ccb1638-config-volume\") pod \"coredns-668d6bf9bc-6cxnc\" (UID: \"7bfaa4b4-6609-4f43-a7c5-89467ccb1638\") " pod="kube-system/coredns-668d6bf9bc-6cxnc" May 9 00:16:19.783684 kubelet[3167]: I0509 00:16:19.783647 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p7cg\" (UniqueName: \"kubernetes.io/projected/907e0a3d-7890-47c9-a4f3-47a6fe077d97-kube-api-access-8p7cg\") pod \"coredns-668d6bf9bc-nftzh\" (UID: \"907e0a3d-7890-47c9-a4f3-47a6fe077d97\") " pod="kube-system/coredns-668d6bf9bc-nftzh" May 9 00:16:19.783684 kubelet[3167]: I0509 00:16:19.783685 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7rq4\" (UniqueName: \"kubernetes.io/projected/7bfaa4b4-6609-4f43-a7c5-89467ccb1638-kube-api-access-c7rq4\") pod \"coredns-668d6bf9bc-6cxnc\" (UID: \"7bfaa4b4-6609-4f43-a7c5-89467ccb1638\") " pod="kube-system/coredns-668d6bf9bc-6cxnc" May 9 00:16:20.016343 containerd[1903]: time="2025-05-09T00:16:20.016289928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nftzh,Uid:907e0a3d-7890-47c9-a4f3-47a6fe077d97,Namespace:kube-system,Attempt:0,}" May 9 00:16:20.024386 containerd[1903]: time="2025-05-09T00:16:20.024344056Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-6cxnc,Uid:7bfaa4b4-6609-4f43-a7c5-89467ccb1638,Namespace:kube-system,Attempt:0,}" May 9 00:16:22.815382 systemd-networkd[1817]: cilium_host: Link UP May 9 00:16:22.816226 (udev-worker)[4255]: Network interface NamePolicy= disabled on kernel command line. May 9 00:16:22.819058 systemd-networkd[1817]: cilium_net: Link UP May 9 00:16:22.819330 systemd-networkd[1817]: cilium_net: Gained carrier May 9 00:16:22.819535 systemd-networkd[1817]: cilium_host: Gained carrier May 9 00:16:22.820236 (udev-worker)[4220]: Network interface NamePolicy= disabled on kernel command line. May 9 00:16:22.858086 systemd-networkd[1817]: cilium_host: Gained IPv6LL May 9 00:16:23.012714 systemd-networkd[1817]: cilium_vxlan: Link UP May 9 00:16:23.012724 systemd-networkd[1817]: cilium_vxlan: Gained carrier May 9 00:16:23.506799 systemd-networkd[1817]: cilium_net: Gained IPv6LL May 9 00:16:23.905940 kernel: NET: Registered PF_ALG protocol family May 9 00:16:24.274092 systemd-networkd[1817]: cilium_vxlan: Gained IPv6LL May 9 00:16:24.669523 systemd-networkd[1817]: lxc_health: Link UP May 9 00:16:24.678120 systemd-networkd[1817]: lxc_health: Gained carrier May 9 00:16:25.155724 systemd-networkd[1817]: lxcaf0c5db4b63f: Link UP May 9 00:16:25.165955 kernel: eth0: renamed from tmp52c19 May 9 00:16:25.174333 systemd-networkd[1817]: lxcaf0c5db4b63f: Gained carrier May 9 00:16:25.176020 systemd-networkd[1817]: lxc5793698d53d1: Link UP May 9 00:16:25.183476 (udev-worker)[4261]: Network interface NamePolicy= disabled on kernel command line. 
May 9 00:16:25.184812 kernel: eth0: renamed from tmpe35fc May 9 00:16:25.192368 systemd-networkd[1817]: lxc5793698d53d1: Gained carrier May 9 00:16:25.690035 kubelet[3167]: I0509 00:16:25.689895 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wmbg9" podStartSLOduration=11.060078543 podStartE2EDuration="20.689856414s" podCreationTimestamp="2025-05-09 00:16:05 +0000 UTC" firstStartedPulling="2025-05-09 00:16:05.820451053 +0000 UTC m=+6.895215431" lastFinishedPulling="2025-05-09 00:16:15.450228925 +0000 UTC m=+16.524993302" observedRunningTime="2025-05-09 00:16:20.270061883 +0000 UTC m=+21.344826281" watchObservedRunningTime="2025-05-09 00:16:25.689856414 +0000 UTC m=+26.764620816" May 9 00:16:26.322098 systemd-networkd[1817]: lxcaf0c5db4b63f: Gained IPv6LL May 9 00:16:26.322497 systemd-networkd[1817]: lxc5793698d53d1: Gained IPv6LL May 9 00:16:26.578394 systemd-networkd[1817]: lxc_health: Gained IPv6LL May 9 00:16:29.047496 ntpd[1876]: Listen normally on 8 cilium_host 192.168.0.85:123 May 9 00:16:29.048504 ntpd[1876]: 9 May 00:16:29 ntpd[1876]: Listen normally on 8 cilium_host 192.168.0.85:123 May 9 00:16:29.048504 ntpd[1876]: 9 May 00:16:29 ntpd[1876]: Listen normally on 9 cilium_net [fe80::e02f:2cff:fe37:39%4]:123 May 9 00:16:29.048504 ntpd[1876]: 9 May 00:16:29 ntpd[1876]: Listen normally on 10 cilium_host [fe80::30ea:60ff:fe0a:5323%5]:123 May 9 00:16:29.048504 ntpd[1876]: 9 May 00:16:29 ntpd[1876]: Listen normally on 11 cilium_vxlan [fe80::e415:30ff:fed0:4767%6]:123 May 9 00:16:29.048504 ntpd[1876]: 9 May 00:16:29 ntpd[1876]: Listen normally on 12 lxc_health [fe80::40c9:eaff:fe06:be69%8]:123 May 9 00:16:29.047595 ntpd[1876]: Listen normally on 9 cilium_net [fe80::e02f:2cff:fe37:39%4]:123 May 9 00:16:29.047657 ntpd[1876]: Listen normally on 10 cilium_host [fe80::30ea:60ff:fe0a:5323%5]:123 May 9 00:16:29.047701 ntpd[1876]: Listen normally on 11 cilium_vxlan [fe80::e415:30ff:fed0:4767%6]:123 May 9 00:16:29.047750 
ntpd[1876]: Listen normally on 12 lxc_health [fe80::40c9:eaff:fe06:be69%8]:123 May 9 00:16:29.049030 ntpd[1876]: Listen normally on 13 lxcaf0c5db4b63f [fe80::9c0e:e4ff:fe3e:c69e%10]:123 May 9 00:16:29.050029 ntpd[1876]: 9 May 00:16:29 ntpd[1876]: Listen normally on 13 lxcaf0c5db4b63f [fe80::9c0e:e4ff:fe3e:c69e%10]:123 May 9 00:16:29.050029 ntpd[1876]: 9 May 00:16:29 ntpd[1876]: Listen normally on 14 lxc5793698d53d1 [fe80::e473:62ff:fec8:b5e3%12]:123 May 9 00:16:29.049124 ntpd[1876]: Listen normally on 14 lxc5793698d53d1 [fe80::e473:62ff:fec8:b5e3%12]:123 May 9 00:16:29.690304 containerd[1903]: time="2025-05-09T00:16:29.690141327Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:16:29.690304 containerd[1903]: time="2025-05-09T00:16:29.690238874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:16:29.690304 containerd[1903]: time="2025-05-09T00:16:29.690262589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:16:29.691210 containerd[1903]: time="2025-05-09T00:16:29.690385065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:16:29.740438 systemd[1]: run-containerd-runc-k8s.io-e35fc4578181086817edd5db5751433b4820c15971241108237b5bdc00c2f76e-runc.X9ydiE.mount: Deactivated successfully. May 9 00:16:29.760022 systemd[1]: Started cri-containerd-e35fc4578181086817edd5db5751433b4820c15971241108237b5bdc00c2f76e.scope - libcontainer container e35fc4578181086817edd5db5751433b4820c15971241108237b5bdc00c2f76e. May 9 00:16:29.784694 containerd[1903]: time="2025-05-09T00:16:29.783690836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:16:29.784694 containerd[1903]: time="2025-05-09T00:16:29.783746167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:16:29.784694 containerd[1903]: time="2025-05-09T00:16:29.783761070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:16:29.785690 containerd[1903]: time="2025-05-09T00:16:29.785618210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:16:29.825040 systemd[1]: Started cri-containerd-52c19784a7d5ed61f5f3faf2c2c502dd0a1f8c5ff820c070e02af632a214ec73.scope - libcontainer container 52c19784a7d5ed61f5f3faf2c2c502dd0a1f8c5ff820c070e02af632a214ec73. May 9 00:16:29.897900 containerd[1903]: time="2025-05-09T00:16:29.897848072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6cxnc,Uid:7bfaa4b4-6609-4f43-a7c5-89467ccb1638,Namespace:kube-system,Attempt:0,} returns sandbox id \"e35fc4578181086817edd5db5751433b4820c15971241108237b5bdc00c2f76e\"" May 9 00:16:29.905024 containerd[1903]: time="2025-05-09T00:16:29.904878345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nftzh,Uid:907e0a3d-7890-47c9-a4f3-47a6fe077d97,Namespace:kube-system,Attempt:0,} returns sandbox id \"52c19784a7d5ed61f5f3faf2c2c502dd0a1f8c5ff820c070e02af632a214ec73\"" May 9 00:16:29.905716 containerd[1903]: time="2025-05-09T00:16:29.905614500Z" level=info msg="CreateContainer within sandbox \"e35fc4578181086817edd5db5751433b4820c15971241108237b5bdc00c2f76e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 9 00:16:29.908381 containerd[1903]: time="2025-05-09T00:16:29.908267586Z" level=info msg="CreateContainer within sandbox \"52c19784a7d5ed61f5f3faf2c2c502dd0a1f8c5ff820c070e02af632a214ec73\" for 
container &ContainerMetadata{Name:coredns,Attempt:0,}" May 9 00:16:30.074950 containerd[1903]: time="2025-05-09T00:16:30.073828491Z" level=info msg="CreateContainer within sandbox \"52c19784a7d5ed61f5f3faf2c2c502dd0a1f8c5ff820c070e02af632a214ec73\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a85377baa7148bdde63d6f43e701432e7ffeb466c6f42dfbdf639d94eb30bef7\"" May 9 00:16:30.075633 containerd[1903]: time="2025-05-09T00:16:30.075001231Z" level=info msg="StartContainer for \"a85377baa7148bdde63d6f43e701432e7ffeb466c6f42dfbdf639d94eb30bef7\"" May 9 00:16:30.083184 containerd[1903]: time="2025-05-09T00:16:30.083112949Z" level=info msg="CreateContainer within sandbox \"e35fc4578181086817edd5db5751433b4820c15971241108237b5bdc00c2f76e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"86c73f5c8156582b771a897b3c4cec4fdf26a08b7103c35b45bab6680b417600\"" May 9 00:16:30.084242 containerd[1903]: time="2025-05-09T00:16:30.084189627Z" level=info msg="StartContainer for \"86c73f5c8156582b771a897b3c4cec4fdf26a08b7103c35b45bab6680b417600\"" May 9 00:16:30.122656 systemd[1]: Started cri-containerd-a85377baa7148bdde63d6f43e701432e7ffeb466c6f42dfbdf639d94eb30bef7.scope - libcontainer container a85377baa7148bdde63d6f43e701432e7ffeb466c6f42dfbdf639d94eb30bef7. May 9 00:16:30.137032 systemd[1]: Started cri-containerd-86c73f5c8156582b771a897b3c4cec4fdf26a08b7103c35b45bab6680b417600.scope - libcontainer container 86c73f5c8156582b771a897b3c4cec4fdf26a08b7103c35b45bab6680b417600. 
May 9 00:16:30.248753 containerd[1903]: time="2025-05-09T00:16:30.248701771Z" level=info msg="StartContainer for \"86c73f5c8156582b771a897b3c4cec4fdf26a08b7103c35b45bab6680b417600\" returns successfully" May 9 00:16:30.248993 containerd[1903]: time="2025-05-09T00:16:30.248702079Z" level=info msg="StartContainer for \"a85377baa7148bdde63d6f43e701432e7ffeb466c6f42dfbdf639d94eb30bef7\" returns successfully" May 9 00:16:30.306383 kubelet[3167]: I0509 00:16:30.306214 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6cxnc" podStartSLOduration=25.306196962 podStartE2EDuration="25.306196962s" podCreationTimestamp="2025-05-09 00:16:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:16:30.305124283 +0000 UTC m=+31.379888680" watchObservedRunningTime="2025-05-09 00:16:30.306196962 +0000 UTC m=+31.380961347" May 9 00:16:30.307423 kubelet[3167]: I0509 00:16:30.307251 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-nftzh" podStartSLOduration=25.307236348 podStartE2EDuration="25.307236348s" podCreationTimestamp="2025-05-09 00:16:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:16:30.290262193 +0000 UTC m=+31.365026591" watchObservedRunningTime="2025-05-09 00:16:30.307236348 +0000 UTC m=+31.382000748" May 9 00:16:33.506177 systemd[1]: Started sshd@9-172.31.22.98:22-139.178.68.195:37010.service - OpenSSH per-connection server daemon (139.178.68.195:37010). 
May 9 00:16:33.705244 sshd[4785]: Accepted publickey for core from 139.178.68.195 port 37010 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM
May 9 00:16:33.708809 sshd-session[4785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:16:33.719808 systemd-logind[1885]: New session 10 of user core.
May 9 00:16:33.727133 systemd[1]: Started session-10.scope - Session 10 of User core.
May 9 00:16:34.717327 sshd[4791]: Connection closed by 139.178.68.195 port 37010
May 9 00:16:34.718020 sshd-session[4785]: pam_unix(sshd:session): session closed for user core
May 9 00:16:34.721547 systemd[1]: sshd@9-172.31.22.98:22-139.178.68.195:37010.service: Deactivated successfully.
May 9 00:16:34.723445 systemd[1]: session-10.scope: Deactivated successfully.
May 9 00:16:34.725395 systemd-logind[1885]: Session 10 logged out. Waiting for processes to exit.
May 9 00:16:34.727678 systemd-logind[1885]: Removed session 10.
May 9 00:16:39.753136 systemd[1]: Started sshd@10-172.31.22.98:22-139.178.68.195:59422.service - OpenSSH per-connection server daemon (139.178.68.195:59422).
May 9 00:16:39.928147 sshd[4809]: Accepted publickey for core from 139.178.68.195 port 59422 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM
May 9 00:16:39.929685 sshd-session[4809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:16:39.935770 systemd-logind[1885]: New session 11 of user core.
May 9 00:16:39.942138 systemd[1]: Started session-11.scope - Session 11 of User core.
May 9 00:16:40.169024 sshd[4811]: Connection closed by 139.178.68.195 port 59422
May 9 00:16:40.170494 sshd-session[4809]: pam_unix(sshd:session): session closed for user core
May 9 00:16:40.174418 systemd-logind[1885]: Session 11 logged out. Waiting for processes to exit.
May 9 00:16:40.175629 systemd[1]: sshd@10-172.31.22.98:22-139.178.68.195:59422.service: Deactivated successfully.
May 9 00:16:40.177555 systemd[1]: session-11.scope: Deactivated successfully.
May 9 00:16:40.178825 systemd-logind[1885]: Removed session 11.
May 9 00:16:45.207195 systemd[1]: Started sshd@11-172.31.22.98:22-139.178.68.195:36728.service - OpenSSH per-connection server daemon (139.178.68.195:36728).
May 9 00:16:45.384702 sshd[4829]: Accepted publickey for core from 139.178.68.195 port 36728 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM
May 9 00:16:45.385319 sshd-session[4829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:16:45.391062 systemd-logind[1885]: New session 12 of user core.
May 9 00:16:45.397009 systemd[1]: Started session-12.scope - Session 12 of User core.
May 9 00:16:45.591073 sshd[4831]: Connection closed by 139.178.68.195 port 36728
May 9 00:16:45.592757 sshd-session[4829]: pam_unix(sshd:session): session closed for user core
May 9 00:16:45.597099 systemd-logind[1885]: Session 12 logged out. Waiting for processes to exit.
May 9 00:16:45.597812 systemd[1]: sshd@11-172.31.22.98:22-139.178.68.195:36728.service: Deactivated successfully.
May 9 00:16:45.599914 systemd[1]: session-12.scope: Deactivated successfully.
May 9 00:16:45.601230 systemd-logind[1885]: Removed session 12.
May 9 00:16:50.624917 systemd[1]: Started sshd@12-172.31.22.98:22-139.178.68.195:36744.service - OpenSSH per-connection server daemon (139.178.68.195:36744).
May 9 00:16:50.812700 sshd[4843]: Accepted publickey for core from 139.178.68.195 port 36744 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM
May 9 00:16:50.814107 sshd-session[4843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:16:50.820529 systemd-logind[1885]: New session 13 of user core.
May 9 00:16:50.831039 systemd[1]: Started session-13.scope - Session 13 of User core.
May 9 00:16:51.037436 sshd[4845]: Connection closed by 139.178.68.195 port 36744
May 9 00:16:51.038866 sshd-session[4843]: pam_unix(sshd:session): session closed for user core
May 9 00:16:51.042680 systemd-logind[1885]: Session 13 logged out. Waiting for processes to exit.
May 9 00:16:51.043611 systemd[1]: sshd@12-172.31.22.98:22-139.178.68.195:36744.service: Deactivated successfully.
May 9 00:16:51.046324 systemd[1]: session-13.scope: Deactivated successfully.
May 9 00:16:51.047769 systemd-logind[1885]: Removed session 13.
May 9 00:16:51.069864 systemd[1]: Started sshd@13-172.31.22.98:22-139.178.68.195:36756.service - OpenSSH per-connection server daemon (139.178.68.195:36756).
May 9 00:16:51.242035 sshd[4857]: Accepted publickey for core from 139.178.68.195 port 36756 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM
May 9 00:16:51.242766 sshd-session[4857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:16:51.249644 systemd-logind[1885]: New session 14 of user core.
May 9 00:16:51.257017 systemd[1]: Started session-14.scope - Session 14 of User core.
May 9 00:16:51.497689 sshd[4859]: Connection closed by 139.178.68.195 port 36756
May 9 00:16:51.497866 sshd-session[4857]: pam_unix(sshd:session): session closed for user core
May 9 00:16:51.502779 systemd-logind[1885]: Session 14 logged out. Waiting for processes to exit.
May 9 00:16:51.505416 systemd[1]: sshd@13-172.31.22.98:22-139.178.68.195:36756.service: Deactivated successfully.
May 9 00:16:51.509222 systemd[1]: session-14.scope: Deactivated successfully.
May 9 00:16:51.510826 systemd-logind[1885]: Removed session 14.
May 9 00:16:51.536180 systemd[1]: Started sshd@14-172.31.22.98:22-139.178.68.195:36768.service - OpenSSH per-connection server daemon (139.178.68.195:36768).
May 9 00:16:51.701440 sshd[4868]: Accepted publickey for core from 139.178.68.195 port 36768 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM
May 9 00:16:51.703280 sshd-session[4868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:16:51.707770 systemd-logind[1885]: New session 15 of user core.
May 9 00:16:51.710969 systemd[1]: Started session-15.scope - Session 15 of User core.
May 9 00:16:51.929312 sshd[4870]: Connection closed by 139.178.68.195 port 36768
May 9 00:16:51.929806 sshd-session[4868]: pam_unix(sshd:session): session closed for user core
May 9 00:16:51.945162 systemd-logind[1885]: Session 15 logged out. Waiting for processes to exit.
May 9 00:16:51.945765 systemd[1]: sshd@14-172.31.22.98:22-139.178.68.195:36768.service: Deactivated successfully.
May 9 00:16:51.947575 systemd[1]: session-15.scope: Deactivated successfully.
May 9 00:16:51.952587 systemd-logind[1885]: Removed session 15.
May 9 00:16:56.968121 systemd[1]: Started sshd@15-172.31.22.98:22-139.178.68.195:43020.service - OpenSSH per-connection server daemon (139.178.68.195:43020).
May 9 00:16:57.146371 sshd[4882]: Accepted publickey for core from 139.178.68.195 port 43020 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM
May 9 00:16:57.148047 sshd-session[4882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:16:57.154047 systemd-logind[1885]: New session 16 of user core.
May 9 00:16:57.165071 systemd[1]: Started session-16.scope - Session 16 of User core.
May 9 00:16:57.369275 sshd[4884]: Connection closed by 139.178.68.195 port 43020
May 9 00:16:57.370860 sshd-session[4882]: pam_unix(sshd:session): session closed for user core
May 9 00:16:57.374223 systemd[1]: sshd@15-172.31.22.98:22-139.178.68.195:43020.service: Deactivated successfully.
May 9 00:16:57.376308 systemd[1]: session-16.scope: Deactivated successfully.
May 9 00:16:57.377151 systemd-logind[1885]: Session 16 logged out. Waiting for processes to exit.
May 9 00:16:57.378861 systemd-logind[1885]: Removed session 16.
May 9 00:17:02.428583 systemd[1]: Started sshd@16-172.31.22.98:22-139.178.68.195:43032.service - OpenSSH per-connection server daemon (139.178.68.195:43032).
May 9 00:17:02.736397 sshd[4897]: Accepted publickey for core from 139.178.68.195 port 43032 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM
May 9 00:17:02.737202 sshd-session[4897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:17:02.769880 systemd-logind[1885]: New session 17 of user core.
May 9 00:17:02.783934 systemd[1]: Started session-17.scope - Session 17 of User core.
May 9 00:17:03.095114 sshd[4899]: Connection closed by 139.178.68.195 port 43032
May 9 00:17:03.096342 sshd-session[4897]: pam_unix(sshd:session): session closed for user core
May 9 00:17:03.100904 systemd[1]: sshd@16-172.31.22.98:22-139.178.68.195:43032.service: Deactivated successfully.
May 9 00:17:03.104801 systemd[1]: session-17.scope: Deactivated successfully.
May 9 00:17:03.106498 systemd-logind[1885]: Session 17 logged out. Waiting for processes to exit.
May 9 00:17:03.108605 systemd-logind[1885]: Removed session 17.
May 9 00:17:03.133229 systemd[1]: Started sshd@17-172.31.22.98:22-139.178.68.195:43044.service - OpenSSH per-connection server daemon (139.178.68.195:43044).
May 9 00:17:03.372637 sshd[4910]: Accepted publickey for core from 139.178.68.195 port 43044 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM
May 9 00:17:03.376631 sshd-session[4910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:17:03.389276 systemd-logind[1885]: New session 18 of user core.
May 9 00:17:03.397396 systemd[1]: Started session-18.scope - Session 18 of User core.
May 9 00:17:07.587856 sshd[4912]: Connection closed by 139.178.68.195 port 43044
May 9 00:17:07.611957 sshd-session[4910]: pam_unix(sshd:session): session closed for user core
May 9 00:17:07.617626 systemd[1]: sshd@17-172.31.22.98:22-139.178.68.195:43044.service: Deactivated successfully.
May 9 00:17:07.619396 systemd[1]: session-18.scope: Deactivated successfully.
May 9 00:17:07.620649 systemd-logind[1885]: Session 18 logged out. Waiting for processes to exit.
May 9 00:17:07.627302 systemd[1]: Started sshd@18-172.31.22.98:22-139.178.68.195:53656.service - OpenSSH per-connection server daemon (139.178.68.195:53656).
May 9 00:17:07.629160 systemd-logind[1885]: Removed session 18.
May 9 00:17:07.821123 sshd[4922]: Accepted publickey for core from 139.178.68.195 port 53656 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM
May 9 00:17:07.823874 sshd-session[4922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:17:07.829691 systemd-logind[1885]: New session 19 of user core.
May 9 00:17:07.834032 systemd[1]: Started session-19.scope - Session 19 of User core.
May 9 00:17:09.214676 sshd[4924]: Connection closed by 139.178.68.195 port 53656
May 9 00:17:09.215629 sshd-session[4922]: pam_unix(sshd:session): session closed for user core
May 9 00:17:09.233583 systemd[1]: sshd@18-172.31.22.98:22-139.178.68.195:53656.service: Deactivated successfully.
May 9 00:17:09.242263 systemd[1]: session-19.scope: Deactivated successfully.
May 9 00:17:09.245914 systemd-logind[1885]: Session 19 logged out. Waiting for processes to exit.
May 9 00:17:09.268921 systemd[1]: Started sshd@19-172.31.22.98:22-139.178.68.195:53666.service - OpenSSH per-connection server daemon (139.178.68.195:53666).
May 9 00:17:09.272509 systemd-logind[1885]: Removed session 19.
May 9 00:17:09.441482 sshd[4942]: Accepted publickey for core from 139.178.68.195 port 53666 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM
May 9 00:17:09.443024 sshd-session[4942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:17:09.448305 systemd-logind[1885]: New session 20 of user core.
May 9 00:17:09.455039 systemd[1]: Started session-20.scope - Session 20 of User core.
May 9 00:17:09.811236 sshd[4944]: Connection closed by 139.178.68.195 port 53666
May 9 00:17:09.811867 sshd-session[4942]: pam_unix(sshd:session): session closed for user core
May 9 00:17:09.815799 systemd-logind[1885]: Session 20 logged out. Waiting for processes to exit.
May 9 00:17:09.816229 systemd[1]: sshd@19-172.31.22.98:22-139.178.68.195:53666.service: Deactivated successfully.
May 9 00:17:09.818294 systemd[1]: session-20.scope: Deactivated successfully.
May 9 00:17:09.819706 systemd-logind[1885]: Removed session 20.
May 9 00:17:09.845153 systemd[1]: Started sshd@20-172.31.22.98:22-139.178.68.195:53668.service - OpenSSH per-connection server daemon (139.178.68.195:53668).
May 9 00:17:10.008253 sshd[4953]: Accepted publickey for core from 139.178.68.195 port 53668 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM
May 9 00:17:10.014628 sshd-session[4953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:17:10.031185 systemd-logind[1885]: New session 21 of user core.
May 9 00:17:10.042032 systemd[1]: Started session-21.scope - Session 21 of User core.
May 9 00:17:10.227365 sshd[4955]: Connection closed by 139.178.68.195 port 53668
May 9 00:17:10.228864 sshd-session[4953]: pam_unix(sshd:session): session closed for user core
May 9 00:17:10.231705 systemd[1]: sshd@20-172.31.22.98:22-139.178.68.195:53668.service: Deactivated successfully.
May 9 00:17:10.234134 systemd[1]: session-21.scope: Deactivated successfully.
May 9 00:17:10.238173 systemd-logind[1885]: Session 21 logged out. Waiting for processes to exit.
May 9 00:17:10.239965 systemd-logind[1885]: Removed session 21.
May 9 00:17:15.265089 systemd[1]: Started sshd@21-172.31.22.98:22-139.178.68.195:59058.service - OpenSSH per-connection server daemon (139.178.68.195:59058).
May 9 00:17:15.432076 sshd[4968]: Accepted publickey for core from 139.178.68.195 port 59058 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM
May 9 00:17:15.433633 sshd-session[4968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:17:15.438687 systemd-logind[1885]: New session 22 of user core.
May 9 00:17:15.444991 systemd[1]: Started session-22.scope - Session 22 of User core.
May 9 00:17:15.628549 sshd[4970]: Connection closed by 139.178.68.195 port 59058
May 9 00:17:15.629167 sshd-session[4968]: pam_unix(sshd:session): session closed for user core
May 9 00:17:15.633114 systemd[1]: sshd@21-172.31.22.98:22-139.178.68.195:59058.service: Deactivated successfully.
May 9 00:17:15.636056 systemd[1]: session-22.scope: Deactivated successfully.
May 9 00:17:15.636916 systemd-logind[1885]: Session 22 logged out. Waiting for processes to exit.
May 9 00:17:15.638271 systemd-logind[1885]: Removed session 22.
May 9 00:17:20.661281 systemd[1]: Started sshd@22-172.31.22.98:22-139.178.68.195:59066.service - OpenSSH per-connection server daemon (139.178.68.195:59066).
May 9 00:17:20.828833 sshd[4981]: Accepted publickey for core from 139.178.68.195 port 59066 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM
May 9 00:17:20.829755 sshd-session[4981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:17:20.838935 systemd-logind[1885]: New session 23 of user core.
May 9 00:17:20.842993 systemd[1]: Started session-23.scope - Session 23 of User core.
May 9 00:17:21.050609 sshd[4983]: Connection closed by 139.178.68.195 port 59066
May 9 00:17:21.051523 sshd-session[4981]: pam_unix(sshd:session): session closed for user core
May 9 00:17:21.054100 systemd[1]: sshd@22-172.31.22.98:22-139.178.68.195:59066.service: Deactivated successfully.
May 9 00:17:21.056682 systemd[1]: session-23.scope: Deactivated successfully.
May 9 00:17:21.057542 systemd-logind[1885]: Session 23 logged out. Waiting for processes to exit.
May 9 00:17:21.059042 systemd-logind[1885]: Removed session 23.
May 9 00:17:26.082948 systemd[1]: Started sshd@23-172.31.22.98:22-139.178.68.195:53058.service - OpenSSH per-connection server daemon (139.178.68.195:53058).
May 9 00:17:26.269901 sshd[4995]: Accepted publickey for core from 139.178.68.195 port 53058 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM
May 9 00:17:26.270967 sshd-session[4995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:17:26.276575 systemd-logind[1885]: New session 24 of user core.
May 9 00:17:26.283007 systemd[1]: Started session-24.scope - Session 24 of User core.
May 9 00:17:26.491067 sshd[4997]: Connection closed by 139.178.68.195 port 53058
May 9 00:17:26.492629 sshd-session[4995]: pam_unix(sshd:session): session closed for user core
May 9 00:17:26.497996 systemd[1]: sshd@23-172.31.22.98:22-139.178.68.195:53058.service: Deactivated successfully.
May 9 00:17:26.500687 systemd[1]: session-24.scope: Deactivated successfully.
May 9 00:17:26.502948 systemd-logind[1885]: Session 24 logged out. Waiting for processes to exit.
May 9 00:17:26.504510 systemd-logind[1885]: Removed session 24.
May 9 00:17:26.522732 systemd[1]: Started sshd@24-172.31.22.98:22-139.178.68.195:53074.service - OpenSSH per-connection server daemon (139.178.68.195:53074).
May 9 00:17:26.694719 sshd[5007]: Accepted publickey for core from 139.178.68.195 port 53074 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM
May 9 00:17:26.696074 sshd-session[5007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:17:26.700539 systemd-logind[1885]: New session 25 of user core.
May 9 00:17:26.708012 systemd[1]: Started session-25.scope - Session 25 of User core.
May 9 00:17:28.856272 containerd[1903]: time="2025-05-09T00:17:28.856225797Z" level=info msg="StopContainer for \"7964b74481fe5a6723f91cb99200d4bd96bd0c484c3b471ec9e7788536b11ebd\" with timeout 30 (s)"
May 9 00:17:28.861549 containerd[1903]: time="2025-05-09T00:17:28.861333993Z" level=info msg="Stop container \"7964b74481fe5a6723f91cb99200d4bd96bd0c484c3b471ec9e7788536b11ebd\" with signal terminated"
May 9 00:17:28.890972 systemd[1]: cri-containerd-7964b74481fe5a6723f91cb99200d4bd96bd0c484c3b471ec9e7788536b11ebd.scope: Deactivated successfully.
May 9 00:17:28.917302 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7964b74481fe5a6723f91cb99200d4bd96bd0c484c3b471ec9e7788536b11ebd-rootfs.mount: Deactivated successfully.
May 9 00:17:28.921011 containerd[1903]: time="2025-05-09T00:17:28.920953885Z" level=info msg="shim disconnected" id=7964b74481fe5a6723f91cb99200d4bd96bd0c484c3b471ec9e7788536b11ebd namespace=k8s.io
May 9 00:17:28.921255 containerd[1903]: time="2025-05-09T00:17:28.921215369Z" level=warning msg="cleaning up after shim disconnected" id=7964b74481fe5a6723f91cb99200d4bd96bd0c484c3b471ec9e7788536b11ebd namespace=k8s.io
May 9 00:17:28.921255 containerd[1903]: time="2025-05-09T00:17:28.921236148Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:17:28.927148 containerd[1903]: time="2025-05-09T00:17:28.927082129Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 9 00:17:28.938860 containerd[1903]: time="2025-05-09T00:17:28.938714951Z" level=info msg="StopContainer for \"8f5441740e8c4f2d56851bc4eddfef59d78b2a43be7cc8631ea8a9f182f28d6f\" with timeout 2 (s)"
May 9 00:17:28.939245 containerd[1903]: time="2025-05-09T00:17:28.939216123Z" level=info msg="Stop container \"8f5441740e8c4f2d56851bc4eddfef59d78b2a43be7cc8631ea8a9f182f28d6f\" with signal terminated"
May 9 00:17:28.950657 systemd-networkd[1817]: lxc_health: Link DOWN
May 9 00:17:28.950667 systemd-networkd[1817]: lxc_health: Lost carrier
May 9 00:17:28.957759 containerd[1903]: time="2025-05-09T00:17:28.957236045Z" level=info msg="StopContainer for \"7964b74481fe5a6723f91cb99200d4bd96bd0c484c3b471ec9e7788536b11ebd\" returns successfully"
May 9 00:17:28.976867 systemd[1]: cri-containerd-8f5441740e8c4f2d56851bc4eddfef59d78b2a43be7cc8631ea8a9f182f28d6f.scope: Deactivated successfully.
May 9 00:17:28.977150 systemd[1]: cri-containerd-8f5441740e8c4f2d56851bc4eddfef59d78b2a43be7cc8631ea8a9f182f28d6f.scope: Consumed 8.002s CPU time.
May 9 00:17:28.991579 containerd[1903]: time="2025-05-09T00:17:28.991483746Z" level=info msg="StopPodSandbox for \"b3a11354a8f108affb9abce8fd73e985bd58e7e71e3bb0377f9c20d0bfe508e4\""
May 9 00:17:28.993995 containerd[1903]: time="2025-05-09T00:17:28.993780638Z" level=info msg="Container to stop \"7964b74481fe5a6723f91cb99200d4bd96bd0c484c3b471ec9e7788536b11ebd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 9 00:17:28.997084 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b3a11354a8f108affb9abce8fd73e985bd58e7e71e3bb0377f9c20d0bfe508e4-shm.mount: Deactivated successfully.
May 9 00:17:29.013395 systemd[1]: cri-containerd-b3a11354a8f108affb9abce8fd73e985bd58e7e71e3bb0377f9c20d0bfe508e4.scope: Deactivated successfully.
May 9 00:17:29.023502 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f5441740e8c4f2d56851bc4eddfef59d78b2a43be7cc8631ea8a9f182f28d6f-rootfs.mount: Deactivated successfully.
May 9 00:17:29.060615 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3a11354a8f108affb9abce8fd73e985bd58e7e71e3bb0377f9c20d0bfe508e4-rootfs.mount: Deactivated successfully.
May 9 00:17:29.081516 containerd[1903]: time="2025-05-09T00:17:29.081435284Z" level=info msg="shim disconnected" id=8f5441740e8c4f2d56851bc4eddfef59d78b2a43be7cc8631ea8a9f182f28d6f namespace=k8s.io
May 9 00:17:29.081516 containerd[1903]: time="2025-05-09T00:17:29.081497520Z" level=warning msg="cleaning up after shim disconnected" id=8f5441740e8c4f2d56851bc4eddfef59d78b2a43be7cc8631ea8a9f182f28d6f namespace=k8s.io
May 9 00:17:29.081516 containerd[1903]: time="2025-05-09T00:17:29.081510632Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:17:29.088920 containerd[1903]: time="2025-05-09T00:17:29.088673869Z" level=info msg="shim disconnected" id=b3a11354a8f108affb9abce8fd73e985bd58e7e71e3bb0377f9c20d0bfe508e4 namespace=k8s.io
May 9 00:17:29.088920 containerd[1903]: time="2025-05-09T00:17:29.088745569Z" level=warning msg="cleaning up after shim disconnected" id=b3a11354a8f108affb9abce8fd73e985bd58e7e71e3bb0377f9c20d0bfe508e4 namespace=k8s.io
May 9 00:17:29.088920 containerd[1903]: time="2025-05-09T00:17:29.088756191Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:17:29.111925 containerd[1903]: time="2025-05-09T00:17:29.111775198Z" level=info msg="TearDown network for sandbox \"b3a11354a8f108affb9abce8fd73e985bd58e7e71e3bb0377f9c20d0bfe508e4\" successfully"
May 9 00:17:29.111925 containerd[1903]: time="2025-05-09T00:17:29.111847203Z" level=info msg="StopPodSandbox for \"b3a11354a8f108affb9abce8fd73e985bd58e7e71e3bb0377f9c20d0bfe508e4\" returns successfully"
May 9 00:17:29.113969 containerd[1903]: time="2025-05-09T00:17:29.113931258Z" level=info msg="StopContainer for \"8f5441740e8c4f2d56851bc4eddfef59d78b2a43be7cc8631ea8a9f182f28d6f\" returns successfully"
May 9 00:17:29.114906 containerd[1903]: time="2025-05-09T00:17:29.114593125Z" level=info msg="StopPodSandbox for \"77836ba27b4d3474b85fdb89b0f795d4ad526904cd600ebcae41f6c65d7f520e\""
May 9 00:17:29.114906 containerd[1903]: time="2025-05-09T00:17:29.114640609Z" level=info msg="Container to stop \"675365a0cf93cff5de89e8315107ca85b184de82f1edef62fc72617dd0323c52\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 9 00:17:29.114906 containerd[1903]: time="2025-05-09T00:17:29.114681845Z" level=info msg="Container to stop \"85ce5e7f84a32018a6df47c3fe4c6294c2999c66c1fc13798cd1fd1eb545fc9d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 9 00:17:29.114906 containerd[1903]: time="2025-05-09T00:17:29.114695197Z" level=info msg="Container to stop \"670dfc9d6903321ba8c04e3035e0f3d94c6097316bfc32cfc12631dbd730df18\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 9 00:17:29.114906 containerd[1903]: time="2025-05-09T00:17:29.114709050Z" level=info msg="Container to stop \"80659a58384ab8cd4ad482c738c6d8e23eb1cdd29fb99998b8e1525288218751\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 9 00:17:29.115518 containerd[1903]: time="2025-05-09T00:17:29.115172263Z" level=info msg="Container to stop \"8f5441740e8c4f2d56851bc4eddfef59d78b2a43be7cc8631ea8a9f182f28d6f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 9 00:17:29.124733 systemd[1]: cri-containerd-77836ba27b4d3474b85fdb89b0f795d4ad526904cd600ebcae41f6c65d7f520e.scope: Deactivated successfully.
May 9 00:17:29.158432 containerd[1903]: time="2025-05-09T00:17:29.157754917Z" level=info msg="shim disconnected" id=77836ba27b4d3474b85fdb89b0f795d4ad526904cd600ebcae41f6c65d7f520e namespace=k8s.io
May 9 00:17:29.158432 containerd[1903]: time="2025-05-09T00:17:29.158433179Z" level=warning msg="cleaning up after shim disconnected" id=77836ba27b4d3474b85fdb89b0f795d4ad526904cd600ebcae41f6c65d7f520e namespace=k8s.io
May 9 00:17:29.158648 containerd[1903]: time="2025-05-09T00:17:29.158447511Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:17:29.173196 containerd[1903]: time="2025-05-09T00:17:29.173144297Z" level=warning msg="cleanup warnings time=\"2025-05-09T00:17:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 9 00:17:29.174429 containerd[1903]: time="2025-05-09T00:17:29.174220934Z" level=info msg="TearDown network for sandbox \"77836ba27b4d3474b85fdb89b0f795d4ad526904cd600ebcae41f6c65d7f520e\" successfully"
May 9 00:17:29.174429 containerd[1903]: time="2025-05-09T00:17:29.174252146Z" level=info msg="StopPodSandbox for \"77836ba27b4d3474b85fdb89b0f795d4ad526904cd600ebcae41f6c65d7f520e\" returns successfully"
May 9 00:17:29.198549 kubelet[3167]: E0509 00:17:29.177294 3167 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 9 00:17:29.279473 kubelet[3167]: I0509 00:17:29.279407 3167 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-cilium-run\") pod \"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\" (UID: \"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\") "
May 9 00:17:29.279473 kubelet[3167]: I0509 00:17:29.279451 3167 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-host-proc-sys-net\") pod \"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\" (UID: \"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\") "
May 9 00:17:29.279473 kubelet[3167]: I0509 00:17:29.279473 3167 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-host-proc-sys-kernel\") pod \"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\" (UID: \"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\") "
May 9 00:17:29.279687 kubelet[3167]: I0509 00:17:29.279500 3167 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-hubble-tls\") pod \"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\" (UID: \"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\") "
May 9 00:17:29.279687 kubelet[3167]: I0509 00:17:29.279519 3167 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1f69f19e-9981-46c3-98c4-f7093b58bb75-cilium-config-path\") pod \"1f69f19e-9981-46c3-98c4-f7093b58bb75\" (UID: \"1f69f19e-9981-46c3-98c4-f7093b58bb75\") "
May 9 00:17:29.279687 kubelet[3167]: I0509 00:17:29.279535 3167 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-xtables-lock\") pod \"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\" (UID: \"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\") "
May 9 00:17:29.279687 kubelet[3167]: I0509 00:17:29.279549 3167 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-cni-path\") pod \"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\" (UID: \"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\") "
May 9 00:17:29.279687 kubelet[3167]: I0509 00:17:29.279564 3167 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-cilium-cgroup\") pod \"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\" (UID: \"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\") "
May 9 00:17:29.279687 kubelet[3167]: I0509 00:17:29.279578 3167 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-lib-modules\") pod \"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\" (UID: \"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\") "
May 9 00:17:29.279888 kubelet[3167]: I0509 00:17:29.279592 3167 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-bpf-maps\") pod \"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\" (UID: \"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\") "
May 9 00:17:29.279888 kubelet[3167]: I0509 00:17:29.279609 3167 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-277wr\" (UniqueName: \"kubernetes.io/projected/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-kube-api-access-277wr\") pod \"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\" (UID: \"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\") "
May 9 00:17:29.279888 kubelet[3167]: I0509 00:17:29.279622 3167 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-etc-cni-netd\") pod \"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\" (UID: \"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\") "
May 9 00:17:29.279888 kubelet[3167]: I0509 00:17:29.279635 3167 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-hostproc\") pod \"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\" (UID: \"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\") "
May 9 00:17:29.279888 kubelet[3167]: I0509 00:17:29.279651 3167 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brh4g\" (UniqueName: \"kubernetes.io/projected/1f69f19e-9981-46c3-98c4-f7093b58bb75-kube-api-access-brh4g\") pod \"1f69f19e-9981-46c3-98c4-f7093b58bb75\" (UID: \"1f69f19e-9981-46c3-98c4-f7093b58bb75\") "
May 9 00:17:29.279888 kubelet[3167]: I0509 00:17:29.279670 3167 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-clustermesh-secrets\") pod \"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\" (UID: \"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\") "
May 9 00:17:29.280052 kubelet[3167]: I0509 00:17:29.279686 3167 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-cilium-config-path\") pod \"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\" (UID: \"6b9eb4bb-3b6d-40c9-ac68-c052729f1705\") "
May 9 00:17:29.289823 kubelet[3167]: I0509 00:17:29.287030 3167 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6b9eb4bb-3b6d-40c9-ac68-c052729f1705" (UID: "6b9eb4bb-3b6d-40c9-ac68-c052729f1705"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 9 00:17:29.289823 kubelet[3167]: I0509 00:17:29.287036 3167 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6b9eb4bb-3b6d-40c9-ac68-c052729f1705" (UID: "6b9eb4bb-3b6d-40c9-ac68-c052729f1705"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 9 00:17:29.289823 kubelet[3167]: I0509 00:17:29.289332 3167 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6b9eb4bb-3b6d-40c9-ac68-c052729f1705" (UID: "6b9eb4bb-3b6d-40c9-ac68-c052729f1705"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 9 00:17:29.289823 kubelet[3167]: I0509 00:17:29.289350 3167 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6b9eb4bb-3b6d-40c9-ac68-c052729f1705" (UID: "6b9eb4bb-3b6d-40c9-ac68-c052729f1705"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 9 00:17:29.289823 kubelet[3167]: I0509 00:17:29.289379 3167 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6b9eb4bb-3b6d-40c9-ac68-c052729f1705" (UID: "6b9eb4bb-3b6d-40c9-ac68-c052729f1705"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 9 00:17:29.290144 kubelet[3167]: I0509 00:17:29.289399 3167 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6b9eb4bb-3b6d-40c9-ac68-c052729f1705" (UID: "6b9eb4bb-3b6d-40c9-ac68-c052729f1705"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 9 00:17:29.290144 kubelet[3167]: I0509 00:17:29.289413 3167 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6b9eb4bb-3b6d-40c9-ac68-c052729f1705" (UID: "6b9eb4bb-3b6d-40c9-ac68-c052729f1705"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 9 00:17:29.294130 kubelet[3167]: I0509 00:17:29.292945 3167 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-kube-api-access-277wr" (OuterVolumeSpecName: "kube-api-access-277wr") pod "6b9eb4bb-3b6d-40c9-ac68-c052729f1705" (UID: "6b9eb4bb-3b6d-40c9-ac68-c052729f1705"). InnerVolumeSpecName "kube-api-access-277wr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 9 00:17:29.294130 kubelet[3167]: I0509 00:17:29.293018 3167 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6b9eb4bb-3b6d-40c9-ac68-c052729f1705" (UID: "6b9eb4bb-3b6d-40c9-ac68-c052729f1705"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 9 00:17:29.294130 kubelet[3167]: I0509 00:17:29.293035 3167 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-hostproc" (OuterVolumeSpecName: "hostproc") pod "6b9eb4bb-3b6d-40c9-ac68-c052729f1705" (UID: "6b9eb4bb-3b6d-40c9-ac68-c052729f1705"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 9 00:17:29.295019 kubelet[3167]: I0509 00:17:29.294990 3167 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f69f19e-9981-46c3-98c4-f7093b58bb75-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1f69f19e-9981-46c3-98c4-f7093b58bb75" (UID: "1f69f19e-9981-46c3-98c4-f7093b58bb75"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 9 00:17:29.295123 kubelet[3167]: I0509 00:17:29.295112 3167 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6b9eb4bb-3b6d-40c9-ac68-c052729f1705" (UID: "6b9eb4bb-3b6d-40c9-ac68-c052729f1705"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 9 00:17:29.295249 kubelet[3167]: I0509 00:17:29.295238 3167 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-cni-path" (OuterVolumeSpecName: "cni-path") pod "6b9eb4bb-3b6d-40c9-ac68-c052729f1705" (UID: "6b9eb4bb-3b6d-40c9-ac68-c052729f1705"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 9 00:17:29.295359 kubelet[3167]: I0509 00:17:29.295349 3167 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6b9eb4bb-3b6d-40c9-ac68-c052729f1705" (UID: "6b9eb4bb-3b6d-40c9-ac68-c052729f1705"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 9 00:17:29.295923 kubelet[3167]: I0509 00:17:29.295896 3167 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f69f19e-9981-46c3-98c4-f7093b58bb75-kube-api-access-brh4g" (OuterVolumeSpecName: "kube-api-access-brh4g") pod "1f69f19e-9981-46c3-98c4-f7093b58bb75" (UID: "1f69f19e-9981-46c3-98c4-f7093b58bb75"). InnerVolumeSpecName "kube-api-access-brh4g". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 9 00:17:29.298557 kubelet[3167]: I0509 00:17:29.298510 3167 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6b9eb4bb-3b6d-40c9-ac68-c052729f1705" (UID: "6b9eb4bb-3b6d-40c9-ac68-c052729f1705"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 9 00:17:29.380945 kubelet[3167]: I0509 00:17:29.380898 3167 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1f69f19e-9981-46c3-98c4-f7093b58bb75-cilium-config-path\") on node \"ip-172-31-22-98\" DevicePath \"\""
May 9 00:17:29.383652 kubelet[3167]: I0509 00:17:29.383594 3167 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-cni-path\") on node \"ip-172-31-22-98\" DevicePath \"\""
May 9 00:17:29.383652 kubelet[3167]: I0509 00:17:29.383633 3167 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-cilium-cgroup\") on node \"ip-172-31-22-98\" DevicePath \"\""
May 9 00:17:29.383652 kubelet[3167]: I0509 00:17:29.383643 3167 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName:
\"kubernetes.io/host-path/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-xtables-lock\") on node \"ip-172-31-22-98\" DevicePath \"\"" May 9 00:17:29.383652 kubelet[3167]: I0509 00:17:29.383651 3167 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-lib-modules\") on node \"ip-172-31-22-98\" DevicePath \"\"" May 9 00:17:29.383652 kubelet[3167]: I0509 00:17:29.383659 3167 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-bpf-maps\") on node \"ip-172-31-22-98\" DevicePath \"\"" May 9 00:17:29.383652 kubelet[3167]: I0509 00:17:29.383667 3167 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-277wr\" (UniqueName: \"kubernetes.io/projected/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-kube-api-access-277wr\") on node \"ip-172-31-22-98\" DevicePath \"\"" May 9 00:17:29.384028 kubelet[3167]: I0509 00:17:29.383739 3167 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-hostproc\") on node \"ip-172-31-22-98\" DevicePath \"\"" May 9 00:17:29.384028 kubelet[3167]: I0509 00:17:29.383751 3167 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-brh4g\" (UniqueName: \"kubernetes.io/projected/1f69f19e-9981-46c3-98c4-f7093b58bb75-kube-api-access-brh4g\") on node \"ip-172-31-22-98\" DevicePath \"\"" May 9 00:17:29.384028 kubelet[3167]: I0509 00:17:29.383770 3167 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-etc-cni-netd\") on node \"ip-172-31-22-98\" DevicePath \"\"" May 9 00:17:29.384028 kubelet[3167]: I0509 00:17:29.383780 3167 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-clustermesh-secrets\") on node \"ip-172-31-22-98\" DevicePath \"\"" May 9 00:17:29.384028 kubelet[3167]: I0509 00:17:29.383811 3167 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-cilium-config-path\") on node \"ip-172-31-22-98\" DevicePath \"\"" May 9 00:17:29.384028 kubelet[3167]: I0509 00:17:29.383819 3167 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-host-proc-sys-net\") on node \"ip-172-31-22-98\" DevicePath \"\"" May 9 00:17:29.384028 kubelet[3167]: I0509 00:17:29.383827 3167 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-host-proc-sys-kernel\") on node \"ip-172-31-22-98\" DevicePath \"\"" May 9 00:17:29.384028 kubelet[3167]: I0509 00:17:29.383835 3167 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-cilium-run\") on node \"ip-172-31-22-98\" DevicePath \"\"" May 9 00:17:29.384231 kubelet[3167]: I0509 00:17:29.383842 3167 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6b9eb4bb-3b6d-40c9-ac68-c052729f1705-hubble-tls\") on node \"ip-172-31-22-98\" DevicePath \"\"" May 9 00:17:29.416286 kubelet[3167]: I0509 00:17:29.415814 3167 scope.go:117] "RemoveContainer" containerID="7964b74481fe5a6723f91cb99200d4bd96bd0c484c3b471ec9e7788536b11ebd" May 9 00:17:29.420012 systemd[1]: Removed slice kubepods-besteffort-pod1f69f19e_9981_46c3_98c4_f7093b58bb75.slice - libcontainer container kubepods-besteffort-pod1f69f19e_9981_46c3_98c4_f7093b58bb75.slice. 
May 9 00:17:29.426152 containerd[1903]: time="2025-05-09T00:17:29.425562639Z" level=info msg="RemoveContainer for \"7964b74481fe5a6723f91cb99200d4bd96bd0c484c3b471ec9e7788536b11ebd\"" May 9 00:17:29.432778 containerd[1903]: time="2025-05-09T00:17:29.432724107Z" level=info msg="RemoveContainer for \"7964b74481fe5a6723f91cb99200d4bd96bd0c484c3b471ec9e7788536b11ebd\" returns successfully" May 9 00:17:29.434506 kubelet[3167]: I0509 00:17:29.434286 3167 scope.go:117] "RemoveContainer" containerID="7964b74481fe5a6723f91cb99200d4bd96bd0c484c3b471ec9e7788536b11ebd" May 9 00:17:29.435151 containerd[1903]: time="2025-05-09T00:17:29.435056582Z" level=error msg="ContainerStatus for \"7964b74481fe5a6723f91cb99200d4bd96bd0c484c3b471ec9e7788536b11ebd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7964b74481fe5a6723f91cb99200d4bd96bd0c484c3b471ec9e7788536b11ebd\": not found" May 9 00:17:29.435458 kubelet[3167]: E0509 00:17:29.435422 3167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7964b74481fe5a6723f91cb99200d4bd96bd0c484c3b471ec9e7788536b11ebd\": not found" containerID="7964b74481fe5a6723f91cb99200d4bd96bd0c484c3b471ec9e7788536b11ebd" May 9 00:17:29.447359 systemd[1]: Removed slice kubepods-burstable-pod6b9eb4bb_3b6d_40c9_ac68_c052729f1705.slice - libcontainer container kubepods-burstable-pod6b9eb4bb_3b6d_40c9_ac68_c052729f1705.slice. May 9 00:17:29.447983 systemd[1]: kubepods-burstable-pod6b9eb4bb_3b6d_40c9_ac68_c052729f1705.slice: Consumed 8.103s CPU time. 
May 9 00:17:29.463396 kubelet[3167]: I0509 00:17:29.435474 3167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7964b74481fe5a6723f91cb99200d4bd96bd0c484c3b471ec9e7788536b11ebd"} err="failed to get container status \"7964b74481fe5a6723f91cb99200d4bd96bd0c484c3b471ec9e7788536b11ebd\": rpc error: code = NotFound desc = an error occurred when try to find container \"7964b74481fe5a6723f91cb99200d4bd96bd0c484c3b471ec9e7788536b11ebd\": not found" May 9 00:17:29.463396 kubelet[3167]: I0509 00:17:29.463409 3167 scope.go:117] "RemoveContainer" containerID="8f5441740e8c4f2d56851bc4eddfef59d78b2a43be7cc8631ea8a9f182f28d6f" May 9 00:17:29.466764 containerd[1903]: time="2025-05-09T00:17:29.466718291Z" level=info msg="RemoveContainer for \"8f5441740e8c4f2d56851bc4eddfef59d78b2a43be7cc8631ea8a9f182f28d6f\"" May 9 00:17:29.470814 containerd[1903]: time="2025-05-09T00:17:29.470524821Z" level=info msg="RemoveContainer for \"8f5441740e8c4f2d56851bc4eddfef59d78b2a43be7cc8631ea8a9f182f28d6f\" returns successfully" May 9 00:17:29.471091 kubelet[3167]: I0509 00:17:29.471057 3167 scope.go:117] "RemoveContainer" containerID="85ce5e7f84a32018a6df47c3fe4c6294c2999c66c1fc13798cd1fd1eb545fc9d" May 9 00:17:29.472060 containerd[1903]: time="2025-05-09T00:17:29.472031818Z" level=info msg="RemoveContainer for \"85ce5e7f84a32018a6df47c3fe4c6294c2999c66c1fc13798cd1fd1eb545fc9d\"" May 9 00:17:29.477619 containerd[1903]: time="2025-05-09T00:17:29.477018233Z" level=info msg="RemoveContainer for \"85ce5e7f84a32018a6df47c3fe4c6294c2999c66c1fc13798cd1fd1eb545fc9d\" returns successfully" May 9 00:17:29.477756 kubelet[3167]: I0509 00:17:29.477262 3167 scope.go:117] "RemoveContainer" containerID="675365a0cf93cff5de89e8315107ca85b184de82f1edef62fc72617dd0323c52" May 9 00:17:29.481756 containerd[1903]: time="2025-05-09T00:17:29.481710257Z" level=info msg="RemoveContainer for \"675365a0cf93cff5de89e8315107ca85b184de82f1edef62fc72617dd0323c52\"" May 9 
00:17:29.485182 containerd[1903]: time="2025-05-09T00:17:29.485143139Z" level=info msg="RemoveContainer for \"675365a0cf93cff5de89e8315107ca85b184de82f1edef62fc72617dd0323c52\" returns successfully" May 9 00:17:29.485430 kubelet[3167]: I0509 00:17:29.485372 3167 scope.go:117] "RemoveContainer" containerID="80659a58384ab8cd4ad482c738c6d8e23eb1cdd29fb99998b8e1525288218751" May 9 00:17:29.486649 containerd[1903]: time="2025-05-09T00:17:29.486608314Z" level=info msg="RemoveContainer for \"80659a58384ab8cd4ad482c738c6d8e23eb1cdd29fb99998b8e1525288218751\"" May 9 00:17:29.491498 containerd[1903]: time="2025-05-09T00:17:29.490363290Z" level=info msg="RemoveContainer for \"80659a58384ab8cd4ad482c738c6d8e23eb1cdd29fb99998b8e1525288218751\" returns successfully" May 9 00:17:29.491750 kubelet[3167]: I0509 00:17:29.491716 3167 scope.go:117] "RemoveContainer" containerID="670dfc9d6903321ba8c04e3035e0f3d94c6097316bfc32cfc12631dbd730df18" May 9 00:17:29.497463 containerd[1903]: time="2025-05-09T00:17:29.496536475Z" level=info msg="RemoveContainer for \"670dfc9d6903321ba8c04e3035e0f3d94c6097316bfc32cfc12631dbd730df18\"" May 9 00:17:29.502722 containerd[1903]: time="2025-05-09T00:17:29.502498042Z" level=info msg="RemoveContainer for \"670dfc9d6903321ba8c04e3035e0f3d94c6097316bfc32cfc12631dbd730df18\" returns successfully" May 9 00:17:29.503302 kubelet[3167]: I0509 00:17:29.502934 3167 scope.go:117] "RemoveContainer" containerID="8f5441740e8c4f2d56851bc4eddfef59d78b2a43be7cc8631ea8a9f182f28d6f" May 9 00:17:29.504835 containerd[1903]: time="2025-05-09T00:17:29.503639522Z" level=error msg="ContainerStatus for \"8f5441740e8c4f2d56851bc4eddfef59d78b2a43be7cc8631ea8a9f182f28d6f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8f5441740e8c4f2d56851bc4eddfef59d78b2a43be7cc8631ea8a9f182f28d6f\": not found" May 9 00:17:29.504989 kubelet[3167]: E0509 00:17:29.504633 3167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = an error occurred when try to find container \"8f5441740e8c4f2d56851bc4eddfef59d78b2a43be7cc8631ea8a9f182f28d6f\": not found" containerID="8f5441740e8c4f2d56851bc4eddfef59d78b2a43be7cc8631ea8a9f182f28d6f" May 9 00:17:29.504989 kubelet[3167]: I0509 00:17:29.504666 3167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8f5441740e8c4f2d56851bc4eddfef59d78b2a43be7cc8631ea8a9f182f28d6f"} err="failed to get container status \"8f5441740e8c4f2d56851bc4eddfef59d78b2a43be7cc8631ea8a9f182f28d6f\": rpc error: code = NotFound desc = an error occurred when try to find container \"8f5441740e8c4f2d56851bc4eddfef59d78b2a43be7cc8631ea8a9f182f28d6f\": not found" May 9 00:17:29.504989 kubelet[3167]: I0509 00:17:29.504694 3167 scope.go:117] "RemoveContainer" containerID="85ce5e7f84a32018a6df47c3fe4c6294c2999c66c1fc13798cd1fd1eb545fc9d" May 9 00:17:29.505538 containerd[1903]: time="2025-05-09T00:17:29.505477887Z" level=error msg="ContainerStatus for \"85ce5e7f84a32018a6df47c3fe4c6294c2999c66c1fc13798cd1fd1eb545fc9d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"85ce5e7f84a32018a6df47c3fe4c6294c2999c66c1fc13798cd1fd1eb545fc9d\": not found" May 9 00:17:29.505926 kubelet[3167]: E0509 00:17:29.505883 3167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"85ce5e7f84a32018a6df47c3fe4c6294c2999c66c1fc13798cd1fd1eb545fc9d\": not found" containerID="85ce5e7f84a32018a6df47c3fe4c6294c2999c66c1fc13798cd1fd1eb545fc9d" May 9 00:17:29.506093 kubelet[3167]: I0509 00:17:29.506051 3167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"85ce5e7f84a32018a6df47c3fe4c6294c2999c66c1fc13798cd1fd1eb545fc9d"} err="failed to get container status \"85ce5e7f84a32018a6df47c3fe4c6294c2999c66c1fc13798cd1fd1eb545fc9d\": rpc error: code = NotFound desc = an 
error occurred when try to find container \"85ce5e7f84a32018a6df47c3fe4c6294c2999c66c1fc13798cd1fd1eb545fc9d\": not found" May 9 00:17:29.506195 kubelet[3167]: I0509 00:17:29.506183 3167 scope.go:117] "RemoveContainer" containerID="675365a0cf93cff5de89e8315107ca85b184de82f1edef62fc72617dd0323c52" May 9 00:17:29.506576 containerd[1903]: time="2025-05-09T00:17:29.506502141Z" level=error msg="ContainerStatus for \"675365a0cf93cff5de89e8315107ca85b184de82f1edef62fc72617dd0323c52\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"675365a0cf93cff5de89e8315107ca85b184de82f1edef62fc72617dd0323c52\": not found" May 9 00:17:29.506727 kubelet[3167]: E0509 00:17:29.506706 3167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"675365a0cf93cff5de89e8315107ca85b184de82f1edef62fc72617dd0323c52\": not found" containerID="675365a0cf93cff5de89e8315107ca85b184de82f1edef62fc72617dd0323c52" May 9 00:17:29.506894 kubelet[3167]: I0509 00:17:29.506873 3167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"675365a0cf93cff5de89e8315107ca85b184de82f1edef62fc72617dd0323c52"} err="failed to get container status \"675365a0cf93cff5de89e8315107ca85b184de82f1edef62fc72617dd0323c52\": rpc error: code = NotFound desc = an error occurred when try to find container \"675365a0cf93cff5de89e8315107ca85b184de82f1edef62fc72617dd0323c52\": not found" May 9 00:17:29.506987 kubelet[3167]: I0509 00:17:29.506965 3167 scope.go:117] "RemoveContainer" containerID="80659a58384ab8cd4ad482c738c6d8e23eb1cdd29fb99998b8e1525288218751" May 9 00:17:29.507212 containerd[1903]: time="2025-05-09T00:17:29.507182974Z" level=error msg="ContainerStatus for \"80659a58384ab8cd4ad482c738c6d8e23eb1cdd29fb99998b8e1525288218751\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"80659a58384ab8cd4ad482c738c6d8e23eb1cdd29fb99998b8e1525288218751\": not found" May 9 00:17:29.507352 kubelet[3167]: E0509 00:17:29.507333 3167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"80659a58384ab8cd4ad482c738c6d8e23eb1cdd29fb99998b8e1525288218751\": not found" containerID="80659a58384ab8cd4ad482c738c6d8e23eb1cdd29fb99998b8e1525288218751" May 9 00:17:29.507433 kubelet[3167]: I0509 00:17:29.507362 3167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"80659a58384ab8cd4ad482c738c6d8e23eb1cdd29fb99998b8e1525288218751"} err="failed to get container status \"80659a58384ab8cd4ad482c738c6d8e23eb1cdd29fb99998b8e1525288218751\": rpc error: code = NotFound desc = an error occurred when try to find container \"80659a58384ab8cd4ad482c738c6d8e23eb1cdd29fb99998b8e1525288218751\": not found" May 9 00:17:29.507433 kubelet[3167]: I0509 00:17:29.507382 3167 scope.go:117] "RemoveContainer" containerID="670dfc9d6903321ba8c04e3035e0f3d94c6097316bfc32cfc12631dbd730df18" May 9 00:17:29.507596 containerd[1903]: time="2025-05-09T00:17:29.507559608Z" level=error msg="ContainerStatus for \"670dfc9d6903321ba8c04e3035e0f3d94c6097316bfc32cfc12631dbd730df18\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"670dfc9d6903321ba8c04e3035e0f3d94c6097316bfc32cfc12631dbd730df18\": not found" May 9 00:17:29.507703 kubelet[3167]: E0509 00:17:29.507675 3167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"670dfc9d6903321ba8c04e3035e0f3d94c6097316bfc32cfc12631dbd730df18\": not found" containerID="670dfc9d6903321ba8c04e3035e0f3d94c6097316bfc32cfc12631dbd730df18" May 9 00:17:29.507774 kubelet[3167]: I0509 00:17:29.507703 3167 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"670dfc9d6903321ba8c04e3035e0f3d94c6097316bfc32cfc12631dbd730df18"} err="failed to get container status \"670dfc9d6903321ba8c04e3035e0f3d94c6097316bfc32cfc12631dbd730df18\": rpc error: code = NotFound desc = an error occurred when try to find container \"670dfc9d6903321ba8c04e3035e0f3d94c6097316bfc32cfc12631dbd730df18\": not found" May 9 00:17:29.876951 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77836ba27b4d3474b85fdb89b0f795d4ad526904cd600ebcae41f6c65d7f520e-rootfs.mount: Deactivated successfully. May 9 00:17:29.877072 systemd[1]: var-lib-kubelet-pods-1f69f19e\x2d9981\x2d46c3\x2d98c4\x2df7093b58bb75-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbrh4g.mount: Deactivated successfully. May 9 00:17:29.877181 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-77836ba27b4d3474b85fdb89b0f795d4ad526904cd600ebcae41f6c65d7f520e-shm.mount: Deactivated successfully. May 9 00:17:29.877270 systemd[1]: var-lib-kubelet-pods-6b9eb4bb\x2d3b6d\x2d40c9\x2dac68\x2dc052729f1705-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d277wr.mount: Deactivated successfully. May 9 00:17:29.877345 systemd[1]: var-lib-kubelet-pods-6b9eb4bb\x2d3b6d\x2d40c9\x2dac68\x2dc052729f1705-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 9 00:17:29.877400 systemd[1]: var-lib-kubelet-pods-6b9eb4bb\x2d3b6d\x2d40c9\x2dac68\x2dc052729f1705-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 9 00:17:30.783205 sshd[5009]: Connection closed by 139.178.68.195 port 53074 May 9 00:17:30.784281 sshd-session[5007]: pam_unix(sshd:session): session closed for user core May 9 00:17:30.787568 systemd[1]: sshd@24-172.31.22.98:22-139.178.68.195:53074.service: Deactivated successfully. May 9 00:17:30.789533 systemd[1]: session-25.scope: Deactivated successfully. May 9 00:17:30.791271 systemd-logind[1885]: Session 25 logged out. Waiting for processes to exit. 
May 9 00:17:30.793110 systemd-logind[1885]: Removed session 25. May 9 00:17:30.816110 systemd[1]: Started sshd@25-172.31.22.98:22-139.178.68.195:53076.service - OpenSSH per-connection server daemon (139.178.68.195:53076). May 9 00:17:30.827547 kubelet[3167]: I0509 00:17:30.827505 3167 setters.go:602] "Node became not ready" node="ip-172-31-22-98" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-09T00:17:30Z","lastTransitionTime":"2025-05-09T00:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 9 00:17:31.008692 sshd[5165]: Accepted publickey for core from 139.178.68.195 port 53076 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM May 9 00:17:31.010122 sshd-session[5165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:17:31.016052 systemd-logind[1885]: New session 26 of user core. May 9 00:17:31.024049 systemd[1]: Started session-26.scope - Session 26 of User core. 
May 9 00:17:31.047386 ntpd[1876]: Deleting interface #12 lxc_health, fe80::40c9:eaff:fe06:be69%8#123, interface stats: received=0, sent=0, dropped=0, active_time=62 secs May 9 00:17:31.048035 ntpd[1876]: 9 May 00:17:31 ntpd[1876]: Deleting interface #12 lxc_health, fe80::40c9:eaff:fe06:be69%8#123, interface stats: received=0, sent=0, dropped=0, active_time=62 secs May 9 00:17:31.065234 kubelet[3167]: I0509 00:17:31.065198 3167 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f69f19e-9981-46c3-98c4-f7093b58bb75" path="/var/lib/kubelet/pods/1f69f19e-9981-46c3-98c4-f7093b58bb75/volumes" May 9 00:17:31.065755 kubelet[3167]: I0509 00:17:31.065729 3167 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b9eb4bb-3b6d-40c9-ac68-c052729f1705" path="/var/lib/kubelet/pods/6b9eb4bb-3b6d-40c9-ac68-c052729f1705/volumes" May 9 00:17:32.142037 sshd[5167]: Connection closed by 139.178.68.195 port 53076 May 9 00:17:32.142710 sshd-session[5165]: pam_unix(sshd:session): session closed for user core May 9 00:17:32.146514 systemd-logind[1885]: Session 26 logged out. Waiting for processes to exit. May 9 00:17:32.147857 systemd[1]: sshd@25-172.31.22.98:22-139.178.68.195:53076.service: Deactivated successfully. May 9 00:17:32.153224 systemd[1]: session-26.scope: Deactivated successfully. May 9 00:17:32.158762 systemd-logind[1885]: Removed session 26. May 9 00:17:32.182178 systemd[1]: Started sshd@26-172.31.22.98:22-139.178.68.195:53078.service - OpenSSH per-connection server daemon (139.178.68.195:53078). 
May 9 00:17:32.201919 kubelet[3167]: I0509 00:17:32.201876 3167 memory_manager.go:355] "RemoveStaleState removing state" podUID="1f69f19e-9981-46c3-98c4-f7093b58bb75" containerName="cilium-operator" May 9 00:17:32.201919 kubelet[3167]: I0509 00:17:32.201911 3167 memory_manager.go:355] "RemoveStaleState removing state" podUID="6b9eb4bb-3b6d-40c9-ac68-c052729f1705" containerName="cilium-agent" May 9 00:17:32.241483 kubelet[3167]: I0509 00:17:32.241444 3167 status_manager.go:890] "Failed to get status for pod" podUID="f51edaf2-999e-4860-a84c-9680736007b0" pod="kube-system/cilium-kzqgp" err="pods \"cilium-kzqgp\" is forbidden: User \"system:node:ip-172-31-22-98\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-22-98' and this object" May 9 00:17:32.249077 systemd[1]: Created slice kubepods-burstable-podf51edaf2_999e_4860_a84c_9680736007b0.slice - libcontainer container kubepods-burstable-podf51edaf2_999e_4860_a84c_9680736007b0.slice. May 9 00:17:32.373917 sshd[5177]: Accepted publickey for core from 139.178.68.195 port 53078 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM May 9 00:17:32.375816 sshd-session[5177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:17:32.382141 systemd-logind[1885]: New session 27 of user core. May 9 00:17:32.388048 systemd[1]: Started session-27.scope - Session 27 of User core. 
May 9 00:17:32.407528 kubelet[3167]: I0509 00:17:32.407055 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f51edaf2-999e-4860-a84c-9680736007b0-cilium-run\") pod \"cilium-kzqgp\" (UID: \"f51edaf2-999e-4860-a84c-9680736007b0\") " pod="kube-system/cilium-kzqgp" May 9 00:17:32.407528 kubelet[3167]: I0509 00:17:32.407142 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f51edaf2-999e-4860-a84c-9680736007b0-host-proc-sys-net\") pod \"cilium-kzqgp\" (UID: \"f51edaf2-999e-4860-a84c-9680736007b0\") " pod="kube-system/cilium-kzqgp" May 9 00:17:32.407528 kubelet[3167]: I0509 00:17:32.407160 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ghr7\" (UniqueName: \"kubernetes.io/projected/f51edaf2-999e-4860-a84c-9680736007b0-kube-api-access-5ghr7\") pod \"cilium-kzqgp\" (UID: \"f51edaf2-999e-4860-a84c-9680736007b0\") " pod="kube-system/cilium-kzqgp" May 9 00:17:32.407528 kubelet[3167]: I0509 00:17:32.407178 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f51edaf2-999e-4860-a84c-9680736007b0-host-proc-sys-kernel\") pod \"cilium-kzqgp\" (UID: \"f51edaf2-999e-4860-a84c-9680736007b0\") " pod="kube-system/cilium-kzqgp" May 9 00:17:32.407528 kubelet[3167]: I0509 00:17:32.407193 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f51edaf2-999e-4860-a84c-9680736007b0-clustermesh-secrets\") pod \"cilium-kzqgp\" (UID: \"f51edaf2-999e-4860-a84c-9680736007b0\") " pod="kube-system/cilium-kzqgp" May 9 00:17:32.407741 kubelet[3167]: I0509 00:17:32.407211 3167 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f51edaf2-999e-4860-a84c-9680736007b0-bpf-maps\") pod \"cilium-kzqgp\" (UID: \"f51edaf2-999e-4860-a84c-9680736007b0\") " pod="kube-system/cilium-kzqgp" May 9 00:17:32.407741 kubelet[3167]: I0509 00:17:32.407226 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f51edaf2-999e-4860-a84c-9680736007b0-cilium-config-path\") pod \"cilium-kzqgp\" (UID: \"f51edaf2-999e-4860-a84c-9680736007b0\") " pod="kube-system/cilium-kzqgp" May 9 00:17:32.407741 kubelet[3167]: I0509 00:17:32.407241 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f51edaf2-999e-4860-a84c-9680736007b0-cilium-cgroup\") pod \"cilium-kzqgp\" (UID: \"f51edaf2-999e-4860-a84c-9680736007b0\") " pod="kube-system/cilium-kzqgp" May 9 00:17:32.407741 kubelet[3167]: I0509 00:17:32.407256 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f51edaf2-999e-4860-a84c-9680736007b0-xtables-lock\") pod \"cilium-kzqgp\" (UID: \"f51edaf2-999e-4860-a84c-9680736007b0\") " pod="kube-system/cilium-kzqgp" May 9 00:17:32.407741 kubelet[3167]: I0509 00:17:32.407283 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f51edaf2-999e-4860-a84c-9680736007b0-cilium-ipsec-secrets\") pod \"cilium-kzqgp\" (UID: \"f51edaf2-999e-4860-a84c-9680736007b0\") " pod="kube-system/cilium-kzqgp" May 9 00:17:32.407741 kubelet[3167]: I0509 00:17:32.407304 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/f51edaf2-999e-4860-a84c-9680736007b0-hubble-tls\") pod \"cilium-kzqgp\" (UID: \"f51edaf2-999e-4860-a84c-9680736007b0\") " pod="kube-system/cilium-kzqgp" May 9 00:17:32.407921 kubelet[3167]: I0509 00:17:32.407326 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f51edaf2-999e-4860-a84c-9680736007b0-etc-cni-netd\") pod \"cilium-kzqgp\" (UID: \"f51edaf2-999e-4860-a84c-9680736007b0\") " pod="kube-system/cilium-kzqgp" May 9 00:17:32.407921 kubelet[3167]: I0509 00:17:32.407342 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f51edaf2-999e-4860-a84c-9680736007b0-hostproc\") pod \"cilium-kzqgp\" (UID: \"f51edaf2-999e-4860-a84c-9680736007b0\") " pod="kube-system/cilium-kzqgp" May 9 00:17:32.407921 kubelet[3167]: I0509 00:17:32.407356 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f51edaf2-999e-4860-a84c-9680736007b0-cni-path\") pod \"cilium-kzqgp\" (UID: \"f51edaf2-999e-4860-a84c-9680736007b0\") " pod="kube-system/cilium-kzqgp" May 9 00:17:32.407921 kubelet[3167]: I0509 00:17:32.407369 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f51edaf2-999e-4860-a84c-9680736007b0-lib-modules\") pod \"cilium-kzqgp\" (UID: \"f51edaf2-999e-4860-a84c-9680736007b0\") " pod="kube-system/cilium-kzqgp" May 9 00:17:32.503775 sshd[5179]: Connection closed by 139.178.68.195 port 53078 May 9 00:17:32.505280 sshd-session[5177]: pam_unix(sshd:session): session closed for user core May 9 00:17:32.509565 systemd[1]: sshd@26-172.31.22.98:22-139.178.68.195:53078.service: Deactivated successfully. 
May 9 00:17:32.542216 systemd[1]: session-27.scope: Deactivated successfully. May 9 00:17:32.548022 systemd-logind[1885]: Session 27 logged out. Waiting for processes to exit. May 9 00:17:32.556080 containerd[1903]: time="2025-05-09T00:17:32.556036932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kzqgp,Uid:f51edaf2-999e-4860-a84c-9680736007b0,Namespace:kube-system,Attempt:0,}" May 9 00:17:32.572764 systemd[1]: Started sshd@27-172.31.22.98:22-139.178.68.195:53090.service - OpenSSH per-connection server daemon (139.178.68.195:53090). May 9 00:17:32.575968 systemd-logind[1885]: Removed session 27. May 9 00:17:32.610144 containerd[1903]: time="2025-05-09T00:17:32.610037196Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:17:32.610284 containerd[1903]: time="2025-05-09T00:17:32.610243010Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:17:32.610284 containerd[1903]: time="2025-05-09T00:17:32.610358533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:17:32.610698 containerd[1903]: time="2025-05-09T00:17:32.610578427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:17:32.633090 systemd[1]: Started cri-containerd-462f5b74d4a0652f711fa04748f3b8470136e59820086b65b3fcff089e8f3cf3.scope - libcontainer container 462f5b74d4a0652f711fa04748f3b8470136e59820086b65b3fcff089e8f3cf3. 
May 9 00:17:32.662692 containerd[1903]: time="2025-05-09T00:17:32.662575146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kzqgp,Uid:f51edaf2-999e-4860-a84c-9680736007b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"462f5b74d4a0652f711fa04748f3b8470136e59820086b65b3fcff089e8f3cf3\""
May 9 00:17:32.667669 containerd[1903]: time="2025-05-09T00:17:32.667621456Z" level=info msg="CreateContainer within sandbox \"462f5b74d4a0652f711fa04748f3b8470136e59820086b65b3fcff089e8f3cf3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 9 00:17:32.682251 containerd[1903]: time="2025-05-09T00:17:32.682186450Z" level=info msg="CreateContainer within sandbox \"462f5b74d4a0652f711fa04748f3b8470136e59820086b65b3fcff089e8f3cf3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8f815fc6417aa657cb49aa10db1ac8969108d0e7f4c62426e8aad916ac6c6310\""
May 9 00:17:32.684843 containerd[1903]: time="2025-05-09T00:17:32.684242688Z" level=info msg="StartContainer for \"8f815fc6417aa657cb49aa10db1ac8969108d0e7f4c62426e8aad916ac6c6310\""
May 9 00:17:32.712100 systemd[1]: Started cri-containerd-8f815fc6417aa657cb49aa10db1ac8969108d0e7f4c62426e8aad916ac6c6310.scope - libcontainer container 8f815fc6417aa657cb49aa10db1ac8969108d0e7f4c62426e8aad916ac6c6310.
May 9 00:17:32.744393 containerd[1903]: time="2025-05-09T00:17:32.744346069Z" level=info msg="StartContainer for \"8f815fc6417aa657cb49aa10db1ac8969108d0e7f4c62426e8aad916ac6c6310\" returns successfully"
May 9 00:17:32.758863 sshd[5189]: Accepted publickey for core from 139.178.68.195 port 53090 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM
May 9 00:17:32.756722 sshd-session[5189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:17:32.764763 systemd-logind[1885]: New session 28 of user core.
May 9 00:17:32.767955 systemd[1]: Started session-28.scope - Session 28 of User core.
May 9 00:17:33.077038 systemd[1]: cri-containerd-8f815fc6417aa657cb49aa10db1ac8969108d0e7f4c62426e8aad916ac6c6310.scope: Deactivated successfully.
May 9 00:17:33.115137 containerd[1903]: time="2025-05-09T00:17:33.115074720Z" level=info msg="shim disconnected" id=8f815fc6417aa657cb49aa10db1ac8969108d0e7f4c62426e8aad916ac6c6310 namespace=k8s.io
May 9 00:17:33.115137 containerd[1903]: time="2025-05-09T00:17:33.115132274Z" level=warning msg="cleaning up after shim disconnected" id=8f815fc6417aa657cb49aa10db1ac8969108d0e7f4c62426e8aad916ac6c6310 namespace=k8s.io
May 9 00:17:33.115137 containerd[1903]: time="2025-05-09T00:17:33.115143292Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:17:33.440597 containerd[1903]: time="2025-05-09T00:17:33.440037476Z" level=info msg="CreateContainer within sandbox \"462f5b74d4a0652f711fa04748f3b8470136e59820086b65b3fcff089e8f3cf3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 9 00:17:33.462600 containerd[1903]: time="2025-05-09T00:17:33.462558803Z" level=info msg="CreateContainer within sandbox \"462f5b74d4a0652f711fa04748f3b8470136e59820086b65b3fcff089e8f3cf3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d78a202153a0e4275e68a46f24cbf825d5f54525d8f111f6a2fb4e3832ab9d00\""
May 9 00:17:33.463306 containerd[1903]: time="2025-05-09T00:17:33.463273097Z" level=info msg="StartContainer for \"d78a202153a0e4275e68a46f24cbf825d5f54525d8f111f6a2fb4e3832ab9d00\""
May 9 00:17:33.496110 systemd[1]: Started cri-containerd-d78a202153a0e4275e68a46f24cbf825d5f54525d8f111f6a2fb4e3832ab9d00.scope - libcontainer container d78a202153a0e4275e68a46f24cbf825d5f54525d8f111f6a2fb4e3832ab9d00.
May 9 00:17:33.535407 containerd[1903]: time="2025-05-09T00:17:33.535361484Z" level=info msg="StartContainer for \"d78a202153a0e4275e68a46f24cbf825d5f54525d8f111f6a2fb4e3832ab9d00\" returns successfully"
May 9 00:17:33.646576 systemd[1]: cri-containerd-d78a202153a0e4275e68a46f24cbf825d5f54525d8f111f6a2fb4e3832ab9d00.scope: Deactivated successfully.
May 9 00:17:33.672776 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d78a202153a0e4275e68a46f24cbf825d5f54525d8f111f6a2fb4e3832ab9d00-rootfs.mount: Deactivated successfully.
May 9 00:17:33.683956 containerd[1903]: time="2025-05-09T00:17:33.683775784Z" level=info msg="shim disconnected" id=d78a202153a0e4275e68a46f24cbf825d5f54525d8f111f6a2fb4e3832ab9d00 namespace=k8s.io
May 9 00:17:33.683956 containerd[1903]: time="2025-05-09T00:17:33.683950323Z" level=warning msg="cleaning up after shim disconnected" id=d78a202153a0e4275e68a46f24cbf825d5f54525d8f111f6a2fb4e3832ab9d00 namespace=k8s.io
May 9 00:17:33.683956 containerd[1903]: time="2025-05-09T00:17:33.683963309Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:17:34.200521 kubelet[3167]: E0509 00:17:34.200482 3167 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 9 00:17:34.448155 containerd[1903]: time="2025-05-09T00:17:34.447638258Z" level=info msg="CreateContainer within sandbox \"462f5b74d4a0652f711fa04748f3b8470136e59820086b65b3fcff089e8f3cf3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 9 00:17:34.474537 containerd[1903]: time="2025-05-09T00:17:34.474415774Z" level=info msg="CreateContainer within sandbox \"462f5b74d4a0652f711fa04748f3b8470136e59820086b65b3fcff089e8f3cf3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8c9487e0230de8d941d325f19ed44c1b04779c8a32ee7c4c9fea1dea7b79d79e\""
May 9 00:17:34.475759 containerd[1903]: time="2025-05-09T00:17:34.475723835Z" level=info msg="StartContainer for \"8c9487e0230de8d941d325f19ed44c1b04779c8a32ee7c4c9fea1dea7b79d79e\""
May 9 00:17:34.523074 systemd[1]: Started cri-containerd-8c9487e0230de8d941d325f19ed44c1b04779c8a32ee7c4c9fea1dea7b79d79e.scope - libcontainer container 8c9487e0230de8d941d325f19ed44c1b04779c8a32ee7c4c9fea1dea7b79d79e.
May 9 00:17:34.562636 containerd[1903]: time="2025-05-09T00:17:34.562263057Z" level=info msg="StartContainer for \"8c9487e0230de8d941d325f19ed44c1b04779c8a32ee7c4c9fea1dea7b79d79e\" returns successfully"
May 9 00:17:34.644591 systemd[1]: cri-containerd-8c9487e0230de8d941d325f19ed44c1b04779c8a32ee7c4c9fea1dea7b79d79e.scope: Deactivated successfully.
May 9 00:17:34.671738 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c9487e0230de8d941d325f19ed44c1b04779c8a32ee7c4c9fea1dea7b79d79e-rootfs.mount: Deactivated successfully.
May 9 00:17:34.685099 containerd[1903]: time="2025-05-09T00:17:34.685008213Z" level=info msg="shim disconnected" id=8c9487e0230de8d941d325f19ed44c1b04779c8a32ee7c4c9fea1dea7b79d79e namespace=k8s.io
May 9 00:17:34.685099 containerd[1903]: time="2025-05-09T00:17:34.685071983Z" level=warning msg="cleaning up after shim disconnected" id=8c9487e0230de8d941d325f19ed44c1b04779c8a32ee7c4c9fea1dea7b79d79e namespace=k8s.io
May 9 00:17:34.685099 containerd[1903]: time="2025-05-09T00:17:34.685085536Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:17:35.449804 containerd[1903]: time="2025-05-09T00:17:35.449737112Z" level=info msg="CreateContainer within sandbox \"462f5b74d4a0652f711fa04748f3b8470136e59820086b65b3fcff089e8f3cf3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 9 00:17:35.475934 containerd[1903]: time="2025-05-09T00:17:35.475872076Z" level=info msg="CreateContainer within sandbox \"462f5b74d4a0652f711fa04748f3b8470136e59820086b65b3fcff089e8f3cf3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b91fc7832d4f97209f45a08f19344b313d29955c4466c983d14a7aa7a968aebc\""
May 9 00:17:35.477190 containerd[1903]: time="2025-05-09T00:17:35.476320786Z" level=info msg="StartContainer for \"b91fc7832d4f97209f45a08f19344b313d29955c4466c983d14a7aa7a968aebc\""
May 9 00:17:35.514058 systemd[1]: Started cri-containerd-b91fc7832d4f97209f45a08f19344b313d29955c4466c983d14a7aa7a968aebc.scope - libcontainer container b91fc7832d4f97209f45a08f19344b313d29955c4466c983d14a7aa7a968aebc.
May 9 00:17:35.547320 systemd[1]: cri-containerd-b91fc7832d4f97209f45a08f19344b313d29955c4466c983d14a7aa7a968aebc.scope: Deactivated successfully.
May 9 00:17:35.552745 containerd[1903]: time="2025-05-09T00:17:35.552556891Z" level=info msg="StartContainer for \"b91fc7832d4f97209f45a08f19344b313d29955c4466c983d14a7aa7a968aebc\" returns successfully"
May 9 00:17:35.576418 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b91fc7832d4f97209f45a08f19344b313d29955c4466c983d14a7aa7a968aebc-rootfs.mount: Deactivated successfully.
May 9 00:17:35.584080 containerd[1903]: time="2025-05-09T00:17:35.584019836Z" level=info msg="shim disconnected" id=b91fc7832d4f97209f45a08f19344b313d29955c4466c983d14a7aa7a968aebc namespace=k8s.io
May 9 00:17:35.584080 containerd[1903]: time="2025-05-09T00:17:35.584070840Z" level=warning msg="cleaning up after shim disconnected" id=b91fc7832d4f97209f45a08f19344b313d29955c4466c983d14a7aa7a968aebc namespace=k8s.io
May 9 00:17:35.584080 containerd[1903]: time="2025-05-09T00:17:35.584081736Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:17:36.453767 containerd[1903]: time="2025-05-09T00:17:36.453723568Z" level=info msg="CreateContainer within sandbox \"462f5b74d4a0652f711fa04748f3b8470136e59820086b65b3fcff089e8f3cf3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 9 00:17:36.486942 containerd[1903]: time="2025-05-09T00:17:36.486894253Z" level=info msg="CreateContainer within sandbox \"462f5b74d4a0652f711fa04748f3b8470136e59820086b65b3fcff089e8f3cf3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"052d36b0d5c6170c8a1a9c77dd98ecf4c5f7acc431e44bb66f01852b7f7dcbc0\""
May 9 00:17:36.487564 containerd[1903]: time="2025-05-09T00:17:36.487519517Z" level=info msg="StartContainer for \"052d36b0d5c6170c8a1a9c77dd98ecf4c5f7acc431e44bb66f01852b7f7dcbc0\""
May 9 00:17:36.518047 systemd[1]: Started cri-containerd-052d36b0d5c6170c8a1a9c77dd98ecf4c5f7acc431e44bb66f01852b7f7dcbc0.scope - libcontainer container 052d36b0d5c6170c8a1a9c77dd98ecf4c5f7acc431e44bb66f01852b7f7dcbc0.
May 9 00:17:36.559730 containerd[1903]: time="2025-05-09T00:17:36.559616640Z" level=info msg="StartContainer for \"052d36b0d5c6170c8a1a9c77dd98ecf4c5f7acc431e44bb66f01852b7f7dcbc0\" returns successfully"
May 9 00:17:36.584321 systemd[1]: run-containerd-runc-k8s.io-052d36b0d5c6170c8a1a9c77dd98ecf4c5f7acc431e44bb66f01852b7f7dcbc0-runc.8IUZML.mount: Deactivated successfully.
May 9 00:17:38.475163 kubelet[3167]: I0509 00:17:38.475077 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kzqgp" podStartSLOduration=6.475057709 podStartE2EDuration="6.475057709s" podCreationTimestamp="2025-05-09 00:17:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:17:38.474815527 +0000 UTC m=+99.549579918" watchObservedRunningTime="2025-05-09 00:17:38.475057709 +0000 UTC m=+99.549822106"
May 9 00:17:39.925829 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 9 00:17:42.275557 systemd[1]: run-containerd-runc-k8s.io-052d36b0d5c6170c8a1a9c77dd98ecf4c5f7acc431e44bb66f01852b7f7dcbc0-runc.79hc3d.mount: Deactivated successfully.
May 9 00:17:42.353843 kubelet[3167]: E0509 00:17:42.353749 3167 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:51330->127.0.0.1:41079: write tcp 127.0.0.1:51330->127.0.0.1:41079: write: broken pipe
May 9 00:17:43.181128 systemd-networkd[1817]: lxc_health: Link UP
May 9 00:17:43.189559 systemd-networkd[1817]: lxc_health: Gained carrier
May 9 00:17:43.189948 (udev-worker)[6056]: Network interface NamePolicy= disabled on kernel command line.
May 9 00:17:44.596563 systemd-networkd[1817]: lxc_health: Gained IPv6LL
May 9 00:17:46.907771 kubelet[3167]: E0509 00:17:46.907568 3167 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:51348->127.0.0.1:41079: write tcp 127.0.0.1:51348->127.0.0.1:41079: write: broken pipe
May 9 00:17:47.047485 ntpd[1876]: Listen normally on 15 lxc_health [fe80::7c70:8eff:fea7:e7d7%14]:123
May 9 00:17:47.048768 ntpd[1876]: 9 May 00:17:47 ntpd[1876]: Listen normally on 15 lxc_health [fe80::7c70:8eff:fea7:e7d7%14]:123
May 9 00:17:49.126698 sshd[5262]: Connection closed by 139.178.68.195 port 53090
May 9 00:17:49.128212 sshd-session[5189]: pam_unix(sshd:session): session closed for user core
May 9 00:17:49.135505 systemd[1]: sshd@27-172.31.22.98:22-139.178.68.195:53090.service: Deactivated successfully.
May 9 00:17:49.138494 systemd[1]: session-28.scope: Deactivated successfully.
May 9 00:17:49.139457 systemd-logind[1885]: Session 28 logged out. Waiting for processes to exit.
May 9 00:17:49.140939 systemd-logind[1885]: Removed session 28.
May 9 00:17:59.064170 containerd[1903]: time="2025-05-09T00:17:59.063517564Z" level=info msg="StopPodSandbox for \"77836ba27b4d3474b85fdb89b0f795d4ad526904cd600ebcae41f6c65d7f520e\""
May 9 00:17:59.064170 containerd[1903]: time="2025-05-09T00:17:59.063633556Z" level=info msg="TearDown network for sandbox \"77836ba27b4d3474b85fdb89b0f795d4ad526904cd600ebcae41f6c65d7f520e\" successfully"
May 9 00:17:59.064170 containerd[1903]: time="2025-05-09T00:17:59.063649318Z" level=info msg="StopPodSandbox for \"77836ba27b4d3474b85fdb89b0f795d4ad526904cd600ebcae41f6c65d7f520e\" returns successfully"
May 9 00:17:59.064745 containerd[1903]: time="2025-05-09T00:17:59.064303883Z" level=info msg="RemovePodSandbox for \"77836ba27b4d3474b85fdb89b0f795d4ad526904cd600ebcae41f6c65d7f520e\""
May 9 00:17:59.064745 containerd[1903]: time="2025-05-09T00:17:59.064369027Z" level=info msg="Forcibly stopping sandbox \"77836ba27b4d3474b85fdb89b0f795d4ad526904cd600ebcae41f6c65d7f520e\""
May 9 00:17:59.064745 containerd[1903]: time="2025-05-09T00:17:59.064464322Z" level=info msg="TearDown network for sandbox \"77836ba27b4d3474b85fdb89b0f795d4ad526904cd600ebcae41f6c65d7f520e\" successfully"
May 9 00:17:59.071967 containerd[1903]: time="2025-05-09T00:17:59.071912185Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"77836ba27b4d3474b85fdb89b0f795d4ad526904cd600ebcae41f6c65d7f520e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 9 00:17:59.072103 containerd[1903]: time="2025-05-09T00:17:59.071998992Z" level=info msg="RemovePodSandbox \"77836ba27b4d3474b85fdb89b0f795d4ad526904cd600ebcae41f6c65d7f520e\" returns successfully"
May 9 00:17:59.072558 containerd[1903]: time="2025-05-09T00:17:59.072524654Z" level=info msg="StopPodSandbox for \"b3a11354a8f108affb9abce8fd73e985bd58e7e71e3bb0377f9c20d0bfe508e4\""
May 9 00:17:59.072656 containerd[1903]: time="2025-05-09T00:17:59.072612090Z" level=info msg="TearDown network for sandbox \"b3a11354a8f108affb9abce8fd73e985bd58e7e71e3bb0377f9c20d0bfe508e4\" successfully"
May 9 00:17:59.072656 containerd[1903]: time="2025-05-09T00:17:59.072623902Z" level=info msg="StopPodSandbox for \"b3a11354a8f108affb9abce8fd73e985bd58e7e71e3bb0377f9c20d0bfe508e4\" returns successfully"
May 9 00:17:59.073007 containerd[1903]: time="2025-05-09T00:17:59.072978441Z" level=info msg="RemovePodSandbox for \"b3a11354a8f108affb9abce8fd73e985bd58e7e71e3bb0377f9c20d0bfe508e4\""
May 9 00:17:59.073007 containerd[1903]: time="2025-05-09T00:17:59.072999688Z" level=info msg="Forcibly stopping sandbox \"b3a11354a8f108affb9abce8fd73e985bd58e7e71e3bb0377f9c20d0bfe508e4\""
May 9 00:17:59.073092 containerd[1903]: time="2025-05-09T00:17:59.073049442Z" level=info msg="TearDown network for sandbox \"b3a11354a8f108affb9abce8fd73e985bd58e7e71e3bb0377f9c20d0bfe508e4\" successfully"
May 9 00:17:59.136023 containerd[1903]: time="2025-05-09T00:17:59.135881840Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b3a11354a8f108affb9abce8fd73e985bd58e7e71e3bb0377f9c20d0bfe508e4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 9 00:17:59.136729 containerd[1903]: time="2025-05-09T00:17:59.136398453Z" level=info msg="RemovePodSandbox \"b3a11354a8f108affb9abce8fd73e985bd58e7e71e3bb0377f9c20d0bfe508e4\" returns successfully"
May 9 00:18:04.047172 systemd[1]: cri-containerd-f52eb0c73a2a3c34c2285e246b98e9b2d28a8a4dfd559416e7803d9d339ccd8a.scope: Deactivated successfully.
May 9 00:18:04.047482 systemd[1]: cri-containerd-f52eb0c73a2a3c34c2285e246b98e9b2d28a8a4dfd559416e7803d9d339ccd8a.scope: Consumed 2.814s CPU time, 21.5M memory peak, 0B memory swap peak.
May 9 00:18:04.078882 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f52eb0c73a2a3c34c2285e246b98e9b2d28a8a4dfd559416e7803d9d339ccd8a-rootfs.mount: Deactivated successfully.
May 9 00:18:04.117520 containerd[1903]: time="2025-05-09T00:18:04.117450273Z" level=info msg="shim disconnected" id=f52eb0c73a2a3c34c2285e246b98e9b2d28a8a4dfd559416e7803d9d339ccd8a namespace=k8s.io
May 9 00:18:04.117520 containerd[1903]: time="2025-05-09T00:18:04.117504966Z" level=warning msg="cleaning up after shim disconnected" id=f52eb0c73a2a3c34c2285e246b98e9b2d28a8a4dfd559416e7803d9d339ccd8a namespace=k8s.io
May 9 00:18:04.117520 containerd[1903]: time="2025-05-09T00:18:04.117518358Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:18:04.543682 kubelet[3167]: I0509 00:18:04.543640 3167 scope.go:117] "RemoveContainer" containerID="f52eb0c73a2a3c34c2285e246b98e9b2d28a8a4dfd559416e7803d9d339ccd8a"
May 9 00:18:04.552456 containerd[1903]: time="2025-05-09T00:18:04.552412643Z" level=info msg="CreateContainer within sandbox \"dde6e6930183e53b51bd0cf2447ec865c546d988af36e5028335ec1d35397b85\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
May 9 00:18:04.606322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2011045410.mount: Deactivated successfully.
May 9 00:18:04.623172 containerd[1903]: time="2025-05-09T00:18:04.623114086Z" level=info msg="CreateContainer within sandbox \"dde6e6930183e53b51bd0cf2447ec865c546d988af36e5028335ec1d35397b85\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"073336310eb976ed16f37fb869f22e3a2fa5cc7f7e743c881d252c0a21d4c5b9\""
May 9 00:18:04.623723 containerd[1903]: time="2025-05-09T00:18:04.623681470Z" level=info msg="StartContainer for \"073336310eb976ed16f37fb869f22e3a2fa5cc7f7e743c881d252c0a21d4c5b9\""
May 9 00:18:04.656050 systemd[1]: Started cri-containerd-073336310eb976ed16f37fb869f22e3a2fa5cc7f7e743c881d252c0a21d4c5b9.scope - libcontainer container 073336310eb976ed16f37fb869f22e3a2fa5cc7f7e743c881d252c0a21d4c5b9.
May 9 00:18:04.716697 containerd[1903]: time="2025-05-09T00:18:04.716645985Z" level=info msg="StartContainer for \"073336310eb976ed16f37fb869f22e3a2fa5cc7f7e743c881d252c0a21d4c5b9\" returns successfully"
May 9 00:18:09.089152 systemd[1]: cri-containerd-89133a316d56804da0ffd9b5f6a9f96906b9f04e63f1dc200fc741064ae0f583.scope: Deactivated successfully.
May 9 00:18:09.090910 systemd[1]: cri-containerd-89133a316d56804da0ffd9b5f6a9f96906b9f04e63f1dc200fc741064ae0f583.scope: Consumed 1.905s CPU time, 19.5M memory peak, 0B memory swap peak.
May 9 00:18:09.116330 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89133a316d56804da0ffd9b5f6a9f96906b9f04e63f1dc200fc741064ae0f583-rootfs.mount: Deactivated successfully.
May 9 00:18:09.140049 containerd[1903]: time="2025-05-09T00:18:09.139988229Z" level=info msg="shim disconnected" id=89133a316d56804da0ffd9b5f6a9f96906b9f04e63f1dc200fc741064ae0f583 namespace=k8s.io
May 9 00:18:09.140049 containerd[1903]: time="2025-05-09T00:18:09.140041503Z" level=warning msg="cleaning up after shim disconnected" id=89133a316d56804da0ffd9b5f6a9f96906b9f04e63f1dc200fc741064ae0f583 namespace=k8s.io
May 9 00:18:09.140049 containerd[1903]: time="2025-05-09T00:18:09.140049832Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:18:09.561817 kubelet[3167]: I0509 00:18:09.561224 3167 scope.go:117] "RemoveContainer" containerID="89133a316d56804da0ffd9b5f6a9f96906b9f04e63f1dc200fc741064ae0f583"
May 9 00:18:09.565996 containerd[1903]: time="2025-05-09T00:18:09.565046291Z" level=info msg="CreateContainer within sandbox \"234bd1edb338557eda32f99ba1bd68adc9a2218ef35ff29c1cb8fb14e34d3fb8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
May 9 00:18:09.591417 containerd[1903]: time="2025-05-09T00:18:09.591360986Z" level=info msg="CreateContainer within sandbox \"234bd1edb338557eda32f99ba1bd68adc9a2218ef35ff29c1cb8fb14e34d3fb8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"8fc1b57b04be09701b40e57b050a37c472279ec53d0f2580cf1b967237663dcd\""
May 9 00:18:09.591949 containerd[1903]: time="2025-05-09T00:18:09.591901375Z" level=info msg="StartContainer for \"8fc1b57b04be09701b40e57b050a37c472279ec53d0f2580cf1b967237663dcd\""
May 9 00:18:09.632026 systemd[1]: Started cri-containerd-8fc1b57b04be09701b40e57b050a37c472279ec53d0f2580cf1b967237663dcd.scope - libcontainer container 8fc1b57b04be09701b40e57b050a37c472279ec53d0f2580cf1b967237663dcd.
May 9 00:18:09.679864 containerd[1903]: time="2025-05-09T00:18:09.679815343Z" level=info msg="StartContainer for \"8fc1b57b04be09701b40e57b050a37c472279ec53d0f2580cf1b967237663dcd\" returns successfully"
May 9 00:18:11.847951 kubelet[3167]: E0509 00:18:11.847871 3167 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-98?timeout=10s\": context deadline exceeded"
May 9 00:18:21.848217 kubelet[3167]: E0509 00:18:21.848112 3167 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-98?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"