May 9 00:11:42.931512 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu May 8 22:21:52 -00 2025
May 9 00:11:42.931550 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8e6c4805303143bfaf51e786bee05d9a5466809f675df313b1f69aaa84c2d4ce
May 9 00:11:42.931567 kernel: BIOS-provided physical RAM map:
May 9 00:11:42.931577 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 9 00:11:42.931587 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
May 9 00:11:42.931597 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
May 9 00:11:42.931609 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
May 9 00:11:42.931620 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
May 9 00:11:42.931631 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
May 9 00:11:42.931644 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
May 9 00:11:42.931654 kernel: NX (Execute Disable) protection: active
May 9 00:11:42.931665 kernel: APIC: Static calls initialized
May 9 00:11:42.931676 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
May 9 00:11:42.931687 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
May 9 00:11:42.931701 kernel: extended physical RAM map:
May 9 00:11:42.931715 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
May 9 00:11:42.931726 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable
May 9 00:11:42.931739 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable
May 9 00:11:42.931750 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable
May 9 00:11:42.931762 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
May 9 00:11:42.931774 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
May 9 00:11:42.931786 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
May 9 00:11:42.931797 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
May 9 00:11:42.931808 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
May 9 00:11:42.931819 kernel: efi: EFI v2.7 by EDK II
May 9 00:11:42.931831 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518
May 9 00:11:42.931846 kernel: secureboot: Secure boot disabled
May 9 00:11:42.931858 kernel: SMBIOS 2.7 present.
May 9 00:11:42.931869 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
May 9 00:11:42.931880 kernel: Hypervisor detected: KVM
May 9 00:11:42.931894 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 9 00:11:42.931908 kernel: kvm-clock: using sched offset of 3601236962 cycles
May 9 00:11:42.931922 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 9 00:11:42.931937 kernel: tsc: Detected 2499.996 MHz processor
May 9 00:11:42.931951 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 9 00:11:42.931965 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 9 00:11:42.931981 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
May 9 00:11:42.931996 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 9 00:11:42.932010 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 9 00:11:42.932024 kernel: Using GB pages for direct mapping
May 9 00:11:42.932044 kernel: ACPI: Early table checksum verification disabled
May 9 00:11:42.932059 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
May 9 00:11:42.936416 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
May 9 00:11:42.936450 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
May 9 00:11:42.936465 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
May 9 00:11:42.936479 kernel: ACPI: FACS 0x00000000789D0000 000040
May 9 00:11:42.936493 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
May 9 00:11:42.936519 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
May 9 00:11:42.936533 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
May 9 00:11:42.936547 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
May 9 00:11:42.936564 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
May 9 00:11:42.936578 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
May 9 00:11:42.936593 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
May 9 00:11:42.936607 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
May 9 00:11:42.936621 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
May 9 00:11:42.936636 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
May 9 00:11:42.936650 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
May 9 00:11:42.936664 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
May 9 00:11:42.936679 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
May 9 00:11:42.936696 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
May 9 00:11:42.936710 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
May 9 00:11:42.936724 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
May 9 00:11:42.936738 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
May 9 00:11:42.936752 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
May 9 00:11:42.936767 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
May 9 00:11:42.936781 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
May 9 00:11:42.936795 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
May 9 00:11:42.936810 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
May 9 00:11:42.936827 kernel: NUMA: Initialized distance table, cnt=1
May 9 00:11:42.936842 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff]
May 9 00:11:42.936857 kernel: Zone ranges:
May 9 00:11:42.936872 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 9 00:11:42.936886 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
May 9 00:11:42.936900 kernel: Normal empty
May 9 00:11:42.936915 kernel: Movable zone start for each node
May 9 00:11:42.936929 kernel: Early memory node ranges
May 9 00:11:42.936944 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 9 00:11:42.936958 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
May 9 00:11:42.936975 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
May 9 00:11:42.936989 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
May 9 00:11:42.937004 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 9 00:11:42.937019 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 9 00:11:42.937033 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
May 9 00:11:42.937048 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
May 9 00:11:42.937062 kernel: ACPI: PM-Timer IO Port: 0xb008
May 9 00:11:42.937102 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 9 00:11:42.937117 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
May 9 00:11:42.937135 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 9 00:11:42.937150 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 9 00:11:42.937164 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 9 00:11:42.937178 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 9 00:11:42.937193 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 9 00:11:42.937207 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 9 00:11:42.937222 kernel: TSC deadline timer available
May 9 00:11:42.937236 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 9 00:11:42.937251 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 9 00:11:42.937269 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
May 9 00:11:42.937284 kernel: Booting paravirtualized kernel on KVM
May 9 00:11:42.937298 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 9 00:11:42.937314 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 9 00:11:42.937335 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
May 9 00:11:42.937354 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
May 9 00:11:42.937368 kernel: pcpu-alloc: [0] 0 1
May 9 00:11:42.937382 kernel: kvm-guest: PV spinlocks enabled
May 9 00:11:42.937397 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 9 00:11:42.937415 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8e6c4805303143bfaf51e786bee05d9a5466809f675df313b1f69aaa84c2d4ce
May 9 00:11:42.937430 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 9 00:11:42.937444 kernel: random: crng init done
May 9 00:11:42.937457 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 9 00:11:42.937471 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 9 00:11:42.937485 kernel: Fallback order for Node 0: 0
May 9 00:11:42.937501 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
May 9 00:11:42.937516 kernel: Policy zone: DMA32
May 9 00:11:42.937532 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 9 00:11:42.937546 kernel: Memory: 1874584K/2037804K available (12288K kernel code, 2295K rwdata, 22752K rodata, 43000K init, 2192K bss, 162964K reserved, 0K cma-reserved)
May 9 00:11:42.937559 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 9 00:11:42.937572 kernel: Kernel/User page tables isolation: enabled
May 9 00:11:42.937588 kernel: ftrace: allocating 37946 entries in 149 pages
May 9 00:11:42.937629 kernel: ftrace: allocated 149 pages with 4 groups
May 9 00:11:42.937647 kernel: Dynamic Preempt: voluntary
May 9 00:11:42.937664 kernel: rcu: Preemptible hierarchical RCU implementation.
May 9 00:11:42.937681 kernel: rcu: RCU event tracing is enabled.
May 9 00:11:42.937696 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 9 00:11:42.937712 kernel: Trampoline variant of Tasks RCU enabled.
May 9 00:11:42.937727 kernel: Rude variant of Tasks RCU enabled.
May 9 00:11:42.937745 kernel: Tracing variant of Tasks RCU enabled.
May 9 00:11:42.937761 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 9 00:11:42.937776 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 9 00:11:42.937791 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 9 00:11:42.937806 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 9 00:11:42.937826 kernel: Console: colour dummy device 80x25
May 9 00:11:42.937843 kernel: printk: console [tty0] enabled
May 9 00:11:42.937859 kernel: printk: console [ttyS0] enabled
May 9 00:11:42.937875 kernel: ACPI: Core revision 20230628
May 9 00:11:42.937893 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
May 9 00:11:42.937909 kernel: APIC: Switch to symmetric I/O mode setup
May 9 00:11:42.937924 kernel: x2apic enabled
May 9 00:11:42.937940 kernel: APIC: Switched APIC routing to: physical x2apic
May 9 00:11:42.937956 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
May 9 00:11:42.937978 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
May 9 00:11:42.937995 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
May 9 00:11:42.938010 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
May 9 00:11:42.938026 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 9 00:11:42.938042 kernel: Spectre V2 : Mitigation: Retpolines
May 9 00:11:42.938057 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 9 00:11:42.939133 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
May 9 00:11:42.939158 kernel: RETBleed: Vulnerable
May 9 00:11:42.939174 kernel: Speculative Store Bypass: Vulnerable
May 9 00:11:42.939190 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
May 9 00:11:42.939212 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
May 9 00:11:42.939227 kernel: GDS: Unknown: Dependent on hypervisor status
May 9 00:11:42.939242 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 9 00:11:42.939258 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 9 00:11:42.939273 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 9 00:11:42.939289 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
May 9 00:11:42.939305 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
May 9 00:11:42.939321 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
May 9 00:11:42.939337 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
May 9 00:11:42.939353 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
May 9 00:11:42.939368 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
May 9 00:11:42.939388 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 9 00:11:42.939404 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
May 9 00:11:42.939419 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
May 9 00:11:42.939435 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
May 9 00:11:42.939452 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
May 9 00:11:42.939467 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
May 9 00:11:42.939482 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
May 9 00:11:42.939495 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
May 9 00:11:42.939509 kernel: Freeing SMP alternatives memory: 32K
May 9 00:11:42.939521 kernel: pid_max: default: 32768 minimum: 301
May 9 00:11:42.939533 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 9 00:11:42.939551 kernel: landlock: Up and running.
May 9 00:11:42.939565 kernel: SELinux: Initializing.
May 9 00:11:42.939578 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 9 00:11:42.939592 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 9 00:11:42.939607 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
May 9 00:11:42.939621 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 9 00:11:42.939634 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 9 00:11:42.939649 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 9 00:11:42.939663 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
May 9 00:11:42.939677 kernel: signal: max sigframe size: 3632
May 9 00:11:42.939694 kernel: rcu: Hierarchical SRCU implementation.
May 9 00:11:42.939709 kernel: rcu: Max phase no-delay instances is 400.
May 9 00:11:42.939723 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 9 00:11:42.939737 kernel: smp: Bringing up secondary CPUs ...
May 9 00:11:42.939751 kernel: smpboot: x86: Booting SMP configuration:
May 9 00:11:42.939765 kernel: .... node #0, CPUs: #1
May 9 00:11:42.939779 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
May 9 00:11:42.939795 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
May 9 00:11:42.939812 kernel: smp: Brought up 1 node, 2 CPUs
May 9 00:11:42.939826 kernel: smpboot: Max logical packages: 1
May 9 00:11:42.939840 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
May 9 00:11:42.939853 kernel: devtmpfs: initialized
May 9 00:11:42.939866 kernel: x86/mm: Memory block size: 128MB
May 9 00:11:42.939880 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
May 9 00:11:42.939894 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 9 00:11:42.939908 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 9 00:11:42.939922 kernel: pinctrl core: initialized pinctrl subsystem
May 9 00:11:42.939940 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 9 00:11:42.939954 kernel: audit: initializing netlink subsys (disabled)
May 9 00:11:42.939967 kernel: audit: type=2000 audit(1746749503.072:1): state=initialized audit_enabled=0 res=1
May 9 00:11:42.939981 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 9 00:11:42.939994 kernel: thermal_sys: Registered thermal governor 'user_space'
May 9 00:11:42.940008 kernel: cpuidle: using governor menu
May 9 00:11:42.940021 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 9 00:11:42.940035 kernel: dca service started, version 1.12.1
May 9 00:11:42.940049 kernel: PCI: Using configuration type 1 for base access
May 9 00:11:42.940063 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 9 00:11:42.940091 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 9 00:11:42.940106 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 9 00:11:42.940119 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 9 00:11:42.940133 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 9 00:11:42.940147 kernel: ACPI: Added _OSI(Module Device)
May 9 00:11:42.940160 kernel: ACPI: Added _OSI(Processor Device)
May 9 00:11:42.940174 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 9 00:11:42.940188 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 9 00:11:42.940202 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
May 9 00:11:42.940218 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 9 00:11:42.940232 kernel: ACPI: Interpreter enabled
May 9 00:11:42.940245 kernel: ACPI: PM: (supports S0 S5)
May 9 00:11:42.940259 kernel: ACPI: Using IOAPIC for interrupt routing
May 9 00:11:42.940273 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 9 00:11:42.940286 kernel: PCI: Using E820 reservations for host bridge windows
May 9 00:11:42.940300 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
May 9 00:11:42.940314 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 9 00:11:42.940550 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
May 9 00:11:42.940693 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
May 9 00:11:42.940820 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
May 9 00:11:42.940837 kernel: acpiphp: Slot [3] registered
May 9 00:11:42.940851 kernel: acpiphp: Slot [4] registered
May 9 00:11:42.940865 kernel: acpiphp: Slot [5] registered
May 9 00:11:42.940879 kernel: acpiphp: Slot [6] registered
May 9 00:11:42.940892 kernel: acpiphp: Slot [7] registered
May 9 00:11:42.940909 kernel: acpiphp: Slot [8] registered
May 9 00:11:42.940923 kernel: acpiphp: Slot [9] registered
May 9 00:11:42.940937 kernel: acpiphp: Slot [10] registered
May 9 00:11:42.940951 kernel: acpiphp: Slot [11] registered
May 9 00:11:42.940965 kernel: acpiphp: Slot [12] registered
May 9 00:11:42.940979 kernel: acpiphp: Slot [13] registered
May 9 00:11:42.940992 kernel: acpiphp: Slot [14] registered
May 9 00:11:42.941006 kernel: acpiphp: Slot [15] registered
May 9 00:11:42.941020 kernel: acpiphp: Slot [16] registered
May 9 00:11:42.941037 kernel: acpiphp: Slot [17] registered
May 9 00:11:42.941052 kernel: acpiphp: Slot [18] registered
May 9 00:11:42.941065 kernel: acpiphp: Slot [19] registered
May 9 00:11:42.944127 kernel: acpiphp: Slot [20] registered
May 9 00:11:42.944145 kernel: acpiphp: Slot [21] registered
May 9 00:11:42.944161 kernel: acpiphp: Slot [22] registered
May 9 00:11:42.944176 kernel: acpiphp: Slot [23] registered
May 9 00:11:42.944192 kernel: acpiphp: Slot [24] registered
May 9 00:11:42.944208 kernel: acpiphp: Slot [25] registered
May 9 00:11:42.944223 kernel: acpiphp: Slot [26] registered
May 9 00:11:42.944244 kernel: acpiphp: Slot [27] registered
May 9 00:11:42.944259 kernel: acpiphp: Slot [28] registered
May 9 00:11:42.944275 kernel: acpiphp: Slot [29] registered
May 9 00:11:42.944291 kernel: acpiphp: Slot [30] registered
May 9 00:11:42.944307 kernel: acpiphp: Slot [31] registered
May 9 00:11:42.944322 kernel: PCI host bridge to bus 0000:00
May 9 00:11:42.944623 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 9 00:11:42.944771 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 9 00:11:42.944902 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 9 00:11:42.945023 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
May 9 00:11:42.945162 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
May 9 00:11:42.945282 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 9 00:11:42.945438 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
May 9 00:11:42.945585 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
May 9 00:11:42.945727 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
May 9 00:11:42.945867 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
May 9 00:11:42.946001 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
May 9 00:11:42.947233 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
May 9 00:11:42.947396 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
May 9 00:11:42.947534 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
May 9 00:11:42.947674 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
May 9 00:11:42.947814 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
May 9 00:11:42.947957 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
May 9 00:11:42.949139 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
May 9 00:11:42.949338 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
May 9 00:11:42.949502 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
May 9 00:11:42.949663 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 9 00:11:42.949836 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
May 9 00:11:42.949990 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
May 9 00:11:42.950212 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
May 9 00:11:42.950365 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
May 9 00:11:42.950387 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 9 00:11:42.950404 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 9 00:11:42.950420 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 9 00:11:42.950436 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 9 00:11:42.950452 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
May 9 00:11:42.950475 kernel: iommu: Default domain type: Translated
May 9 00:11:42.950491 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 9 00:11:42.950507 kernel: efivars: Registered efivars operations
May 9 00:11:42.950523 kernel: PCI: Using ACPI for IRQ routing
May 9 00:11:42.950539 kernel: PCI: pci_cache_line_size set to 64 bytes
May 9 00:11:42.950555 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff]
May 9 00:11:42.950571 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
May 9 00:11:42.950586 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
May 9 00:11:42.950720 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
May 9 00:11:42.950860 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
May 9 00:11:42.950994 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 9 00:11:42.951014 kernel: vgaarb: loaded
May 9 00:11:42.951031 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
May 9 00:11:42.951047 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
May 9 00:11:42.951063 kernel: clocksource: Switched to clocksource kvm-clock
May 9 00:11:42.951105 kernel: VFS: Disk quotas dquot_6.6.0
May 9 00:11:42.951122 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 9 00:11:42.951142 kernel: pnp: PnP ACPI init
May 9 00:11:42.951158 kernel: pnp: PnP ACPI: found 5 devices
May 9 00:11:42.951174 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 9 00:11:42.951190 kernel: NET: Registered PF_INET protocol family
May 9 00:11:42.951206 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 9 00:11:42.951223 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
May 9 00:11:42.951239 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 9 00:11:42.951255 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 9 00:11:42.951272 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
May 9 00:11:42.951291 kernel: TCP: Hash tables configured (established 16384 bind 16384)
May 9 00:11:42.951307 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 9 00:11:42.951323 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 9 00:11:42.951339 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 9 00:11:42.951355 kernel: NET: Registered PF_XDP protocol family
May 9 00:11:42.951486 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 9 00:11:42.951609 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 9 00:11:42.951729 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 9 00:11:42.951848 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
May 9 00:11:42.951974 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
May 9 00:11:42.952127 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
May 9 00:11:42.952148 kernel: PCI: CLS 0 bytes, default 64
May 9 00:11:42.952164 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
May 9 00:11:42.952180 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
May 9 00:11:42.952197 kernel: clocksource: Switched to clocksource tsc
May 9 00:11:42.952213 kernel: Initialise system trusted keyrings
May 9 00:11:42.952229 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
May 9 00:11:42.952250 kernel: Key type asymmetric registered
May 9 00:11:42.952265 kernel: Asymmetric key parser 'x509' registered
May 9 00:11:42.952281 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 9 00:11:42.952297 kernel: io scheduler mq-deadline registered
May 9 00:11:42.952313 kernel: io scheduler kyber registered
May 9 00:11:42.952329 kernel: io scheduler bfq registered
May 9 00:11:42.952345 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 9 00:11:42.952362 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 9 00:11:42.952378 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 9 00:11:42.952397 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 9 00:11:42.952413 kernel: i8042: Warning: Keylock active
May 9 00:11:42.952429 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 9 00:11:42.952445 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 9 00:11:42.952598 kernel: rtc_cmos 00:00: RTC can wake from S4
May 9 00:11:42.952728 kernel: rtc_cmos 00:00: registered as rtc0
May 9 00:11:42.952854 kernel: rtc_cmos 00:00: setting system clock to 2025-05-09T00:11:42 UTC (1746749502)
May 9 00:11:42.952979 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
May 9 00:11:42.953002 kernel: intel_pstate: CPU model not supported
May 9 00:11:42.953018 kernel: efifb: probing for efifb
May 9 00:11:42.953034 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
May 9 00:11:42.953051 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
May 9 00:11:42.953100 kernel: efifb: scrolling: redraw
May 9 00:11:42.953121 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 9 00:11:42.953138 kernel: Console: switching to colour frame buffer device 100x37
May 9 00:11:42.953156 kernel: fb0: EFI VGA frame buffer device
May 9 00:11:42.953173 kernel: pstore: Using crash dump compression: deflate
May 9 00:11:42.953192 kernel: pstore: Registered efi_pstore as persistent store backend
May 9 00:11:42.953210 kernel: NET: Registered PF_INET6 protocol family
May 9 00:11:42.953226 kernel: Segment Routing with IPv6
May 9 00:11:42.953243 kernel: In-situ OAM (IOAM) with IPv6
May 9 00:11:42.953260 kernel: NET: Registered PF_PACKET protocol family
May 9 00:11:42.953276 kernel: Key type dns_resolver registered
May 9 00:11:42.953293 kernel: IPI shorthand broadcast: enabled
May 9 00:11:42.953310 kernel: sched_clock: Marking stable (451002221, 130236703)->(667148567, -85909643)
May 9 00:11:42.953326 kernel: registered taskstats version 1
May 9 00:11:42.953347 kernel: Loading compiled-in X.509 certificates
May 9 00:11:42.953364 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: eadd5f695247828f81e51397e7264f8efd327b51'
May 9 00:11:42.953380 kernel: Key type .fscrypt registered
May 9 00:11:42.953397 kernel: Key type fscrypt-provisioning registered
May 9 00:11:42.953414 kernel: ima: No TPM chip found, activating TPM-bypass!
May 9 00:11:42.953431 kernel: ima: Allocated hash algorithm: sha1
May 9 00:11:42.953448 kernel: ima: No architecture policies found
May 9 00:11:42.953464 kernel: clk: Disabling unused clocks
May 9 00:11:42.953481 kernel: Freeing unused kernel image (initmem) memory: 43000K
May 9 00:11:42.953501 kernel: Write protecting the kernel read-only data: 36864k
May 9 00:11:42.953518 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
May 9 00:11:42.953535 kernel: Run /init as init process
May 9 00:11:42.953551 kernel: with arguments:
May 9 00:11:42.953568 kernel: /init
May 9 00:11:42.953584 kernel: with environment:
May 9 00:11:42.953601 kernel: HOME=/
May 9 00:11:42.953617 kernel: TERM=linux
May 9 00:11:42.953634 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 9 00:11:42.953658 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 9 00:11:42.953678 systemd[1]: Detected virtualization amazon.
May 9 00:11:42.953696 systemd[1]: Detected architecture x86-64.
May 9 00:11:42.953713 systemd[1]: Running in initrd.
May 9 00:11:42.953731 systemd[1]: No hostname configured, using default hostname.
May 9 00:11:42.953750 systemd[1]: Hostname set to .
May 9 00:11:42.953769 systemd[1]: Initializing machine ID from VM UUID.
May 9 00:11:42.953786 systemd[1]: Queued start job for default target initrd.target.
May 9 00:11:42.953804 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 00:11:42.953821 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 00:11:42.953840 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 9 00:11:42.953861 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 00:11:42.953882 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 9 00:11:42.953900 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 9 00:11:42.953920 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 9 00:11:42.953937 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 9 00:11:42.953955 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 00:11:42.953973 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 00:11:42.953994 systemd[1]: Reached target paths.target - Path Units.
May 9 00:11:42.954012 systemd[1]: Reached target slices.target - Slice Units.
May 9 00:11:42.954029 systemd[1]: Reached target swap.target - Swaps.
May 9 00:11:42.954048 systemd[1]: Reached target timers.target - Timer Units.
May 9 00:11:42.954065 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 9 00:11:42.954095 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 00:11:42.954113 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 9 00:11:42.954131 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 9 00:11:42.954149 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 00:11:42.954170 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 00:11:42.954188 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 00:11:42.954205 systemd[1]: Reached target sockets.target - Socket Units.
May 9 00:11:42.954223 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 9 00:11:42.954241 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 00:11:42.954259 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 9 00:11:42.954277 systemd[1]: Starting systemd-fsck-usr.service...
May 9 00:11:42.954292 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 00:11:42.954306 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 00:11:42.954324 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 00:11:42.954338 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 9 00:11:42.954395 systemd-journald[179]: Collecting audit messages is disabled.
May 9 00:11:42.954437 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 00:11:42.954456 systemd[1]: Finished systemd-fsck-usr.service.
May 9 00:11:42.954476 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 9 00:11:42.954496 systemd-journald[179]: Journal started May 9 00:11:42.954534 systemd-journald[179]: Runtime Journal (/run/log/journal/ec233103ced4a893b3f10e08b922e522) is 4.7M, max 38.2M, 33.4M free. May 9 00:11:42.962712 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:11:42.960888 systemd-modules-load[180]: Inserted module 'overlay' May 9 00:11:42.968726 systemd[1]: Started systemd-journald.service - Journal Service. May 9 00:11:42.972728 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 9 00:11:42.982314 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 9 00:11:42.987002 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 9 00:11:43.007313 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 9 00:11:43.007280 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 9 00:11:43.019896 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 9 00:11:43.019933 kernel: Bridge firewalling registered May 9 00:11:43.013332 systemd-modules-load[180]: Inserted module 'br_netfilter' May 9 00:11:43.018145 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 9 00:11:43.032344 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 9 00:11:43.033608 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 00:11:43.036140 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 9 00:11:43.038675 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 9 00:11:43.047344 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
May 9 00:11:43.048386 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 9 00:11:43.052275 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 9 00:11:43.064904 dracut-cmdline[212]: dracut-dracut-053 May 9 00:11:43.069183 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8e6c4805303143bfaf51e786bee05d9a5466809f675df313b1f69aaa84c2d4ce May 9 00:11:43.104427 systemd-resolved[215]: Positive Trust Anchors: May 9 00:11:43.104448 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 9 00:11:43.104598 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 9 00:11:43.112958 systemd-resolved[215]: Defaulting to hostname 'linux'. May 9 00:11:43.116843 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 9 00:11:43.117563 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 9 00:11:43.158101 kernel: SCSI subsystem initialized May 9 00:11:43.168096 kernel: Loading iSCSI transport class v2.0-870. 
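The dracut-cmdline lines above echo the full kernel command line. Parsing such a line is essentially whitespace-splitting `key=value` tokens, where bare tokens are flags and the same key may repeat (`console=`, `rootflags=`). A minimal sketch, fed an excerpt of the parameters from this log:

```python
def parse_cmdline(cmdline: str) -> dict:
    """Parse a kernel command line into {key: [values]}; bare flags map to []."""
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        bucket = params.setdefault(key, [])
        if sep:  # only key=value tokens contribute a value
            bucket.append(value)
    return params

# Excerpt of the command line echoed by dracut-cmdline above
cmdline = ("root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 "
           "flatcar.first_boot=detected flatcar.oem.id=ec2 net.ifnames=0 "
           "nvme_core.io_timeout=4294967295")
params = parse_cmdline(cmdline)
```

Note that `partition` (rather than `split`) keeps values like `LABEL=ROOT` intact, since only the first `=` separates key from value.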
May 9 00:11:43.180104 kernel: iscsi: registered transport (tcp) May 9 00:11:43.201378 kernel: iscsi: registered transport (qla4xxx) May 9 00:11:43.201459 kernel: QLogic iSCSI HBA Driver May 9 00:11:43.240495 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 9 00:11:43.246273 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 9 00:11:43.272177 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 9 00:11:43.272258 kernel: device-mapper: uevent: version 1.0.3 May 9 00:11:43.273249 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 9 00:11:43.316104 kernel: raid6: avx512x4 gen() 18090 MB/s May 9 00:11:43.334094 kernel: raid6: avx512x2 gen() 17324 MB/s May 9 00:11:43.351098 kernel: raid6: avx512x1 gen() 17897 MB/s May 9 00:11:43.368098 kernel: raid6: avx2x4 gen() 17797 MB/s May 9 00:11:43.385098 kernel: raid6: avx2x2 gen() 17755 MB/s May 9 00:11:43.402322 kernel: raid6: avx2x1 gen() 13564 MB/s May 9 00:11:43.402368 kernel: raid6: using algorithm avx512x4 gen() 18090 MB/s May 9 00:11:43.422120 kernel: raid6: .... xor() 7482 MB/s, rmw enabled May 9 00:11:43.422184 kernel: raid6: using avx512x2 recovery algorithm May 9 00:11:43.443102 kernel: xor: automatically using best checksumming function avx May 9 00:11:43.608102 kernel: Btrfs loaded, zoned=no, fsverity=no May 9 00:11:43.619246 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 9 00:11:43.625346 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 00:11:43.640685 systemd-udevd[398]: Using default interface naming scheme 'v255'. May 9 00:11:43.645826 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 9 00:11:43.655338 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
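The raid6 lines above show the kernel benchmarking each available gen() implementation and then keeping the fastest one ("using algorithm avx512x4 gen() 18090 MB/s"). The selection amounts to taking the maximum measured throughput; a small sketch using the numbers from this boot (the recovery algorithm is chosen by a separate benchmark, not modelled here):

```python
# gen() throughputs (MB/s) reported by the raid6 benchmark in this boot
gen_results = {
    "avx512x4": 18090,
    "avx512x2": 17324,
    "avx512x1": 17897,
    "avx2x4":   17797,
    "avx2x2":   17755,
    "avx2x1":   13564,
}

# The kernel keeps the highest-throughput generator
best = max(gen_results, key=gen_results.get)
```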
May 9 00:11:43.673787 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation May 9 00:11:43.705388 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 9 00:11:43.715338 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 9 00:11:43.767596 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 9 00:11:43.774313 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 9 00:11:43.802575 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 9 00:11:43.805294 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 9 00:11:43.807206 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 00:11:43.807745 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 9 00:11:43.815677 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 9 00:11:43.841951 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 9 00:11:43.875154 kernel: cryptd: max_cpu_qlen set to 1000 May 9 00:11:43.890428 kernel: ena 0000:00:05.0: ENA device version: 0.10 May 9 00:11:43.890723 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 May 9 00:11:43.897752 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 9 00:11:43.898014 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 00:11:43.900632 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 9 00:11:43.906373 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. May 9 00:11:43.906687 kernel: AVX2 version of gcm_enc/dec engaged. May 9 00:11:43.902390 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 9 00:11:43.902714 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
May 9 00:11:43.920985 kernel: AES CTR mode by8 optimization enabled May 9 00:11:43.921021 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:c7:4a:12:2c:81 May 9 00:11:43.905412 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 9 00:11:43.913985 (udev-worker)[457]: Network interface NamePolicy= disabled on kernel command line. May 9 00:11:43.918991 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 9 00:11:43.932719 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 9 00:11:43.932850 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:11:43.937204 kernel: nvme nvme0: pci function 0000:00:04.0 May 9 00:11:43.937445 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 May 9 00:11:43.943606 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 9 00:11:43.960111 kernel: nvme nvme0: 2/0/0 default/read/poll queues May 9 00:11:43.968112 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 9 00:11:43.968186 kernel: GPT:9289727 != 16777215 May 9 00:11:43.968215 kernel: GPT:Alternate GPT header not at the end of the disk. May 9 00:11:43.968234 kernel: GPT:9289727 != 16777215 May 9 00:11:43.968252 kernel: GPT: Use GNU Parted to correct GPT errors. May 9 00:11:43.968271 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 9 00:11:43.978813 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:11:43.986384 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 9 00:11:44.007819 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
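The GPT warnings above ("GPT:9289727 != 16777215") are the usual symptom of a disk image deployed onto a larger volume: the primary header still records the backup header at the LBA where the original image ended, while the volume's real last LBA is 16777215. With 512-byte sectors the arithmetic works out as follows (a sketch; the original image size is inferred from the recorded LBA, not stated in the log):

```python
SECTOR = 512

# Values printed by the kernel GPT check above
recorded_alt_lba = 9289727   # where the primary header says the backup header is
actual_last_lba = 16777215   # real last LBA of the volume

# LBAs are zero-based, so total sectors = last LBA + 1
volume_bytes = (actual_last_lba + 1) * SECTOR

# The backup header normally sits on the last LBA, so the image was
# apparently built for a disk of this size:
image_bytes = (recorded_alt_lba + 1) * SECTOR

volume_gib = volume_bytes / 2**30  # 8.0 GiB EBS volume
```

The later disk-uuid.service run in this log ("Secondary Header is updated.") rewrites the backup structures at the true end of the disk, which is why the warning does not recur after the partition rescan.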
May 9 00:11:44.061110 kernel: BTRFS: device fsid cea98156-267a-4592-a459-5921031522cf devid 1 transid 40 /dev/nvme0n1p3 scanned by (udev-worker) (449) May 9 00:11:44.072170 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by (udev-worker) (460) May 9 00:11:44.120366 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. May 9 00:11:44.140006 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. May 9 00:11:44.151306 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. May 9 00:11:44.151872 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. May 9 00:11:44.164512 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. May 9 00:11:44.169268 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 9 00:11:44.183978 disk-uuid[629]: Primary Header is updated. May 9 00:11:44.183978 disk-uuid[629]: Secondary Entries is updated. May 9 00:11:44.183978 disk-uuid[629]: Secondary Header is updated. May 9 00:11:44.190152 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 9 00:11:44.204107 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 9 00:11:45.202346 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 9 00:11:45.202419 disk-uuid[630]: The operation has completed successfully. May 9 00:11:45.334984 systemd[1]: disk-uuid.service: Deactivated successfully. May 9 00:11:45.335126 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 9 00:11:45.357310 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 9 00:11:45.361944 sh[888]: Success May 9 00:11:45.384090 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 9 00:11:45.483639 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
May 9 00:11:45.492661 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 9 00:11:45.494849 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 9 00:11:45.533147 kernel: BTRFS info (device dm-0): first mount of filesystem cea98156-267a-4592-a459-5921031522cf May 9 00:11:45.533213 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 9 00:11:45.533228 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 9 00:11:45.535042 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 9 00:11:45.537357 kernel: BTRFS info (device dm-0): using free space tree May 9 00:11:45.564116 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 9 00:11:45.578546 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 9 00:11:45.579622 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 9 00:11:45.585268 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 9 00:11:45.588056 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 9 00:11:45.617094 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 06eb5ada-09bb-4b72-a741-1d4e677346cf May 9 00:11:45.617172 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm May 9 00:11:45.617195 kernel: BTRFS info (device nvme0n1p6): using free space tree May 9 00:11:45.625116 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 9 00:11:45.637585 systemd[1]: mnt-oem.mount: Deactivated successfully. May 9 00:11:45.639695 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 06eb5ada-09bb-4b72-a741-1d4e677346cf May 9 00:11:45.646234 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
May 9 00:11:45.654360 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 9 00:11:45.698493 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 9 00:11:45.706273 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 9 00:11:45.741923 systemd-networkd[1080]: lo: Link UP May 9 00:11:45.741937 systemd-networkd[1080]: lo: Gained carrier May 9 00:11:45.743673 systemd-networkd[1080]: Enumeration completed May 9 00:11:45.744453 systemd[1]: Started systemd-networkd.service - Network Configuration. May 9 00:11:45.745315 systemd[1]: Reached target network.target - Network. May 9 00:11:45.746012 systemd-networkd[1080]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 00:11:45.746017 systemd-networkd[1080]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 9 00:11:45.754775 systemd-networkd[1080]: eth0: Link UP May 9 00:11:45.754781 systemd-networkd[1080]: eth0: Gained carrier May 9 00:11:45.754799 systemd-networkd[1080]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 00:11:45.769181 systemd-networkd[1080]: eth0: DHCPv4 address 172.31.17.17/20, gateway 172.31.16.1 acquired from 172.31.16.1 May 9 00:11:45.962958 ignition[1011]: Ignition 2.20.0 May 9 00:11:45.962972 ignition[1011]: Stage: fetch-offline May 9 00:11:45.963231 ignition[1011]: no configs at "/usr/lib/ignition/base.d" May 9 00:11:45.963244 ignition[1011]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 9 00:11:45.965223 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 9 00:11:45.963770 ignition[1011]: Ignition finished successfully May 9 00:11:45.972254 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
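systemd-networkd's DHCPv4 line above ("172.31.17.17/20, gateway 172.31.16.1 acquired from 172.31.16.1") pins down the subnet: a /20 prefix places both the leased address and the gateway in 172.31.16.0/20. A quick check with the standard-library ipaddress module:

```python
import ipaddress

# Lease reported by systemd-networkd above
iface = ipaddress.ip_interface("172.31.17.17/20")
gateway = ipaddress.ip_address("172.31.16.1")

subnet = iface.network  # the containing network, 172.31.16.0/20
```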
May 9 00:11:45.986713 ignition[1091]: Ignition 2.20.0 May 9 00:11:45.986727 ignition[1091]: Stage: fetch May 9 00:11:45.987194 ignition[1091]: no configs at "/usr/lib/ignition/base.d" May 9 00:11:45.987209 ignition[1091]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 9 00:11:45.987328 ignition[1091]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 9 00:11:45.997474 ignition[1091]: PUT result: OK May 9 00:11:46.000307 ignition[1091]: parsed url from cmdline: "" May 9 00:11:46.000316 ignition[1091]: no config URL provided May 9 00:11:46.000324 ignition[1091]: reading system config file "/usr/lib/ignition/user.ign" May 9 00:11:46.000335 ignition[1091]: no config at "/usr/lib/ignition/user.ign" May 9 00:11:46.000354 ignition[1091]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 9 00:11:46.001916 ignition[1091]: PUT result: OK May 9 00:11:46.001994 ignition[1091]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 May 9 00:11:46.002869 ignition[1091]: GET result: OK May 9 00:11:46.002993 ignition[1091]: parsing config with SHA512: 3a9d8671e80b1bf24c5f4929cc2884e5444d1e127945c053fc16d4abd78faaa413c97e795cccfad818cd3cb2b5448c0f230e6011f94a72a3cd0aca95dbe0db52 May 9 00:11:46.008520 unknown[1091]: fetched base config from "system" May 9 00:11:46.008531 unknown[1091]: fetched base config from "system" May 9 00:11:46.009508 ignition[1091]: fetch: fetch complete May 9 00:11:46.008538 unknown[1091]: fetched user config from "aws" May 9 00:11:46.009521 ignition[1091]: fetch: fetch passed May 9 00:11:46.009598 ignition[1091]: Ignition finished successfully May 9 00:11:46.011816 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 9 00:11:46.016332 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
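Ignition's fetch stage above follows the IMDSv2 token dance: a PUT to the token endpoint first, then a GET for user-data presenting the returned token. A sketch of that request sequence as plain data (no network I/O; the header names and the token path are the real IMDSv2 ones, the user-data path is the one in the log, and the TTL value is an assumption):

```python
IMDS = "http://169.254.169.254"

def imdsv2_requests(ttl_seconds=21600):
    """Return, as data, the ordered HTTP requests of an IMDSv2 user-data fetch."""
    token_placeholder = "<token from step 1>"  # hypothetical stand-in
    return [
        {   # Step 1: obtain a session token (logged as "PUT ... attempt #1")
            "method": "PUT",
            "url": f"{IMDS}/latest/api/token",
            "headers": {"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
        },
        {   # Step 2: fetch user-data, presenting the token
            "method": "GET",
            "url": f"{IMDS}/2019-10-01/user-data",
            "headers": {"X-aws-ec2-metadata-token": token_placeholder},
        },
    ]

steps = imdsv2_requests()
```

Each subsequent Ignition stage in this log (kargs, disks, mount, files) repeats step 1, which is why every stage begins with its own "PUT http://169.254.169.254/latest/api/token" line.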
May 9 00:11:46.033329 ignition[1097]: Ignition 2.20.0 May 9 00:11:46.033343 ignition[1097]: Stage: kargs May 9 00:11:46.033782 ignition[1097]: no configs at "/usr/lib/ignition/base.d" May 9 00:11:46.033796 ignition[1097]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 9 00:11:46.033930 ignition[1097]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 9 00:11:46.034995 ignition[1097]: PUT result: OK May 9 00:11:46.037880 ignition[1097]: kargs: kargs passed May 9 00:11:46.037941 ignition[1097]: Ignition finished successfully May 9 00:11:46.039148 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 9 00:11:46.045310 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 9 00:11:46.059568 ignition[1103]: Ignition 2.20.0 May 9 00:11:46.059582 ignition[1103]: Stage: disks May 9 00:11:46.060037 ignition[1103]: no configs at "/usr/lib/ignition/base.d" May 9 00:11:46.060053 ignition[1103]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 9 00:11:46.060208 ignition[1103]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 9 00:11:46.061381 ignition[1103]: PUT result: OK May 9 00:11:46.064282 ignition[1103]: disks: disks passed May 9 00:11:46.064347 ignition[1103]: Ignition finished successfully May 9 00:11:46.065394 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 9 00:11:46.066331 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 9 00:11:46.067102 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 9 00:11:46.067500 systemd[1]: Reached target local-fs.target - Local File Systems. May 9 00:11:46.068059 systemd[1]: Reached target sysinit.target - System Initialization. May 9 00:11:46.068843 systemd[1]: Reached target basic.target - Basic System. May 9 00:11:46.079383 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
May 9 00:11:46.124841 systemd-fsck[1111]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 9 00:11:46.129641 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 9 00:11:46.135233 systemd[1]: Mounting sysroot.mount - /sysroot... May 9 00:11:46.254229 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 61492938-2ced-4ec2-b593-fc96fa0fefcc r/w with ordered data mode. Quota mode: none. May 9 00:11:46.257700 systemd[1]: Mounted sysroot.mount - /sysroot. May 9 00:11:46.258936 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 9 00:11:46.277208 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 9 00:11:46.280237 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 9 00:11:46.281449 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 9 00:11:46.282207 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 9 00:11:46.282236 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 9 00:11:46.292736 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 9 00:11:46.299156 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1130) May 9 00:11:46.299418 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 9 00:11:46.305206 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 06eb5ada-09bb-4b72-a741-1d4e677346cf May 9 00:11:46.305268 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm May 9 00:11:46.305291 kernel: BTRFS info (device nvme0n1p6): using free space tree May 9 00:11:46.321118 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 9 00:11:46.323080 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
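The systemd-fsck summary above ("ROOT: clean, 14/553520 files, 52654/553472 blocks") gives e2fsck's used/total counts for inodes and blocks on the ROOT filesystem. Expressed as utilisation:

```python
# Counts from the systemd-fsck summary above (used, total)
files_used, files_total = 14, 553520
blocks_used, blocks_total = 52654, 553472

block_pct = 100 * blocks_used / blocks_total  # ~9.5% of blocks in use
inode_pct = 100 * files_used / files_total    # essentially empty inode table
```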
May 9 00:11:46.587039 initrd-setup-root[1154]: cut: /sysroot/etc/passwd: No such file or directory May 9 00:11:46.592342 initrd-setup-root[1161]: cut: /sysroot/etc/group: No such file or directory May 9 00:11:46.597348 initrd-setup-root[1168]: cut: /sysroot/etc/shadow: No such file or directory May 9 00:11:46.602834 initrd-setup-root[1175]: cut: /sysroot/etc/gshadow: No such file or directory May 9 00:11:46.816047 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 9 00:11:46.828269 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 9 00:11:46.833403 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 9 00:11:46.846264 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 9 00:11:46.848137 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 06eb5ada-09bb-4b72-a741-1d4e677346cf May 9 00:11:46.882838 ignition[1247]: INFO : Ignition 2.20.0 May 9 00:11:46.882838 ignition[1247]: INFO : Stage: mount May 9 00:11:46.884832 ignition[1247]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 00:11:46.884832 ignition[1247]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 9 00:11:46.884832 ignition[1247]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 9 00:11:46.889381 ignition[1247]: INFO : PUT result: OK May 9 00:11:46.889381 ignition[1247]: INFO : mount: mount passed May 9 00:11:46.889381 ignition[1247]: INFO : Ignition finished successfully May 9 00:11:46.892656 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 9 00:11:46.901235 systemd[1]: Starting ignition-files.service - Ignition (files)... May 9 00:11:46.905821 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 9 00:11:46.921404 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 9 00:11:46.941330 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/nvme0n1p6 scanned by mount (1260) May 9 00:11:46.941387 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 06eb5ada-09bb-4b72-a741-1d4e677346cf May 9 00:11:46.943099 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm May 9 00:11:46.945823 kernel: BTRFS info (device nvme0n1p6): using free space tree May 9 00:11:46.951103 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 9 00:11:46.954092 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 9 00:11:46.980477 ignition[1276]: INFO : Ignition 2.20.0 May 9 00:11:46.980477 ignition[1276]: INFO : Stage: files May 9 00:11:46.981926 ignition[1276]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 00:11:46.981926 ignition[1276]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 9 00:11:46.981926 ignition[1276]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 9 00:11:46.982989 ignition[1276]: INFO : PUT result: OK May 9 00:11:46.985714 ignition[1276]: DEBUG : files: compiled without relabeling support, skipping May 9 00:11:46.987700 ignition[1276]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 9 00:11:46.987700 ignition[1276]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 9 00:11:47.010089 ignition[1276]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 9 00:11:47.010857 ignition[1276]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 9 00:11:47.010857 ignition[1276]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 9 00:11:47.010584 unknown[1276]: wrote ssh authorized keys file for user: core May 9 00:11:47.012755 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 9 
00:11:47.013382 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 9 00:11:47.114650 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 9 00:11:47.260202 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 9 00:11:47.261455 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 9 00:11:47.261455 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 9 00:11:47.381255 systemd-networkd[1080]: eth0: Gained IPv6LL
May 9 00:11:47.762212 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 9 00:11:48.051437 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 9 00:11:48.051437 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 9 00:11:48.053596 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 9 00:11:48.053596 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 9 00:11:48.053596 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 9 00:11:48.053596 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 00:11:48.053596 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 00:11:48.053596 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 00:11:48.053596 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 00:11:48.053596 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 9 00:11:48.053596 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 9 00:11:48.053596 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 9 00:11:48.053596 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 9 00:11:48.053596 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 9 00:11:48.053596 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
May 9 00:11:48.333944 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 9 00:11:49.095792 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 9 00:11:49.095792 ignition[1276]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 9 00:11:49.098955 ignition[1276]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 00:11:49.100267 ignition[1276]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 00:11:49.100267 ignition[1276]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 9 00:11:49.100267 ignition[1276]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
May 9 00:11:49.100267 ignition[1276]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
May 9 00:11:49.100267 ignition[1276]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
May 9 00:11:49.100267 ignition[1276]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 9 00:11:49.100267 ignition[1276]: INFO : files: files passed
May 9 00:11:49.100267 ignition[1276]: INFO : Ignition finished successfully
May 9 00:11:49.101702 systemd[1]: Finished ignition-files.service - Ignition (files).
May 9 00:11:49.111348 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 9 00:11:49.113225 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 9 00:11:49.119318 systemd[1]: ignition-quench.service: Deactivated successfully.
May 9 00:11:49.125256 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 9 00:11:49.135584 initrd-setup-root-after-ignition[1306]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 00:11:49.135584 initrd-setup-root-after-ignition[1306]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 9 00:11:49.139843 initrd-setup-root-after-ignition[1310]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 00:11:49.140244 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 9 00:11:49.142632 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 9 00:11:49.150349 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 9 00:11:49.186630 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 9 00:11:49.186799 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 9 00:11:49.188092 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 9 00:11:49.189393 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 9 00:11:49.190298 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 9 00:11:49.196299 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 9 00:11:49.209937 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 9 00:11:49.218365 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 9 00:11:49.230994 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 9 00:11:49.231701 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 00:11:49.232887 systemd[1]: Stopped target timers.target - Timer Units.
May 9 00:11:49.233819 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 9 00:11:49.234007 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 9 00:11:49.235221 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 9 00:11:49.236097 systemd[1]: Stopped target basic.target - Basic System.
May 9 00:11:49.237104 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 9 00:11:49.238702 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 9 00:11:49.239181 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 9 00:11:49.240128 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 9 00:11:49.241064 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 9 00:11:49.241852 systemd[1]: Stopped target sysinit.target - System Initialization.
May 9 00:11:49.243125 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 9 00:11:49.243869 systemd[1]: Stopped target swap.target - Swaps.
May 9 00:11:49.244760 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 9 00:11:49.244947 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 9 00:11:49.245983 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 9 00:11:49.246815 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 00:11:49.247503 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 9 00:11:49.248282 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 00:11:49.248864 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 9 00:11:49.249086 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 9 00:11:49.250496 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 9 00:11:49.250690 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 9 00:11:49.251436 systemd[1]: ignition-files.service: Deactivated successfully.
May 9 00:11:49.251592 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 9 00:11:49.261125 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 9 00:11:49.264460 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 9 00:11:49.268177 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 9 00:11:49.268437 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 00:11:49.275343 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 9 00:11:49.275537 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 9 00:11:49.282973 ignition[1330]: INFO : Ignition 2.20.0
May 9 00:11:49.282973 ignition[1330]: INFO : Stage: umount
May 9 00:11:49.286940 ignition[1330]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 00:11:49.286940 ignition[1330]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 9 00:11:49.286940 ignition[1330]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 9 00:11:49.284171 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 9 00:11:49.290189 ignition[1330]: INFO : PUT result: OK
May 9 00:11:49.284301 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 9 00:11:49.293092 ignition[1330]: INFO : umount: umount passed
May 9 00:11:49.293092 ignition[1330]: INFO : Ignition finished successfully
May 9 00:11:49.294083 systemd[1]: ignition-mount.service: Deactivated successfully.
May 9 00:11:49.294277 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 9 00:11:49.295722 systemd[1]: ignition-disks.service: Deactivated successfully.
May 9 00:11:49.295847 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 9 00:11:49.296471 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 9 00:11:49.296549 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 9 00:11:49.297065 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 9 00:11:49.297353 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 9 00:11:49.298211 systemd[1]: Stopped target network.target - Network.
May 9 00:11:49.302159 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 9 00:11:49.302241 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 9 00:11:49.302927 systemd[1]: Stopped target paths.target - Path Units.
May 9 00:11:49.303394 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 9 00:11:49.305140 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 00:11:49.307760 systemd[1]: Stopped target slices.target - Slice Units.
May 9 00:11:49.308216 systemd[1]: Stopped target sockets.target - Socket Units.
May 9 00:11:49.308693 systemd[1]: iscsid.socket: Deactivated successfully.
May 9 00:11:49.308755 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 9 00:11:49.311214 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 9 00:11:49.311271 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 00:11:49.311690 systemd[1]: ignition-setup.service: Deactivated successfully.
May 9 00:11:49.311763 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 9 00:11:49.312325 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 9 00:11:49.312389 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 9 00:11:49.313288 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 9 00:11:49.314330 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 9 00:11:49.316391 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 9 00:11:49.317130 systemd-networkd[1080]: eth0: DHCPv6 lease lost
May 9 00:11:49.320013 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 9 00:11:49.320180 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 9 00:11:49.321390 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 9 00:11:49.321446 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 9 00:11:49.325217 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 9 00:11:49.325787 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 9 00:11:49.325864 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 9 00:11:49.326593 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 00:11:49.332552 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 9 00:11:49.332694 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 9 00:11:49.339038 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 9 00:11:49.339173 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 9 00:11:49.340124 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 9 00:11:49.340195 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 9 00:11:49.341038 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 9 00:11:49.341184 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 00:11:49.342337 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 9 00:11:49.342531 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 00:11:49.349694 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 9 00:11:49.349806 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 9 00:11:49.351139 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 9 00:11:49.351192 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 00:11:49.352399 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 9 00:11:49.352467 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 9 00:11:49.353821 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 9 00:11:49.353888 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 9 00:11:49.356470 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 9 00:11:49.356696 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 00:11:49.364387 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 9 00:11:49.365896 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 9 00:11:49.365987 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 00:11:49.366969 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 9 00:11:49.367033 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 00:11:49.368226 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 9 00:11:49.368283 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 00:11:49.368891 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 00:11:49.368947 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:11:49.370022 systemd[1]: network-cleanup.service: Deactivated successfully.
May 9 00:11:49.373016 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 9 00:11:49.373910 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 9 00:11:49.374025 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 9 00:11:49.447056 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 9 00:11:49.447216 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 9 00:11:49.448923 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 9 00:11:49.449555 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 9 00:11:49.449654 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 9 00:11:49.457522 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 9 00:11:49.466503 systemd[1]: Switching root.
May 9 00:11:49.494973 systemd-journald[179]: Journal stopped
May 9 00:11:50.711631 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
May 9 00:11:50.711707 kernel: SELinux: policy capability network_peer_controls=1
May 9 00:11:50.711729 kernel: SELinux: policy capability open_perms=1
May 9 00:11:50.711741 kernel: SELinux: policy capability extended_socket_class=1
May 9 00:11:50.711753 kernel: SELinux: policy capability always_check_network=0
May 9 00:11:50.711764 kernel: SELinux: policy capability cgroup_seclabel=1
May 9 00:11:50.711780 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 9 00:11:50.711792 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 9 00:11:50.711803 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 9 00:11:50.711820 kernel: audit: type=1403 audit(1746749509.754:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 9 00:11:50.711832 systemd[1]: Successfully loaded SELinux policy in 40.841ms.
May 9 00:11:50.711850 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.202ms.
May 9 00:11:50.711864 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 9 00:11:50.711877 systemd[1]: Detected virtualization amazon.
May 9 00:11:50.711889 systemd[1]: Detected architecture x86-64.
May 9 00:11:50.711904 systemd[1]: Detected first boot.
May 9 00:11:50.711917 systemd[1]: Initializing machine ID from VM UUID.
May 9 00:11:50.711930 zram_generator::config[1373]: No configuration found.
May 9 00:11:50.711947 systemd[1]: Populated /etc with preset unit settings.
May 9 00:11:50.711960 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 9 00:11:50.711973 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 9 00:11:50.711986 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 9 00:11:50.717142 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 9 00:11:50.717171 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 9 00:11:50.717184 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 9 00:11:50.717197 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 9 00:11:50.717210 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 9 00:11:50.717223 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 9 00:11:50.717235 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 9 00:11:50.717247 systemd[1]: Created slice user.slice - User and Session Slice.
May 9 00:11:50.717260 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 00:11:50.717273 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 00:11:50.717290 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 9 00:11:50.717302 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 9 00:11:50.717315 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 9 00:11:50.717327 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 00:11:50.717340 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 9 00:11:50.717352 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 00:11:50.717365 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 9 00:11:50.717378 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 9 00:11:50.717391 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 9 00:11:50.717407 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 9 00:11:50.717420 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 00:11:50.717433 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 9 00:11:50.717445 systemd[1]: Reached target slices.target - Slice Units.
May 9 00:11:50.717458 systemd[1]: Reached target swap.target - Swaps.
May 9 00:11:50.717471 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 9 00:11:50.717483 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 9 00:11:50.717499 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 00:11:50.717511 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 00:11:50.717523 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 00:11:50.717536 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 9 00:11:50.717549 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 9 00:11:50.717561 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 9 00:11:50.717573 systemd[1]: Mounting media.mount - External Media Directory...
May 9 00:11:50.717586 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:11:50.717598 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 9 00:11:50.717612 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 9 00:11:50.717626 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 9 00:11:50.717639 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 9 00:11:50.717651 systemd[1]: Reached target machines.target - Containers.
May 9 00:11:50.717668 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 9 00:11:50.717681 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 00:11:50.717694 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 00:11:50.717706 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 9 00:11:50.717719 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 00:11:50.717733 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 9 00:11:50.717745 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 00:11:50.717758 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 9 00:11:50.717770 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 00:11:50.717784 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 9 00:11:50.717796 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 9 00:11:50.717808 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 9 00:11:50.717820 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 9 00:11:50.717840 systemd[1]: Stopped systemd-fsck-usr.service.
May 9 00:11:50.717853 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 00:11:50.717865 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 00:11:50.717877 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 9 00:11:50.717890 kernel: fuse: init (API version 7.39)
May 9 00:11:50.717903 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 9 00:11:50.717915 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 9 00:11:50.717928 systemd[1]: verity-setup.service: Deactivated successfully.
May 9 00:11:50.717940 systemd[1]: Stopped verity-setup.service.
May 9 00:11:50.717956 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:11:50.717969 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 9 00:11:50.717981 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 9 00:11:50.717994 systemd[1]: Mounted media.mount - External Media Directory.
May 9 00:11:50.718007 kernel: ACPI: bus type drm_connector registered
May 9 00:11:50.718021 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 9 00:11:50.718033 kernel: loop: module loaded
May 9 00:11:50.718095 systemd-journald[1458]: Collecting audit messages is disabled.
May 9 00:11:50.718121 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 9 00:11:50.718133 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 9 00:11:50.718145 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 00:11:50.718158 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 9 00:11:50.718173 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 9 00:11:50.718186 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 00:11:50.718199 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 00:11:50.718211 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 9 00:11:50.718227 systemd-journald[1458]: Journal started
May 9 00:11:50.718253 systemd-journald[1458]: Runtime Journal (/run/log/journal/ec233103ced4a893b3f10e08b922e522) is 4.7M, max 38.2M, 33.4M free.
May 9 00:11:50.417676 systemd[1]: Queued start job for default target multi-user.target.
May 9 00:11:50.439368 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
May 9 00:11:50.720267 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 9 00:11:50.439865 systemd[1]: systemd-journald.service: Deactivated successfully.
May 9 00:11:50.725208 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 00:11:50.722226 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 00:11:50.722356 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 00:11:50.722940 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 9 00:11:50.723053 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 9 00:11:50.724111 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 00:11:50.724248 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 00:11:50.724818 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 9 00:11:50.725541 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 9 00:11:50.736476 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 9 00:11:50.743199 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 9 00:11:50.749206 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 9 00:11:50.749660 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 9 00:11:50.749699 systemd[1]: Reached target local-fs.target - Local File Systems.
May 9 00:11:50.751109 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 9 00:11:50.761584 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 9 00:11:50.763910 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 9 00:11:50.765653 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 00:11:50.772237 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 9 00:11:50.775228 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 9 00:11:50.777170 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 9 00:11:50.778306 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 9 00:11:50.778741 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 9 00:11:50.787332 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 9 00:11:50.791245 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 9 00:11:50.794805 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 9 00:11:50.796113 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 00:11:50.801220 systemd-journald[1458]: Time spent on flushing to /var/log/journal/ec233103ced4a893b3f10e08b922e522 is 44.650ms for 996 entries.
May 9 00:11:50.801220 systemd-journald[1458]: System Journal (/var/log/journal/ec233103ced4a893b3f10e08b922e522) is 8.0M, max 195.6M, 187.6M free.
May 9 00:11:50.870779 systemd-journald[1458]: Received client request to flush runtime journal.
May 9 00:11:50.870828 kernel: loop0: detected capacity change from 0 to 62848
May 9 00:11:50.797353 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 9 00:11:50.797971 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 9 00:11:50.799717 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 9 00:11:50.816640 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 00:11:50.848224 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 9 00:11:50.848883 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 9 00:11:50.861467 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 9 00:11:50.873764 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 9 00:11:50.907240 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 9 00:11:50.908296 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 9 00:11:50.910994 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 00:11:50.916540 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 00:11:50.924287 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 9 00:11:50.935690 udevadm[1517]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 9 00:11:50.936403 systemd-tmpfiles[1500]: ACLs are not supported, ignoring.
May 9 00:11:50.936421 systemd-tmpfiles[1500]: ACLs are not supported, ignoring.
May 9 00:11:50.942887 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 00:11:50.954265 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 9 00:11:50.954629 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 9 00:11:50.990182 kernel: loop1: detected capacity change from 0 to 138184
May 9 00:11:51.011867 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 9 00:11:51.023470 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 00:11:51.055121 systemd-tmpfiles[1524]: ACLs are not supported, ignoring.
May 9 00:11:51.055150 systemd-tmpfiles[1524]: ACLs are not supported, ignoring.
May 9 00:11:51.062853 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 00:11:51.115103 kernel: loop2: detected capacity change from 0 to 210664
May 9 00:11:51.253088 kernel: loop3: detected capacity change from 0 to 140992
May 9 00:11:51.367094 kernel: loop4: detected capacity change from 0 to 62848
May 9 00:11:51.406092 kernel: loop5: detected capacity change from 0 to 138184
May 9 00:11:51.444097 kernel: loop6: detected capacity change from 0 to 210664
May 9 00:11:51.475094 kernel: loop7: detected capacity change from 0 to 140992
May 9 00:11:51.492776 (sd-merge)[1530]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
May 9 00:11:51.493878 (sd-merge)[1530]: Merged extensions into '/usr'.
May 9 00:11:51.500169 systemd[1]: Reloading requested from client PID 1499 ('systemd-sysext') (unit systemd-sysext.service)...
May 9 00:11:51.500299 systemd[1]: Reloading...
May 9 00:11:51.575137 zram_generator::config[1556]: No configuration found.
May 9 00:11:51.593110 ldconfig[1495]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 9 00:11:51.718183 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 00:11:51.771946 systemd[1]: Reloading finished in 271 ms.
May 9 00:11:51.802450 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 9 00:11:51.803251 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 9 00:11:51.803920 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 9 00:11:51.813300 systemd[1]: Starting ensure-sysext.service...
May 9 00:11:51.815875 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 00:11:51.827293 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 00:11:51.833100 systemd[1]: Reloading requested from client PID 1609 ('systemctl') (unit ensure-sysext.service)...
May 9 00:11:51.833123 systemd[1]: Reloading...
May 9 00:11:51.855276 systemd-tmpfiles[1610]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 9 00:11:51.855793 systemd-tmpfiles[1610]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 9 00:11:51.862533 systemd-tmpfiles[1610]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 9 00:11:51.862991 systemd-tmpfiles[1610]: ACLs are not supported, ignoring.
May 9 00:11:51.865133 systemd-tmpfiles[1610]: ACLs are not supported, ignoring.
May 9 00:11:51.873031 systemd-tmpfiles[1610]: Detected autofs mount point /boot during canonicalization of boot.
May 9 00:11:51.873046 systemd-tmpfiles[1610]: Skipping /boot
May 9 00:11:51.894126 systemd-tmpfiles[1610]: Detected autofs mount point /boot during canonicalization of boot.
May 9 00:11:51.894146 systemd-tmpfiles[1610]: Skipping /boot
May 9 00:11:51.897391 systemd-udevd[1611]: Using default interface naming scheme 'v255'.
May 9 00:11:51.954095 zram_generator::config[1640]: No configuration found.
May 9 00:11:52.054587 (udev-worker)[1656]: Network interface NamePolicy= disabled on kernel command line.
May 9 00:11:52.164615 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1647)
May 9 00:11:52.189111 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
May 9 00:11:52.225097 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 9 00:11:52.230475 kernel: ACPI: button: Power Button [PWRF]
May 9 00:11:52.230568 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
May 9 00:11:52.235094 kernel: ACPI: button: Sleep Button [SLPF]
May 9 00:11:52.294102 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5
May 9 00:11:52.288421 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 00:11:52.414998 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 9 00:11:52.415749 systemd[1]: Reloading finished in 582 ms.
May 9 00:11:52.438237 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 00:11:52.447779 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 00:11:52.477109 kernel: mousedev: PS/2 mouse device common for all mice
May 9 00:11:52.507900 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 9 00:11:52.513725 systemd[1]: Finished ensure-sysext.service.
May 9 00:11:52.526726 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
May 9 00:11:52.527501 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:11:52.536308 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 9 00:11:52.542322 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 9 00:11:52.543219 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 00:11:52.545382 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 9 00:11:52.551296 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 00:11:52.554745 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 9 00:11:52.559292 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 00:11:52.567386 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 00:11:52.568754 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 00:11:52.578437 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 9 00:11:52.594200 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 9 00:11:52.600295 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 9 00:11:52.624973 lvm[1803]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 9 00:11:52.626885 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 9 00:11:52.627551 systemd[1]: Reached target time-set.target - System Time Set.
May 9 00:11:52.639463 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 9 00:11:52.651335 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 00:11:52.653177 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:11:52.654563 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 00:11:52.655119 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 00:11:52.656967 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 9 00:11:52.657797 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 9 00:11:52.664573 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 00:11:52.665183 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 00:11:52.666898 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 00:11:52.667794 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 00:11:52.677487 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 9 00:11:52.685669 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 00:11:52.694396 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 9 00:11:52.695041 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 9 00:11:52.695149 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 9 00:11:52.702273 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 9 00:11:52.705043 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 9 00:11:52.709143 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 9 00:11:52.729445 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 9 00:11:52.732524 lvm[1835]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 9 00:11:52.736510 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 9 00:11:52.738991 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 9 00:11:52.743603 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 9 00:11:52.754646 augenrules[1846]: No rules
May 9 00:11:52.755354 systemd[1]: audit-rules.service: Deactivated successfully.
May 9 00:11:52.755619 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 9 00:11:52.773722 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 9 00:11:52.780969 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 9 00:11:52.800049 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 9 00:11:52.843148 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:11:52.891161 systemd-networkd[1815]: lo: Link UP
May 9 00:11:52.891178 systemd-networkd[1815]: lo: Gained carrier
May 9 00:11:52.893347 systemd-networkd[1815]: Enumeration completed
May 9 00:11:52.893491 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 9 00:11:52.894590 systemd-networkd[1815]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 00:11:52.894602 systemd-networkd[1815]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 9 00:11:52.900333 systemd-networkd[1815]: eth0: Link UP
May 9 00:11:52.900540 systemd-networkd[1815]: eth0: Gained carrier
May 9 00:11:52.900575 systemd-networkd[1815]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 00:11:52.902964 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 9 00:11:52.904715 systemd-resolved[1818]: Positive Trust Anchors:
May 9 00:11:52.904731 systemd-resolved[1818]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 9 00:11:52.904785 systemd-resolved[1818]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 9 00:11:52.910460 systemd-networkd[1815]: eth0: DHCPv4 address 172.31.17.17/20, gateway 172.31.16.1 acquired from 172.31.16.1
May 9 00:11:52.912435 systemd-resolved[1818]: Defaulting to hostname 'linux'.
May 9 00:11:52.915001 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 9 00:11:52.915771 systemd[1]: Reached target network.target - Network.
May 9 00:11:52.916385 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 9 00:11:52.917051 systemd[1]: Reached target sysinit.target - System Initialization.
May 9 00:11:52.918275 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 9 00:11:52.919126 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 9 00:11:52.920010 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 9 00:11:52.920810 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 9 00:11:52.921440 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 9 00:11:52.921961 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 9 00:11:52.921989 systemd[1]: Reached target paths.target - Path Units.
May 9 00:11:52.922417 systemd[1]: Reached target timers.target - Timer Units.
May 9 00:11:52.925564 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 9 00:11:52.927573 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 9 00:11:52.933306 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 9 00:11:52.934496 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 9 00:11:52.935055 systemd[1]: Reached target sockets.target - Socket Units.
May 9 00:11:52.935502 systemd[1]: Reached target basic.target - Basic System.
May 9 00:11:52.935931 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 9 00:11:52.935971 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 9 00:11:52.937233 systemd[1]: Starting containerd.service - containerd container runtime...
May 9 00:11:52.941289 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 9 00:11:52.947437 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 9 00:11:52.955231 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 9 00:11:52.960645 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 9 00:11:52.961906 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 9 00:11:52.977736 jq[1871]: false
May 9 00:11:52.978468 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 9 00:11:52.984325 systemd[1]: Started ntpd.service - Network Time Service.
May 9 00:11:53.002424 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 9 00:11:53.005033 systemd[1]: Starting setup-oem.service - Setup OEM...
May 9 00:11:53.014384 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 9 00:11:53.017401 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 9 00:11:53.028479 systemd[1]: Starting systemd-logind.service - User Login Management...
May 9 00:11:53.031659 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 9 00:11:53.032351 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 9 00:11:53.034813 systemd[1]: Starting update-engine.service - Update Engine...
May 9 00:11:53.045907 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 9 00:11:53.051847 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 9 00:11:53.052098 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 9 00:11:53.115382 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 9 00:11:53.115608 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 9 00:11:53.130140 ntpd[1874]: ntpd 4.2.8p17@1.4004-o Thu May 8 21:41:51 UTC 2025 (1): Starting
May 9 00:11:53.141812 ntpd[1874]: 9 May 00:11:53 ntpd[1874]: ntpd 4.2.8p17@1.4004-o Thu May 8 21:41:51 UTC 2025 (1): Starting
May 9 00:11:53.141812 ntpd[1874]: 9 May 00:11:53 ntpd[1874]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
May 9 00:11:53.141812 ntpd[1874]: 9 May 00:11:53 ntpd[1874]: ----------------------------------------------------
May 9 00:11:53.141812 ntpd[1874]: 9 May 00:11:53 ntpd[1874]: ntp-4 is maintained by Network Time Foundation,
May 9 00:11:53.141812 ntpd[1874]: 9 May 00:11:53 ntpd[1874]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
May 9 00:11:53.141812 ntpd[1874]: 9 May 00:11:53 ntpd[1874]: corporation. Support and training for ntp-4 are
May 9 00:11:53.141812 ntpd[1874]: 9 May 00:11:53 ntpd[1874]: available at https://www.nwtime.org/support
May 9 00:11:53.141812 ntpd[1874]: 9 May 00:11:53 ntpd[1874]: ----------------------------------------------------
May 9 00:11:53.141812 ntpd[1874]: 9 May 00:11:53 ntpd[1874]: proto: precision = 0.094 usec (-23)
May 9 00:11:53.141812 ntpd[1874]: 9 May 00:11:53 ntpd[1874]: basedate set to 2025-04-26
May 9 00:11:53.141812 ntpd[1874]: 9 May 00:11:53 ntpd[1874]: gps base set to 2025-04-27 (week 2364)
May 9 00:11:53.142434 jq[1886]: true
May 9 00:11:53.130175 ntpd[1874]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
May 9 00:11:53.130185 ntpd[1874]: ----------------------------------------------------
May 9 00:11:53.130195 ntpd[1874]: ntp-4 is maintained by Network Time Foundation,
May 9 00:11:53.130204 ntpd[1874]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
May 9 00:11:53.130213 ntpd[1874]: corporation. Support and training for ntp-4 are
May 9 00:11:53.130222 ntpd[1874]: available at https://www.nwtime.org/support
May 9 00:11:53.130232 ntpd[1874]: ----------------------------------------------------
May 9 00:11:53.145234 ntpd[1874]: 9 May 00:11:53 ntpd[1874]: Listen and drop on 0 v6wildcard [::]:123
May 9 00:11:53.145234 ntpd[1874]: 9 May 00:11:53 ntpd[1874]: Listen and drop on 1 v4wildcard 0.0.0.0:123
May 9 00:11:53.133849 ntpd[1874]: proto: precision = 0.094 usec (-23)
May 9 00:11:53.134238 ntpd[1874]: basedate set to 2025-04-26
May 9 00:11:53.134257 ntpd[1874]: gps base set to 2025-04-27 (week 2364)
May 9 00:11:53.144252 ntpd[1874]: Listen and drop on 0 v6wildcard [::]:123
May 9 00:11:53.144306 ntpd[1874]: Listen and drop on 1 v4wildcard 0.0.0.0:123
May 9 00:11:53.145829 ntpd[1874]: Listen normally on 2 lo 127.0.0.1:123
May 9 00:11:53.150211 ntpd[1874]: 9 May 00:11:53 ntpd[1874]: Listen normally on 2 lo 127.0.0.1:123
May 9 00:11:53.150211 ntpd[1874]: 9 May 00:11:53 ntpd[1874]: Listen normally on 3 eth0 172.31.17.17:123
May 9 00:11:53.150211 ntpd[1874]: 9 May 00:11:53 ntpd[1874]: Listen normally on 4 lo [::1]:123
May 9 00:11:53.150211 ntpd[1874]: 9 May 00:11:53 ntpd[1874]: bind(21) AF_INET6 fe80::4c7:4aff:fe12:2c81%2#123 flags 0x11 failed: Cannot assign requested address
May 9 00:11:53.150211 ntpd[1874]: 9 May 00:11:53 ntpd[1874]: unable to create socket on eth0 (5) for fe80::4c7:4aff:fe12:2c81%2#123
May 9 00:11:53.150211 ntpd[1874]: 9 May 00:11:53 ntpd[1874]: failed to init interface for address fe80::4c7:4aff:fe12:2c81%2
May 9 00:11:53.150211 ntpd[1874]: 9 May 00:11:53 ntpd[1874]: Listening on routing socket on fd #21 for interface updates
May 9 00:11:53.150211 ntpd[1874]: 9 May 00:11:53 ntpd[1874]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 9 00:11:53.150211 ntpd[1874]: 9 May 00:11:53 ntpd[1874]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 9 00:11:53.145883 ntpd[1874]: Listen normally on 3 eth0 172.31.17.17:123
May 9 00:11:53.145961 ntpd[1874]: Listen normally on 4 lo [::1]:123
May 9 00:11:53.146016 ntpd[1874]: bind(21) AF_INET6 fe80::4c7:4aff:fe12:2c81%2#123 flags 0x11 failed: Cannot assign requested address
May 9 00:11:53.146039 ntpd[1874]: unable to create socket on eth0 (5) for fe80::4c7:4aff:fe12:2c81%2#123
May 9 00:11:53.146057 ntpd[1874]: failed to init interface for address fe80::4c7:4aff:fe12:2c81%2
May 9 00:11:53.146127 ntpd[1874]: Listening on routing socket on fd #21 for interface updates
May 9 00:11:53.147537 ntpd[1874]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 9 00:11:53.147565 ntpd[1874]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 9 00:11:53.151384 systemd[1]: motdgen.service: Deactivated successfully.
May 9 00:11:53.151618 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 9 00:11:53.167638 systemd-logind[1882]: Watching system buttons on /dev/input/event1 (Power Button)
May 9 00:11:53.168121 systemd-logind[1882]: Watching system buttons on /dev/input/event2 (Sleep Button)
May 9 00:11:53.168156 systemd-logind[1882]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 9 00:11:53.168567 systemd-logind[1882]: New seat seat0.
May 9 00:11:53.168692 dbus-daemon[1870]: [system] SELinux support is enabled
May 9 00:11:53.168922 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 9 00:11:53.175306 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 9 00:11:53.175349 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 9 00:11:53.175522 (ntainerd)[1898]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 9 00:11:53.176147 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 9 00:11:53.176173 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 9 00:11:53.177505 systemd[1]: Started systemd-logind.service - User Login Management.
May 9 00:11:53.190579 extend-filesystems[1872]: Found loop4
May 9 00:11:53.190579 extend-filesystems[1872]: Found loop5
May 9 00:11:53.190579 extend-filesystems[1872]: Found loop6
May 9 00:11:53.190579 extend-filesystems[1872]: Found loop7
May 9 00:11:53.190579 extend-filesystems[1872]: Found nvme0n1
May 9 00:11:53.190579 extend-filesystems[1872]: Found nvme0n1p1
May 9 00:11:53.190579 extend-filesystems[1872]: Found nvme0n1p2
May 9 00:11:53.190579 extend-filesystems[1872]: Found nvme0n1p3
May 9 00:11:53.190579 extend-filesystems[1872]: Found usr
May 9 00:11:53.190579 extend-filesystems[1872]: Found nvme0n1p4
May 9 00:11:53.190579 extend-filesystems[1872]: Found nvme0n1p6
May 9 00:11:53.190579 extend-filesystems[1872]: Found nvme0n1p7
May 9 00:11:53.190579 extend-filesystems[1872]: Found nvme0n1p9
May 9 00:11:53.190579 extend-filesystems[1872]: Checking size of /dev/nvme0n1p9
May 9 00:11:53.191993 dbus-daemon[1870]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1815 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
May 9 00:11:53.224049 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
May 9 00:11:53.231269 update_engine[1884]: I20250509 00:11:53.199008 1884 main.cc:92] Flatcar Update Engine starting
May 9 00:11:53.199498 dbus-daemon[1870]: [system] Successfully activated service 'org.freedesktop.systemd1'
May 9 00:11:53.246101 tar[1888]: linux-amd64/helm
May 9 00:11:53.250300 jq[1904]: true
May 9 00:11:53.256996 update_engine[1884]: I20250509 00:11:53.256937 1884 update_check_scheduler.cc:74] Next update check in 10m36s
May 9 00:11:53.270209 systemd[1]: Started update-engine.service - Update Engine.
May 9 00:11:53.283504 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 9 00:11:53.287674 extend-filesystems[1872]: Resized partition /dev/nvme0n1p9
May 9 00:11:53.296483 systemd[1]: Finished setup-oem.service - Setup OEM.
May 9 00:11:53.311101 extend-filesystems[1923]: resize2fs 1.47.1 (20-May-2024)
May 9 00:11:53.325310 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
May 9 00:11:53.355978 coreos-metadata[1869]: May 09 00:11:53.353 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
May 9 00:11:53.357216 coreos-metadata[1869]: May 09 00:11:53.356 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
May 9 00:11:53.359116 coreos-metadata[1869]: May 09 00:11:53.358 INFO Fetch successful
May 9 00:11:53.359116 coreos-metadata[1869]: May 09 00:11:53.358 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
May 9 00:11:53.361428 coreos-metadata[1869]: May 09 00:11:53.361 INFO Fetch successful
May 9 00:11:53.361428 coreos-metadata[1869]: May 09 00:11:53.361 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
May 9 00:11:53.367636 coreos-metadata[1869]: May 09 00:11:53.365 INFO Fetch successful
May 9 00:11:53.367636 coreos-metadata[1869]: May 09 00:11:53.365 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
May 9 00:11:53.369474 coreos-metadata[1869]: May 09 00:11:53.369 INFO Fetch successful
May 9 00:11:53.369474 coreos-metadata[1869]: May 09 00:11:53.369 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
May 9 00:11:53.376021 coreos-metadata[1869]: May 09 00:11:53.373 INFO Fetch failed with 404: resource not found
May 9 00:11:53.376021 coreos-metadata[1869]: May 09 00:11:53.373 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
May 9 00:11:53.376642 coreos-metadata[1869]: May 09 00:11:53.376 INFO Fetch successful
May 9 00:11:53.376642 coreos-metadata[1869]: May 09 00:11:53.376 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
May 9 00:11:53.382566 coreos-metadata[1869]: May 09 00:11:53.379 INFO Fetch successful
May 9 00:11:53.382566 coreos-metadata[1869]: May 09 00:11:53.380 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
May 9 00:11:53.382566 coreos-metadata[1869]: May 09 00:11:53.382 INFO Fetch successful
May 9 00:11:53.382566 coreos-metadata[1869]: May 09 00:11:53.382 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
May 9 00:11:53.387440 coreos-metadata[1869]: May 09 00:11:53.383 INFO Fetch successful
May 9 00:11:53.387440 coreos-metadata[1869]: May 09 00:11:53.384 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
May 9 00:11:53.387440 coreos-metadata[1869]: May 09 00:11:53.386 INFO Fetch successful
May 9 00:11:53.486098 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
May 9 00:11:53.494099 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1649)
May 9 00:11:53.498709 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 9 00:11:53.500780 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 9 00:11:53.511926 extend-filesystems[1923]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
May 9 00:11:53.511926 extend-filesystems[1923]: old_desc_blocks = 1, new_desc_blocks = 1
May 9 00:11:53.511926 extend-filesystems[1923]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
May 9 00:11:53.507511 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 9 00:11:53.525900 bash[1954]: Updated "/home/core/.ssh/authorized_keys"
May 9 00:11:53.526037 extend-filesystems[1872]: Resized filesystem in /dev/nvme0n1p9
May 9 00:11:53.507738 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 9 00:11:53.520565 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 9 00:11:53.547215 systemd[1]: Starting sshkeys.service...
May 9 00:11:53.617547 dbus-daemon[1870]: [system] Successfully activated service 'org.freedesktop.hostname1'
May 9 00:11:53.617753 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
May 9 00:11:53.619314 dbus-daemon[1870]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1914 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
May 9 00:11:53.629880 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 9 00:11:53.642302 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 9 00:11:53.653673 systemd[1]: Starting polkit.service - Authorization Manager...
May 9 00:11:53.732872 polkitd[2008]: Started polkitd version 121
May 9 00:11:53.781805 polkitd[2008]: Loading rules from directory /etc/polkit-1/rules.d
May 9 00:11:53.781901 polkitd[2008]: Loading rules from directory /usr/share/polkit-1/rules.d
May 9 00:11:53.783328 polkitd[2008]: Finished loading, compiling and executing 2 rules
May 9 00:11:53.790380 dbus-daemon[1870]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
May 9 00:11:53.790984 systemd[1]: Started polkit.service - Authorization Manager.
May 9 00:11:53.792772 polkitd[2008]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
May 9 00:11:53.832606 locksmithd[1922]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 9 00:11:53.860795 systemd-hostnamed[1914]: Hostname set to (transient)
May 9 00:11:53.861768 systemd-resolved[1818]: System hostname changed to 'ip-172-31-17-17'.
May 9 00:11:53.890291 coreos-metadata[2006]: May 09 00:11:53.890 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
May 9 00:11:53.891870 coreos-metadata[2006]: May 09 00:11:53.891 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
May 9 00:11:53.894209 coreos-metadata[2006]: May 09 00:11:53.894 INFO Fetch successful
May 9 00:11:53.894308 coreos-metadata[2006]: May 09 00:11:53.894 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
May 9 00:11:53.896704 coreos-metadata[2006]: May 09 00:11:53.896 INFO Fetch successful
May 9 00:11:53.898139 unknown[2006]: wrote ssh authorized keys file for user: core
May 9 00:11:53.943202 update-ssh-keys[2067]: Updated "/home/core/.ssh/authorized_keys"
May 9 00:11:53.945820 containerd[1898]: time="2025-05-09T00:11:53.945269421Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
May 9 00:11:53.946391 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
May 9 00:11:53.954150 systemd[1]: Finished sshkeys.service.
May 9 00:11:54.050144 containerd[1898]: time="2025-05-09T00:11:54.047638558Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 9 00:11:54.056284 containerd[1898]: time="2025-05-09T00:11:54.056226713Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 9 00:11:54.056450 containerd[1898]: time="2025-05-09T00:11:54.056428956Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 9 00:11:54.056551 containerd[1898]: time="2025-05-09T00:11:54.056535062Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 9 00:11:54.056803 containerd[1898]: time="2025-05-09T00:11:54.056785602Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 9 00:11:54.056900 containerd[1898]: time="2025-05-09T00:11:54.056885743Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 9 00:11:54.057043 containerd[1898]: time="2025-05-09T00:11:54.057023601Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 9 00:11:54.057163 containerd[1898]: time="2025-05-09T00:11:54.057147924Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 9 00:11:54.057485 containerd[1898]: time="2025-05-09T00:11:54.057460378Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 9 00:11:54.060099 containerd[1898]: time="2025-05-09T00:11:54.059108401Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 9 00:11:54.060099 containerd[1898]: time="2025-05-09T00:11:54.059150935Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 9 00:11:54.060099 containerd[1898]: time="2025-05-09T00:11:54.059167377Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 9 00:11:54.060099 containerd[1898]: time="2025-05-09T00:11:54.059301583Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 9 00:11:54.060099 containerd[1898]: time="2025-05-09T00:11:54.059570059Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 9 00:11:54.060099 containerd[1898]: time="2025-05-09T00:11:54.059776150Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 9 00:11:54.060099 containerd[1898]: time="2025-05-09T00:11:54.059796440Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 9 00:11:54.060099 containerd[1898]: time="2025-05-09T00:11:54.059894375Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 9 00:11:54.060099 containerd[1898]: time="2025-05-09T00:11:54.059951185Z" level=info msg="metadata content store policy set" policy=shared
May 9 00:11:54.065966 containerd[1898]: time="2025-05-09T00:11:54.065918303Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 9 00:11:54.066180 containerd[1898]: time="2025-05-09T00:11:54.066157171Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 9 00:11:54.068174 containerd[1898]: time="2025-05-09T00:11:54.068145456Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 9 00:11:54.069097 containerd[1898]: time="2025-05-09T00:11:54.068284619Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 9 00:11:54.069097 containerd[1898]: time="2025-05-09T00:11:54.068312701Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 9 00:11:54.069097 containerd[1898]: time="2025-05-09T00:11:54.068513359Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 9 00:11:54.069097 containerd[1898]: time="2025-05-09T00:11:54.068843846Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 9 00:11:54.069097 containerd[1898]: time="2025-05-09T00:11:54.068968819Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 9 00:11:54.069097 containerd[1898]: time="2025-05-09T00:11:54.068990922Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 9 00:11:54.069097 containerd[1898]: time="2025-05-09T00:11:54.069013071Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 9 00:11:54.069097 containerd[1898]: time="2025-05-09T00:11:54.069036192Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 9 00:11:54.069097 containerd[1898]: time="2025-05-09T00:11:54.069055844Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 9 00:11:54.069510 containerd[1898]: time="2025-05-09T00:11:54.069491121Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 9 00:11:54.069586 containerd[1898]: time="2025-05-09T00:11:54.069572141Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 9 00:11:54.069659 containerd[1898]: time="2025-05-09T00:11:54.069645981Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 9 00:11:54.069730 containerd[1898]: time="2025-05-09T00:11:54.069717452Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 9 00:11:54.069797 containerd[1898]: time="2025-05-09T00:11:54.069783257Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 9 00:11:54.069872 containerd[1898]: time="2025-05-09T00:11:54.069858593Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 9 00:11:54.069950 containerd[1898]: time="2025-05-09T00:11:54.069937854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 9 00:11:54.070022 containerd[1898]: time="2025-05-09T00:11:54.070008813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..."
type=io.containerd.grpc.v1 May 9 00:11:54.070107 containerd[1898]: time="2025-05-09T00:11:54.070093672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 9 00:11:54.070193 containerd[1898]: time="2025-05-09T00:11:54.070180152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 9 00:11:54.070267 containerd[1898]: time="2025-05-09T00:11:54.070254269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 9 00:11:54.072089 containerd[1898]: time="2025-05-09T00:11:54.071114751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 9 00:11:54.072089 containerd[1898]: time="2025-05-09T00:11:54.071143304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 9 00:11:54.072089 containerd[1898]: time="2025-05-09T00:11:54.071228695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 9 00:11:54.072089 containerd[1898]: time="2025-05-09T00:11:54.071256764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 9 00:11:54.072089 containerd[1898]: time="2025-05-09T00:11:54.071284240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 9 00:11:54.072089 containerd[1898]: time="2025-05-09T00:11:54.071303518Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 9 00:11:54.072089 containerd[1898]: time="2025-05-09T00:11:54.071323217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 9 00:11:54.072089 containerd[1898]: time="2025-05-09T00:11:54.071343063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 May 9 00:11:54.072089 containerd[1898]: time="2025-05-09T00:11:54.071364128Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 9 00:11:54.072089 containerd[1898]: time="2025-05-09T00:11:54.071399524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 9 00:11:54.072089 containerd[1898]: time="2025-05-09T00:11:54.071420491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 9 00:11:54.072089 containerd[1898]: time="2025-05-09T00:11:54.071436963Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 9 00:11:54.072089 containerd[1898]: time="2025-05-09T00:11:54.071490158Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 9 00:11:54.072089 containerd[1898]: time="2025-05-09T00:11:54.071516044Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 9 00:11:54.072658 containerd[1898]: time="2025-05-09T00:11:54.071532279Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 9 00:11:54.072658 containerd[1898]: time="2025-05-09T00:11:54.071550794Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 9 00:11:54.072658 containerd[1898]: time="2025-05-09T00:11:54.071564744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 9 00:11:54.072658 containerd[1898]: time="2025-05-09T00:11:54.071582451Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 May 9 00:11:54.072658 containerd[1898]: time="2025-05-09T00:11:54.071598075Z" level=info msg="NRI interface is disabled by configuration." May 9 00:11:54.072658 containerd[1898]: time="2025-05-09T00:11:54.071613047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 9 00:11:54.074127 containerd[1898]: time="2025-05-09T00:11:54.072025783Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 9 00:11:54.077104 containerd[1898]: time="2025-05-09T00:11:54.074486370Z" level=info msg="Connect containerd service" May 9 00:11:54.077104 containerd[1898]: time="2025-05-09T00:11:54.074548091Z" level=info msg="using legacy CRI server" May 9 00:11:54.077104 containerd[1898]: time="2025-05-09T00:11:54.074561745Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 9 00:11:54.077104 containerd[1898]: time="2025-05-09T00:11:54.074737984Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 9 00:11:54.077979 containerd[1898]: time="2025-05-09T00:11:54.077891113Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 9 00:11:54.078255 containerd[1898]: time="2025-05-09T00:11:54.078220669Z" level=info msg="Start subscribing containerd event" May 9 
00:11:54.078365 containerd[1898]: time="2025-05-09T00:11:54.078349827Z" level=info msg="Start recovering state" May 9 00:11:54.078515 containerd[1898]: time="2025-05-09T00:11:54.078501418Z" level=info msg="Start event monitor" May 9 00:11:54.080461 containerd[1898]: time="2025-05-09T00:11:54.080102130Z" level=info msg="Start snapshots syncer" May 9 00:11:54.080461 containerd[1898]: time="2025-05-09T00:11:54.080130058Z" level=info msg="Start cni network conf syncer for default" May 9 00:11:54.080461 containerd[1898]: time="2025-05-09T00:11:54.080141762Z" level=info msg="Start streaming server" May 9 00:11:54.080881 containerd[1898]: time="2025-05-09T00:11:54.080860247Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 9 00:11:54.081018 containerd[1898]: time="2025-05-09T00:11:54.081001365Z" level=info msg=serving... address=/run/containerd/containerd.sock May 9 00:11:54.081307 systemd[1]: Started containerd.service - containerd container runtime. May 9 00:11:54.088098 containerd[1898]: time="2025-05-09T00:11:54.088041365Z" level=info msg="containerd successfully booted in 0.146314s" May 9 00:11:54.130624 ntpd[1874]: bind(24) AF_INET6 fe80::4c7:4aff:fe12:2c81%2#123 flags 0x11 failed: Cannot assign requested address May 9 00:11:54.130673 ntpd[1874]: unable to create socket on eth0 (6) for fe80::4c7:4aff:fe12:2c81%2#123 May 9 00:11:54.131040 ntpd[1874]: 9 May 00:11:54 ntpd[1874]: bind(24) AF_INET6 fe80::4c7:4aff:fe12:2c81%2#123 flags 0x11 failed: Cannot assign requested address May 9 00:11:54.131040 ntpd[1874]: 9 May 00:11:54 ntpd[1874]: unable to create socket on eth0 (6) for fe80::4c7:4aff:fe12:2c81%2#123 May 9 00:11:54.131040 ntpd[1874]: 9 May 00:11:54 ntpd[1874]: failed to init interface for address fe80::4c7:4aff:fe12:2c81%2 May 9 00:11:54.130689 ntpd[1874]: failed to init interface for address fe80::4c7:4aff:fe12:2c81%2 May 9 00:11:54.282216 sshd_keygen[1913]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 9 00:11:54.319937 
systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 9 00:11:54.330196 systemd[1]: Starting issuegen.service - Generate /run/issue... May 9 00:11:54.342238 systemd[1]: issuegen.service: Deactivated successfully. May 9 00:11:54.342512 systemd[1]: Finished issuegen.service - Generate /run/issue. May 9 00:11:54.352430 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 9 00:11:54.358278 systemd-networkd[1815]: eth0: Gained IPv6LL May 9 00:11:54.366502 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 9 00:11:54.370489 systemd[1]: Reached target network-online.target - Network is Online. May 9 00:11:54.382566 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. May 9 00:11:54.389082 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:11:54.397462 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 9 00:11:54.399768 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 9 00:11:54.408282 systemd[1]: Started getty@tty1.service - Getty on tty1. May 9 00:11:54.420817 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 9 00:11:54.422675 systemd[1]: Reached target getty.target - Login Prompts. May 9 00:11:54.464585 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 9 00:11:54.480799 tar[1888]: linux-amd64/LICENSE May 9 00:11:54.482854 tar[1888]: linux-amd64/README.md May 9 00:11:54.483420 amazon-ssm-agent[2088]: Initializing new seelog logger May 9 00:11:54.483695 amazon-ssm-agent[2088]: New Seelog Logger Creation Complete May 9 00:11:54.483695 amazon-ssm-agent[2088]: 2025/05/09 00:11:54 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 9 00:11:54.483695 amazon-ssm-agent[2088]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
May 9 00:11:54.484125 amazon-ssm-agent[2088]: 2025/05/09 00:11:54 processing appconfig overrides May 9 00:11:54.485316 amazon-ssm-agent[2088]: 2025/05/09 00:11:54 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 9 00:11:54.485316 amazon-ssm-agent[2088]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 9 00:11:54.485316 amazon-ssm-agent[2088]: 2025/05/09 00:11:54 processing appconfig overrides May 9 00:11:54.485316 amazon-ssm-agent[2088]: 2025/05/09 00:11:54 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 9 00:11:54.485316 amazon-ssm-agent[2088]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 9 00:11:54.485316 amazon-ssm-agent[2088]: 2025/05/09 00:11:54 processing appconfig overrides May 9 00:11:54.485601 amazon-ssm-agent[2088]: 2025-05-09 00:11:54 INFO Proxy environment variables: May 9 00:11:54.489099 amazon-ssm-agent[2088]: 2025/05/09 00:11:54 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 9 00:11:54.489099 amazon-ssm-agent[2088]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 9 00:11:54.489099 amazon-ssm-agent[2088]: 2025/05/09 00:11:54 processing appconfig overrides May 9 00:11:54.496660 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
May 9 00:11:54.585425 amazon-ssm-agent[2088]: 2025-05-09 00:11:54 INFO https_proxy: May 9 00:11:54.683187 amazon-ssm-agent[2088]: 2025-05-09 00:11:54 INFO http_proxy: May 9 00:11:54.783096 amazon-ssm-agent[2088]: 2025-05-09 00:11:54 INFO no_proxy: May 9 00:11:54.856840 amazon-ssm-agent[2088]: 2025-05-09 00:11:54 INFO Checking if agent identity type OnPrem can be assumed May 9 00:11:54.856840 amazon-ssm-agent[2088]: 2025-05-09 00:11:54 INFO Checking if agent identity type EC2 can be assumed May 9 00:11:54.856840 amazon-ssm-agent[2088]: 2025-05-09 00:11:54 INFO Agent will take identity from EC2 May 9 00:11:54.856840 amazon-ssm-agent[2088]: 2025-05-09 00:11:54 INFO [amazon-ssm-agent] using named pipe channel for IPC May 9 00:11:54.856840 amazon-ssm-agent[2088]: 2025-05-09 00:11:54 INFO [amazon-ssm-agent] using named pipe channel for IPC May 9 00:11:54.856840 amazon-ssm-agent[2088]: 2025-05-09 00:11:54 INFO [amazon-ssm-agent] using named pipe channel for IPC May 9 00:11:54.856840 amazon-ssm-agent[2088]: 2025-05-09 00:11:54 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 May 9 00:11:54.856840 amazon-ssm-agent[2088]: 2025-05-09 00:11:54 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 May 9 00:11:54.856840 amazon-ssm-agent[2088]: 2025-05-09 00:11:54 INFO [amazon-ssm-agent] Starting Core Agent May 9 00:11:54.856840 amazon-ssm-agent[2088]: 2025-05-09 00:11:54 INFO [amazon-ssm-agent] registrar detected. Attempting registration May 9 00:11:54.856840 amazon-ssm-agent[2088]: 2025-05-09 00:11:54 INFO [Registrar] Starting registrar module May 9 00:11:54.856840 amazon-ssm-agent[2088]: 2025-05-09 00:11:54 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration May 9 00:11:54.856840 amazon-ssm-agent[2088]: 2025-05-09 00:11:54 INFO [EC2Identity] EC2 registration was successful. 
May 9 00:11:54.856840 amazon-ssm-agent[2088]: 2025-05-09 00:11:54 INFO [CredentialRefresher] credentialRefresher has started May 9 00:11:54.856840 amazon-ssm-agent[2088]: 2025-05-09 00:11:54 INFO [CredentialRefresher] Starting credentials refresher loop May 9 00:11:54.856840 amazon-ssm-agent[2088]: 2025-05-09 00:11:54 INFO EC2RoleProvider Successfully connected with instance profile role credentials May 9 00:11:54.881419 amazon-ssm-agent[2088]: 2025-05-09 00:11:54 INFO [CredentialRefresher] Next credential rotation will be in 31.658326165033333 minutes May 9 00:11:55.870887 amazon-ssm-agent[2088]: 2025-05-09 00:11:55 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process May 9 00:11:55.971479 amazon-ssm-agent[2088]: 2025-05-09 00:11:55 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2113) started May 9 00:11:56.072066 amazon-ssm-agent[2088]: 2025-05-09 00:11:55 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds May 9 00:11:56.281458 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:11:56.282817 systemd[1]: Reached target multi-user.target - Multi-User System. May 9 00:11:56.284739 systemd[1]: Startup finished in 578ms (kernel) + 7.061s (initrd) + 6.569s (userspace) = 14.210s. May 9 00:11:56.290258 (kubelet)[2129]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 00:11:56.671528 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 9 00:11:56.677371 systemd[1]: Started sshd@0-172.31.17.17:22-139.178.68.195:58162.service - OpenSSH per-connection server daemon (139.178.68.195:58162). 
May 9 00:11:56.856777 sshd[2139]: Accepted publickey for core from 139.178.68.195 port 58162 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM May 9 00:11:56.858904 sshd-session[2139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:11:56.870505 systemd-logind[1882]: New session 1 of user core. May 9 00:11:56.871483 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 9 00:11:56.879384 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 9 00:11:56.892142 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 9 00:11:56.898361 systemd[1]: Starting user@500.service - User Manager for UID 500... May 9 00:11:56.912421 (systemd)[2143]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 9 00:11:57.047008 systemd[2143]: Queued start job for default target default.target. May 9 00:11:57.055456 systemd[2143]: Created slice app.slice - User Application Slice. May 9 00:11:57.055502 systemd[2143]: Reached target paths.target - Paths. May 9 00:11:57.055525 systemd[2143]: Reached target timers.target - Timers. May 9 00:11:57.058114 systemd[2143]: Starting dbus.socket - D-Bus User Message Bus Socket... May 9 00:11:57.074159 systemd[2143]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 9 00:11:57.074317 systemd[2143]: Reached target sockets.target - Sockets. May 9 00:11:57.074340 systemd[2143]: Reached target basic.target - Basic System. May 9 00:11:57.074399 systemd[2143]: Reached target default.target - Main User Target. May 9 00:11:57.074440 systemd[2143]: Startup finished in 154ms. May 9 00:11:57.074542 systemd[1]: Started user@500.service - User Manager for UID 500. May 9 00:11:57.079319 systemd[1]: Started session-1.scope - Session 1 of User core. 
May 9 00:11:57.130674 ntpd[1874]: Listen normally on 7 eth0 [fe80::4c7:4aff:fe12:2c81%2]:123 May 9 00:11:57.131060 ntpd[1874]: 9 May 00:11:57 ntpd[1874]: Listen normally on 7 eth0 [fe80::4c7:4aff:fe12:2c81%2]:123 May 9 00:11:57.240847 systemd[1]: Started sshd@1-172.31.17.17:22-139.178.68.195:58174.service - OpenSSH per-connection server daemon (139.178.68.195:58174). May 9 00:11:57.410531 sshd[2156]: Accepted publickey for core from 139.178.68.195 port 58174 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM May 9 00:11:57.412049 sshd-session[2156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:11:57.418320 systemd-logind[1882]: New session 2 of user core. May 9 00:11:57.421390 systemd[1]: Started session-2.scope - Session 2 of User core. May 9 00:11:57.507897 kubelet[2129]: E0509 00:11:57.507815 2129 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 00:11:57.510418 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 00:11:57.510585 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 00:11:57.510846 systemd[1]: kubelet.service: Consumed 1.072s CPU time. May 9 00:11:57.546234 sshd[2158]: Connection closed by 139.178.68.195 port 58174 May 9 00:11:57.546777 sshd-session[2156]: pam_unix(sshd:session): session closed for user core May 9 00:11:57.550822 systemd[1]: sshd@1-172.31.17.17:22-139.178.68.195:58174.service: Deactivated successfully. May 9 00:11:57.552855 systemd[1]: session-2.scope: Deactivated successfully. May 9 00:11:57.553635 systemd-logind[1882]: Session 2 logged out. Waiting for processes to exit. May 9 00:11:57.554715 systemd-logind[1882]: Removed session 2. 
May 9 00:11:57.585905 systemd[1]: Started sshd@2-172.31.17.17:22-139.178.68.195:58176.service - OpenSSH per-connection server daemon (139.178.68.195:58176). May 9 00:11:57.746594 sshd[2164]: Accepted publickey for core from 139.178.68.195 port 58176 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM May 9 00:11:57.747853 sshd-session[2164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:11:57.752852 systemd-logind[1882]: New session 3 of user core. May 9 00:11:57.759368 systemd[1]: Started session-3.scope - Session 3 of User core. May 9 00:11:57.874190 sshd[2166]: Connection closed by 139.178.68.195 port 58176 May 9 00:11:57.874747 sshd-session[2164]: pam_unix(sshd:session): session closed for user core May 9 00:11:57.877795 systemd[1]: sshd@2-172.31.17.17:22-139.178.68.195:58176.service: Deactivated successfully. May 9 00:11:57.879569 systemd[1]: session-3.scope: Deactivated successfully. May 9 00:11:57.880923 systemd-logind[1882]: Session 3 logged out. Waiting for processes to exit. May 9 00:11:57.882067 systemd-logind[1882]: Removed session 3. May 9 00:11:57.918625 systemd[1]: Started sshd@3-172.31.17.17:22-139.178.68.195:58186.service - OpenSSH per-connection server daemon (139.178.68.195:58186). May 9 00:11:58.077539 sshd[2171]: Accepted publickey for core from 139.178.68.195 port 58186 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM May 9 00:11:58.078920 sshd-session[2171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:11:58.083851 systemd-logind[1882]: New session 4 of user core. May 9 00:11:58.097336 systemd[1]: Started session-4.scope - Session 4 of User core. May 9 00:11:58.214876 sshd[2173]: Connection closed by 139.178.68.195 port 58186 May 9 00:11:58.215471 sshd-session[2171]: pam_unix(sshd:session): session closed for user core May 9 00:11:58.218194 systemd[1]: sshd@3-172.31.17.17:22-139.178.68.195:58186.service: Deactivated successfully. 
May 9 00:11:58.219842 systemd[1]: session-4.scope: Deactivated successfully. May 9 00:11:58.221242 systemd-logind[1882]: Session 4 logged out. Waiting for processes to exit. May 9 00:11:58.222443 systemd-logind[1882]: Removed session 4. May 9 00:11:58.247940 systemd[1]: Started sshd@4-172.31.17.17:22-139.178.68.195:58202.service - OpenSSH per-connection server daemon (139.178.68.195:58202). May 9 00:11:58.416895 sshd[2178]: Accepted publickey for core from 139.178.68.195 port 58202 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM May 9 00:11:58.418202 sshd-session[2178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:11:58.423141 systemd-logind[1882]: New session 5 of user core. May 9 00:11:58.430609 systemd[1]: Started session-5.scope - Session 5 of User core. May 9 00:11:58.545799 sudo[2181]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 9 00:11:58.546125 sudo[2181]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:11:58.562983 sudo[2181]: pam_unix(sudo:session): session closed for user root May 9 00:11:58.586239 sshd[2180]: Connection closed by 139.178.68.195 port 58202 May 9 00:11:58.587009 sshd-session[2178]: pam_unix(sshd:session): session closed for user core May 9 00:11:58.590990 systemd[1]: sshd@4-172.31.17.17:22-139.178.68.195:58202.service: Deactivated successfully. May 9 00:11:58.592842 systemd[1]: session-5.scope: Deactivated successfully. May 9 00:11:58.593582 systemd-logind[1882]: Session 5 logged out. Waiting for processes to exit. May 9 00:11:58.594829 systemd-logind[1882]: Removed session 5. May 9 00:11:58.619252 systemd[1]: Started sshd@5-172.31.17.17:22-139.178.68.195:58206.service - OpenSSH per-connection server daemon (139.178.68.195:58206). 
May 9 00:11:58.788957 sshd[2186]: Accepted publickey for core from 139.178.68.195 port 58206 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM May 9 00:11:58.790623 sshd-session[2186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:11:58.795715 systemd-logind[1882]: New session 6 of user core. May 9 00:11:58.800324 systemd[1]: Started session-6.scope - Session 6 of User core. May 9 00:11:58.914003 sudo[2190]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 9 00:11:58.914816 sudo[2190]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:11:58.931962 sudo[2190]: pam_unix(sudo:session): session closed for user root May 9 00:11:58.955025 sudo[2189]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 9 00:11:58.955467 sudo[2189]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:11:59.008784 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 9 00:11:59.070404 augenrules[2212]: No rules May 9 00:11:59.070975 systemd[1]: audit-rules.service: Deactivated successfully. May 9 00:11:59.071241 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 9 00:11:59.073613 sudo[2189]: pam_unix(sudo:session): session closed for user root May 9 00:11:59.096828 sshd[2188]: Connection closed by 139.178.68.195 port 58206 May 9 00:11:59.097800 sshd-session[2186]: pam_unix(sshd:session): session closed for user core May 9 00:11:59.102153 systemd[1]: sshd@5-172.31.17.17:22-139.178.68.195:58206.service: Deactivated successfully. May 9 00:11:59.104172 systemd[1]: session-6.scope: Deactivated successfully. May 9 00:11:59.105867 systemd-logind[1882]: Session 6 logged out. Waiting for processes to exit. May 9 00:11:59.107378 systemd-logind[1882]: Removed session 6. 
May 9 00:11:59.128954 systemd[1]: Started sshd@6-172.31.17.17:22-139.178.68.195:58208.service - OpenSSH per-connection server daemon (139.178.68.195:58208). May 9 00:11:59.344099 sshd[2220]: Accepted publickey for core from 139.178.68.195 port 58208 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM May 9 00:11:59.345236 sshd-session[2220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:11:59.350622 systemd-logind[1882]: New session 7 of user core. May 9 00:11:59.360348 systemd[1]: Started session-7.scope - Session 7 of User core. May 9 00:11:59.461338 sudo[2223]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 9 00:11:59.461654 sudo[2223]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:11:59.857511 systemd[1]: Starting docker.service - Docker Application Container Engine... May 9 00:11:59.858356 (dockerd)[2240]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 9 00:12:01.315452 systemd-resolved[1818]: Clock change detected. Flushing caches. May 9 00:12:01.703991 dockerd[2240]: time="2025-05-09T00:12:01.703672749Z" level=info msg="Starting up" May 9 00:12:01.900262 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4063873042-merged.mount: Deactivated successfully. May 9 00:12:02.141806 dockerd[2240]: time="2025-05-09T00:12:02.141657798Z" level=info msg="Loading containers: start." May 9 00:12:02.550671 kernel: Initializing XFRM netlink socket May 9 00:12:02.620801 (udev-worker)[2346]: Network interface NamePolicy= disabled on kernel command line. May 9 00:12:02.773606 systemd-networkd[1815]: docker0: Link UP May 9 00:12:02.830780 dockerd[2240]: time="2025-05-09T00:12:02.830733433Z" level=info msg="Loading containers: done." 
May 9 00:12:02.878539 dockerd[2240]: time="2025-05-09T00:12:02.877889441Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 9 00:12:02.878539 dockerd[2240]: time="2025-05-09T00:12:02.878041742Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 May 9 00:12:02.878539 dockerd[2240]: time="2025-05-09T00:12:02.878215115Z" level=info msg="Daemon has completed initialization" May 9 00:12:02.988410 dockerd[2240]: time="2025-05-09T00:12:02.988336877Z" level=info msg="API listen on /run/docker.sock" May 9 00:12:02.988720 systemd[1]: Started docker.service - Docker Application Container Engine. May 9 00:12:04.500452 containerd[1898]: time="2025-05-09T00:12:04.499115681Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 9 00:12:05.076902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount213649141.mount: Deactivated successfully. 
May 9 00:12:07.018234 containerd[1898]: time="2025-05-09T00:12:07.018182004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:12:07.019559 containerd[1898]: time="2025-05-09T00:12:07.019314086Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873" May 9 00:12:07.020806 containerd[1898]: time="2025-05-09T00:12:07.020485260Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:12:07.023798 containerd[1898]: time="2025-05-09T00:12:07.023755921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:12:07.024847 containerd[1898]: time="2025-05-09T00:12:07.024806699Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.525621467s" May 9 00:12:07.024946 containerd[1898]: time="2025-05-09T00:12:07.024856052Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 9 00:12:07.049807 containerd[1898]: time="2025-05-09T00:12:07.049773404Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 9 00:12:08.928540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
May 9 00:12:08.937400 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:12:09.200011 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:12:09.211689 (kubelet)[2504]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 00:12:09.295304 kubelet[2504]: E0509 00:12:09.295178 2504 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 00:12:09.301759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 00:12:09.301968 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 00:12:09.456734 containerd[1898]: time="2025-05-09T00:12:09.456382250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:12:09.457739 containerd[1898]: time="2025-05-09T00:12:09.457498520Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534" May 9 00:12:09.459448 containerd[1898]: time="2025-05-09T00:12:09.459009978Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:12:09.462037 containerd[1898]: time="2025-05-09T00:12:09.461999307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:12:09.463482 containerd[1898]: time="2025-05-09T00:12:09.463429242Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 2.413455613s" May 9 00:12:09.463482 containerd[1898]: time="2025-05-09T00:12:09.463482797Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 9 00:12:09.490821 containerd[1898]: time="2025-05-09T00:12:09.490788967Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 9 00:12:10.972082 containerd[1898]: time="2025-05-09T00:12:10.972021085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:12:10.973346 containerd[1898]: time="2025-05-09T00:12:10.973185522Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682" May 9 00:12:10.975036 containerd[1898]: time="2025-05-09T00:12:10.974500425Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:12:10.977890 containerd[1898]: time="2025-05-09T00:12:10.977844009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:12:10.979064 containerd[1898]: time="2025-05-09T00:12:10.979020549Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id 
\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.488076228s" May 9 00:12:10.979205 containerd[1898]: time="2025-05-09T00:12:10.979068892Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 9 00:12:11.006264 containerd[1898]: time="2025-05-09T00:12:11.006219305Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 9 00:12:12.075991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1878047100.mount: Deactivated successfully. May 9 00:12:12.586141 containerd[1898]: time="2025-05-09T00:12:12.586086538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:12:12.587381 containerd[1898]: time="2025-05-09T00:12:12.587183920Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817" May 9 00:12:12.589425 containerd[1898]: time="2025-05-09T00:12:12.588353460Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:12:12.591181 containerd[1898]: time="2025-05-09T00:12:12.590519865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:12:12.591181 containerd[1898]: time="2025-05-09T00:12:12.591030288Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag 
\"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.584774407s" May 9 00:12:12.591181 containerd[1898]: time="2025-05-09T00:12:12.591060722Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 9 00:12:12.617054 containerd[1898]: time="2025-05-09T00:12:12.617015176Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 9 00:12:13.250871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3431372013.mount: Deactivated successfully. May 9 00:12:14.192748 containerd[1898]: time="2025-05-09T00:12:14.192679801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:12:14.193948 containerd[1898]: time="2025-05-09T00:12:14.193735571Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 9 00:12:14.195189 containerd[1898]: time="2025-05-09T00:12:14.195104334Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:12:14.198495 containerd[1898]: time="2025-05-09T00:12:14.198450631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:12:14.199907 containerd[1898]: time="2025-05-09T00:12:14.199414978Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.58236278s" May 9 00:12:14.199907 containerd[1898]: time="2025-05-09T00:12:14.199449412Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 9 00:12:14.224275 containerd[1898]: time="2025-05-09T00:12:14.224224922Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 9 00:12:14.702055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3692297766.mount: Deactivated successfully. May 9 00:12:14.708599 containerd[1898]: time="2025-05-09T00:12:14.708542722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:12:14.709504 containerd[1898]: time="2025-05-09T00:12:14.709394336Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" May 9 00:12:14.711924 containerd[1898]: time="2025-05-09T00:12:14.710527058Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:12:14.713957 containerd[1898]: time="2025-05-09T00:12:14.713142423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:12:14.713957 containerd[1898]: time="2025-05-09T00:12:14.713828970Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 489.561398ms" May 9 00:12:14.713957 
containerd[1898]: time="2025-05-09T00:12:14.713853807Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 9 00:12:14.737023 containerd[1898]: time="2025-05-09T00:12:14.736986999Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 9 00:12:15.280793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3130494444.mount: Deactivated successfully. May 9 00:12:18.029390 containerd[1898]: time="2025-05-09T00:12:18.029322725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:12:18.036551 containerd[1898]: time="2025-05-09T00:12:18.036490012Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" May 9 00:12:18.049174 containerd[1898]: time="2025-05-09T00:12:18.049088973Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:12:18.065156 containerd[1898]: time="2025-05-09T00:12:18.064334336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:12:18.065156 containerd[1898]: time="2025-05-09T00:12:18.064952002Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.327923196s" May 9 00:12:18.065156 containerd[1898]: time="2025-05-09T00:12:18.064988422Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference 
\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 9 00:12:19.552517 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 9 00:12:19.562285 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:12:19.793510 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:12:19.796354 (kubelet)[2712]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 00:12:19.878688 kubelet[2712]: E0509 00:12:19.878570 2712 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 00:12:19.882851 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 00:12:19.883057 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 00:12:21.482710 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:12:21.491546 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:12:21.519454 systemd[1]: Reloading requested from client PID 2727 ('systemctl') (unit session-7.scope)... May 9 00:12:21.519827 systemd[1]: Reloading... May 9 00:12:21.614202 zram_generator::config[2766]: No configuration found. May 9 00:12:21.783468 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:12:21.871490 systemd[1]: Reloading finished in 351 ms. 
May 9 00:12:21.927492 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 9 00:12:21.927786 systemd[1]: kubelet.service: Failed with result 'signal'. May 9 00:12:21.928193 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:12:21.935706 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:12:22.134950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:12:22.149607 (kubelet)[2831]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 00:12:22.203351 kubelet[2831]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 00:12:22.203351 kubelet[2831]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 9 00:12:22.203351 kubelet[2831]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 9 00:12:22.204015 kubelet[2831]: I0509 00:12:22.203412 2831 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 00:12:22.488959 kubelet[2831]: I0509 00:12:22.488455 2831 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 9 00:12:22.488959 kubelet[2831]: I0509 00:12:22.488494 2831 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 00:12:22.488959 kubelet[2831]: I0509 00:12:22.488718 2831 server.go:927] "Client rotation is on, will bootstrap in background" May 9 00:12:22.521391 kubelet[2831]: I0509 00:12:22.521357 2831 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 00:12:22.525711 kubelet[2831]: E0509 00:12:22.525674 2831 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.17.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.17.17:6443: connect: connection refused May 9 00:12:22.543724 kubelet[2831]: I0509 00:12:22.543657 2831 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 9 00:12:22.546795 kubelet[2831]: I0509 00:12:22.546734 2831 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 00:12:22.546988 kubelet[2831]: I0509 00:12:22.546783 2831 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-17","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 9 00:12:22.546988 kubelet[2831]: I0509 00:12:22.546983 2831 topology_manager.go:138] "Creating topology manager with none policy" May 9 
00:12:22.546988 kubelet[2831]: I0509 00:12:22.546993 2831 container_manager_linux.go:301] "Creating device plugin manager" May 9 00:12:22.549323 kubelet[2831]: I0509 00:12:22.549290 2831 state_mem.go:36] "Initialized new in-memory state store" May 9 00:12:22.550457 kubelet[2831]: I0509 00:12:22.550431 2831 kubelet.go:400] "Attempting to sync node with API server" May 9 00:12:22.550457 kubelet[2831]: I0509 00:12:22.550458 2831 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 00:12:22.550583 kubelet[2831]: I0509 00:12:22.550480 2831 kubelet.go:312] "Adding apiserver pod source" May 9 00:12:22.550583 kubelet[2831]: I0509 00:12:22.550498 2831 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 00:12:22.557703 kubelet[2831]: W0509 00:12:22.557287 2831 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.17.17:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.17.17:6443: connect: connection refused May 9 00:12:22.557703 kubelet[2831]: E0509 00:12:22.557340 2831 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.17.17:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.17.17:6443: connect: connection refused May 9 00:12:22.557703 kubelet[2831]: W0509 00:12:22.557393 2831 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.17.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-17&limit=500&resourceVersion=0": dial tcp 172.31.17.17:6443: connect: connection refused May 9 00:12:22.557703 kubelet[2831]: E0509 00:12:22.557416 2831 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.17.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-17&limit=500&resourceVersion=0": dial tcp 
172.31.17.17:6443: connect: connection refused May 9 00:12:22.557703 kubelet[2831]: I0509 00:12:22.557503 2831 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 9 00:12:22.560072 kubelet[2831]: I0509 00:12:22.559969 2831 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 00:12:22.560072 kubelet[2831]: W0509 00:12:22.560046 2831 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 9 00:12:22.560875 kubelet[2831]: I0509 00:12:22.560842 2831 server.go:1264] "Started kubelet" May 9 00:12:22.565582 kubelet[2831]: I0509 00:12:22.565544 2831 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 9 00:12:22.568383 kubelet[2831]: I0509 00:12:22.566923 2831 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 00:12:22.568383 kubelet[2831]: I0509 00:12:22.567300 2831 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 00:12:22.568383 kubelet[2831]: E0509 00:12:22.567440 2831 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.17:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.17:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-17-17.183db3744be9e6f6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-17,UID:ip-172-31-17-17,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-17,},FirstTimestamp:2025-05-09 00:12:22.560818934 +0000 UTC m=+0.406669600,LastTimestamp:2025-05-09 00:12:22.560818934 +0000 UTC m=+0.406669600,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-17,}" May 9 00:12:22.571352 kubelet[2831]: I0509 00:12:22.571302 2831 server.go:455] "Adding debug handlers to kubelet server" May 9 00:12:22.573697 kubelet[2831]: I0509 00:12:22.573652 2831 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 00:12:22.576568 kubelet[2831]: E0509 00:12:22.575848 2831 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-17-17\" not found" May 9 00:12:22.576568 kubelet[2831]: I0509 00:12:22.575886 2831 volume_manager.go:291] "Starting Kubelet Volume Manager" May 9 00:12:22.578053 kubelet[2831]: I0509 00:12:22.577628 2831 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 9 00:12:22.578053 kubelet[2831]: I0509 00:12:22.577714 2831 reconciler.go:26] "Reconciler: start to sync state" May 9 00:12:22.578184 kubelet[2831]: W0509 00:12:22.578075 2831 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.17.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.17:6443: connect: connection refused May 9 00:12:22.578184 kubelet[2831]: E0509 00:12:22.578115 2831 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.17.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.17:6443: connect: connection refused May 9 00:12:22.584192 kubelet[2831]: E0509 00:12:22.583420 2831 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-17?timeout=10s\": dial tcp 172.31.17.17:6443: connect: connection refused" interval="200ms" May 9 00:12:22.588024 kubelet[2831]: E0509 00:12:22.587874 2831 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 9 00:12:22.588673 kubelet[2831]: I0509 00:12:22.588568 2831 factory.go:221] Registration of the systemd container factory successfully May 9 00:12:22.588937 kubelet[2831]: I0509 00:12:22.588710 2831 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 00:12:22.596195 kubelet[2831]: I0509 00:12:22.594769 2831 factory.go:221] Registration of the containerd container factory successfully May 9 00:12:22.622775 kubelet[2831]: I0509 00:12:22.622736 2831 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 00:12:22.624363 kubelet[2831]: I0509 00:12:22.624336 2831 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 9 00:12:22.624507 kubelet[2831]: I0509 00:12:22.624498 2831 status_manager.go:217] "Starting to sync pod status with apiserver" May 9 00:12:22.624571 kubelet[2831]: I0509 00:12:22.624564 2831 kubelet.go:2337] "Starting kubelet main sync loop" May 9 00:12:22.624671 kubelet[2831]: E0509 00:12:22.624656 2831 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 00:12:22.631538 kubelet[2831]: I0509 00:12:22.631516 2831 cpu_manager.go:214] "Starting CPU manager" policy="none" May 9 00:12:22.631538 kubelet[2831]: I0509 00:12:22.631532 2831 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 9 00:12:22.631538 kubelet[2831]: I0509 00:12:22.631548 2831 state_mem.go:36] "Initialized new in-memory state store" May 9 00:12:22.631898 kubelet[2831]: W0509 00:12:22.631491 2831 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.17.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": 
dial tcp 172.31.17.17:6443: connect: connection refused May 9 00:12:22.632058 kubelet[2831]: E0509 00:12:22.632046 2831 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.17.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.17:6443: connect: connection refused May 9 00:12:22.634403 kubelet[2831]: I0509 00:12:22.634376 2831 policy_none.go:49] "None policy: Start" May 9 00:12:22.635525 kubelet[2831]: I0509 00:12:22.635509 2831 memory_manager.go:170] "Starting memorymanager" policy="None" May 9 00:12:22.635768 kubelet[2831]: I0509 00:12:22.635759 2831 state_mem.go:35] "Initializing new in-memory state store" May 9 00:12:22.645265 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 9 00:12:22.662759 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 9 00:12:22.666464 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 9 00:12:22.675097 kubelet[2831]: I0509 00:12:22.675072 2831 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 9 00:12:22.677210 kubelet[2831]: I0509 00:12:22.675942 2831 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 9 00:12:22.677210 kubelet[2831]: I0509 00:12:22.676045 2831 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 9 00:12:22.679089 kubelet[2831]: I0509 00:12:22.679055 2831 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-17"
May 9 00:12:22.679985 kubelet[2831]: E0509 00:12:22.679955 2831 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.17.17:6443/api/v1/nodes\": dial tcp 172.31.17.17:6443: connect: connection refused" node="ip-172-31-17-17"
May 9 00:12:22.680970 kubelet[2831]: E0509 00:12:22.680520 2831 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-17-17\" not found"
May 9 00:12:22.726432 kubelet[2831]: I0509 00:12:22.725379 2831 topology_manager.go:215] "Topology Admit Handler" podUID="e4e59234c3093c76c7f2cb7bf3cfed26" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-17-17"
May 9 00:12:22.727818 kubelet[2831]: I0509 00:12:22.727770 2831 topology_manager.go:215] "Topology Admit Handler" podUID="ac5486e862ce0298641057b70c6f16f4" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-17-17"
May 9 00:12:22.729348 kubelet[2831]: I0509 00:12:22.729095 2831 topology_manager.go:215] "Topology Admit Handler" podUID="aba5d8c6014c77edabda99e72e556e00" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-17-17"
May 9 00:12:22.736054 systemd[1]: Created slice kubepods-burstable-pode4e59234c3093c76c7f2cb7bf3cfed26.slice - libcontainer container kubepods-burstable-pode4e59234c3093c76c7f2cb7bf3cfed26.slice.
May 9 00:12:22.754851 systemd[1]: Created slice kubepods-burstable-podac5486e862ce0298641057b70c6f16f4.slice - libcontainer container kubepods-burstable-podac5486e862ce0298641057b70c6f16f4.slice.
May 9 00:12:22.766678 systemd[1]: Created slice kubepods-burstable-podaba5d8c6014c77edabda99e72e556e00.slice - libcontainer container kubepods-burstable-podaba5d8c6014c77edabda99e72e556e00.slice.
May 9 00:12:22.778939 kubelet[2831]: I0509 00:12:22.778900 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ac5486e862ce0298641057b70c6f16f4-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-17\" (UID: \"ac5486e862ce0298641057b70c6f16f4\") " pod="kube-system/kube-controller-manager-ip-172-31-17-17"
May 9 00:12:22.778939 kubelet[2831]: I0509 00:12:22.778939 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ac5486e862ce0298641057b70c6f16f4-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-17\" (UID: \"ac5486e862ce0298641057b70c6f16f4\") " pod="kube-system/kube-controller-manager-ip-172-31-17-17"
May 9 00:12:22.778939 kubelet[2831]: I0509 00:12:22.778962 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e4e59234c3093c76c7f2cb7bf3cfed26-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-17\" (UID: \"e4e59234c3093c76c7f2cb7bf3cfed26\") " pod="kube-system/kube-apiserver-ip-172-31-17-17"
May 9 00:12:22.778939 kubelet[2831]: I0509 00:12:22.778979 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e4e59234c3093c76c7f2cb7bf3cfed26-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-17\" (UID: \"e4e59234c3093c76c7f2cb7bf3cfed26\") " pod="kube-system/kube-apiserver-ip-172-31-17-17"
May 9 00:12:22.778939 kubelet[2831]: I0509 00:12:22.778998 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ac5486e862ce0298641057b70c6f16f4-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-17\" (UID: \"ac5486e862ce0298641057b70c6f16f4\") " pod="kube-system/kube-controller-manager-ip-172-31-17-17"
May 9 00:12:22.779332 kubelet[2831]: I0509 00:12:22.779034 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aba5d8c6014c77edabda99e72e556e00-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-17\" (UID: \"aba5d8c6014c77edabda99e72e556e00\") " pod="kube-system/kube-scheduler-ip-172-31-17-17"
May 9 00:12:22.779332 kubelet[2831]: I0509 00:12:22.779068 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e4e59234c3093c76c7f2cb7bf3cfed26-ca-certs\") pod \"kube-apiserver-ip-172-31-17-17\" (UID: \"e4e59234c3093c76c7f2cb7bf3cfed26\") " pod="kube-system/kube-apiserver-ip-172-31-17-17"
May 9 00:12:22.779332 kubelet[2831]: I0509 00:12:22.779100 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ac5486e862ce0298641057b70c6f16f4-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-17\" (UID: \"ac5486e862ce0298641057b70c6f16f4\") " pod="kube-system/kube-controller-manager-ip-172-31-17-17"
May 9 00:12:22.779332 kubelet[2831]: I0509 00:12:22.779118 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ac5486e862ce0298641057b70c6f16f4-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-17\" (UID: \"ac5486e862ce0298641057b70c6f16f4\") " pod="kube-system/kube-controller-manager-ip-172-31-17-17"
May 9 00:12:22.784354 kubelet[2831]: E0509 00:12:22.784301 2831 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-17?timeout=10s\": dial tcp 172.31.17.17:6443: connect: connection refused" interval="400ms"
May 9 00:12:22.881570 kubelet[2831]: I0509 00:12:22.881536 2831 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-17"
May 9 00:12:22.881986 kubelet[2831]: E0509 00:12:22.881949 2831 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.17.17:6443/api/v1/nodes\": dial tcp 172.31.17.17:6443: connect: connection refused" node="ip-172-31-17-17"
May 9 00:12:23.054714 containerd[1898]: time="2025-05-09T00:12:23.054594740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-17,Uid:e4e59234c3093c76c7f2cb7bf3cfed26,Namespace:kube-system,Attempt:0,}"
May 9 00:12:23.064751 containerd[1898]: time="2025-05-09T00:12:23.064706185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-17,Uid:ac5486e862ce0298641057b70c6f16f4,Namespace:kube-system,Attempt:0,}"
May 9 00:12:23.070034 containerd[1898]: time="2025-05-09T00:12:23.069995590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-17,Uid:aba5d8c6014c77edabda99e72e556e00,Namespace:kube-system,Attempt:0,}"
May 9 00:12:23.184844 kubelet[2831]: E0509 00:12:23.184795 2831 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-17?timeout=10s\": dial tcp 172.31.17.17:6443: connect: connection refused" interval="800ms"
May 9 00:12:23.284055 kubelet[2831]: I0509 00:12:23.284022 2831 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-17"
May 9 00:12:23.284449 kubelet[2831]: E0509 00:12:23.284340 2831 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.17.17:6443/api/v1/nodes\": dial tcp 172.31.17.17:6443: connect: connection refused" node="ip-172-31-17-17"
May 9 00:12:23.504156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2117454578.mount: Deactivated successfully.
May 9 00:12:23.512605 containerd[1898]: time="2025-05-09T00:12:23.511800245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 9 00:12:23.516109 containerd[1898]: time="2025-05-09T00:12:23.516037024Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
May 9 00:12:23.517036 containerd[1898]: time="2025-05-09T00:12:23.516992657Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 9 00:12:23.518525 containerd[1898]: time="2025-05-09T00:12:23.518477875Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 9 00:12:23.519454 containerd[1898]: time="2025-05-09T00:12:23.519410924Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 9 00:12:23.520049 containerd[1898]: time="2025-05-09T00:12:23.519939930Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 9 00:12:23.521092 containerd[1898]: time="2025-05-09T00:12:23.521018716Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 9 00:12:23.523187 containerd[1898]: time="2025-05-09T00:12:23.522338114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 9 00:12:23.523882 containerd[1898]: time="2025-05-09T00:12:23.523854962Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 453.768939ms"
May 9 00:12:23.524813 containerd[1898]: time="2025-05-09T00:12:23.524781064Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 470.086821ms"
May 9 00:12:23.534293 containerd[1898]: time="2025-05-09T00:12:23.534226629Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 469.421123ms"
May 9 00:12:23.582777 kubelet[2831]: W0509 00:12:23.581601 2831 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.17.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.17:6443: connect: connection refused
May 9 00:12:23.582777 kubelet[2831]: E0509 00:12:23.581676 2831 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.17.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.17:6443: connect: connection refused
May 9 00:12:23.707520 containerd[1898]: time="2025-05-09T00:12:23.707408523Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 9 00:12:23.707520 containerd[1898]: time="2025-05-09T00:12:23.707469300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 9 00:12:23.707520 containerd[1898]: time="2025-05-09T00:12:23.707485121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:12:23.708258 containerd[1898]: time="2025-05-09T00:12:23.707567153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:12:23.708258 containerd[1898]: time="2025-05-09T00:12:23.707398291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 9 00:12:23.708573 containerd[1898]: time="2025-05-09T00:12:23.707922716Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 9 00:12:23.708573 containerd[1898]: time="2025-05-09T00:12:23.707943306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:12:23.709474 containerd[1898]: time="2025-05-09T00:12:23.709112149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:12:23.715936 containerd[1898]: time="2025-05-09T00:12:23.715614805Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 9 00:12:23.715936 containerd[1898]: time="2025-05-09T00:12:23.715669695Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 9 00:12:23.715936 containerd[1898]: time="2025-05-09T00:12:23.715693200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:12:23.715936 containerd[1898]: time="2025-05-09T00:12:23.715772246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:12:23.733658 systemd[1]: Started cri-containerd-30d3393cd719107b466ae9fdba0c473fd4e3394470d48df86a2138f0791c8b5c.scope - libcontainer container 30d3393cd719107b466ae9fdba0c473fd4e3394470d48df86a2138f0791c8b5c.
May 9 00:12:23.750364 systemd[1]: Started cri-containerd-c291050ae6a5148c905fb82b014c62a4cfc314bccbb87feb3011ddc947ec0dfa.scope - libcontainer container c291050ae6a5148c905fb82b014c62a4cfc314bccbb87feb3011ddc947ec0dfa.
May 9 00:12:23.759359 systemd[1]: Started cri-containerd-f10468699a2c5cc67809f08138561b1efc5cb37172896323ce8a5df32d114d1f.scope - libcontainer container f10468699a2c5cc67809f08138561b1efc5cb37172896323ce8a5df32d114d1f.
May 9 00:12:23.819053 containerd[1898]: time="2025-05-09T00:12:23.818542271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-17,Uid:ac5486e862ce0298641057b70c6f16f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"f10468699a2c5cc67809f08138561b1efc5cb37172896323ce8a5df32d114d1f\""
May 9 00:12:23.829476 containerd[1898]: time="2025-05-09T00:12:23.829388053Z" level=info msg="CreateContainer within sandbox \"f10468699a2c5cc67809f08138561b1efc5cb37172896323ce8a5df32d114d1f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 9 00:12:23.839483 containerd[1898]: time="2025-05-09T00:12:23.839401077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-17,Uid:e4e59234c3093c76c7f2cb7bf3cfed26,Namespace:kube-system,Attempt:0,} returns sandbox id \"c291050ae6a5148c905fb82b014c62a4cfc314bccbb87feb3011ddc947ec0dfa\""
May 9 00:12:23.848494 containerd[1898]: time="2025-05-09T00:12:23.848445913Z" level=info msg="CreateContainer within sandbox \"c291050ae6a5148c905fb82b014c62a4cfc314bccbb87feb3011ddc947ec0dfa\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 9 00:12:23.849083 containerd[1898]: time="2025-05-09T00:12:23.848819573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-17,Uid:aba5d8c6014c77edabda99e72e556e00,Namespace:kube-system,Attempt:0,} returns sandbox id \"30d3393cd719107b466ae9fdba0c473fd4e3394470d48df86a2138f0791c8b5c\""
May 9 00:12:23.862263 kubelet[2831]: W0509 00:12:23.862012 2831 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.17.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.17:6443: connect: connection refused
May 9 00:12:23.862263 kubelet[2831]: E0509 00:12:23.862102 2831 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.17.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.17:6443: connect: connection refused
May 9 00:12:23.864894 containerd[1898]: time="2025-05-09T00:12:23.864756601Z" level=info msg="CreateContainer within sandbox \"30d3393cd719107b466ae9fdba0c473fd4e3394470d48df86a2138f0791c8b5c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 9 00:12:23.869901 kubelet[2831]: W0509 00:12:23.869831 2831 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.17.17:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.17.17:6443: connect: connection refused
May 9 00:12:23.869901 kubelet[2831]: E0509 00:12:23.869904 2831 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.17.17:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.17.17:6443: connect: connection refused
May 9 00:12:23.893151 containerd[1898]: time="2025-05-09T00:12:23.893102598Z" level=info msg="CreateContainer within sandbox \"c291050ae6a5148c905fb82b014c62a4cfc314bccbb87feb3011ddc947ec0dfa\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"42677d0794d8219aaffc686c0f5b835d1fe1259e7f49d504d8f46902532d3816\""
May 9 00:12:23.893726 containerd[1898]: time="2025-05-09T00:12:23.893683326Z" level=info msg="StartContainer for \"42677d0794d8219aaffc686c0f5b835d1fe1259e7f49d504d8f46902532d3816\""
May 9 00:12:23.900133 containerd[1898]: time="2025-05-09T00:12:23.900004945Z" level=info msg="CreateContainer within sandbox \"f10468699a2c5cc67809f08138561b1efc5cb37172896323ce8a5df32d114d1f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ea911f18bddbe7e1c57b17b76cedc3a8b49f66319787abc1b37f62a3239098c6\""
May 9 00:12:23.900474 containerd[1898]: time="2025-05-09T00:12:23.900397877Z" level=info msg="CreateContainer within sandbox \"30d3393cd719107b466ae9fdba0c473fd4e3394470d48df86a2138f0791c8b5c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"715332a1ca124db750d776f880e1946f5302c6c059ce5bf1de010fea5beceefe\""
May 9 00:12:23.900785 containerd[1898]: time="2025-05-09T00:12:23.900739864Z" level=info msg="StartContainer for \"ea911f18bddbe7e1c57b17b76cedc3a8b49f66319787abc1b37f62a3239098c6\""
May 9 00:12:23.901216 containerd[1898]: time="2025-05-09T00:12:23.900987301Z" level=info msg="StartContainer for \"715332a1ca124db750d776f880e1946f5302c6c059ce5bf1de010fea5beceefe\""
May 9 00:12:23.934019 systemd[1]: Started cri-containerd-42677d0794d8219aaffc686c0f5b835d1fe1259e7f49d504d8f46902532d3816.scope - libcontainer container 42677d0794d8219aaffc686c0f5b835d1fe1259e7f49d504d8f46902532d3816.
May 9 00:12:23.956481 systemd[1]: Started cri-containerd-ea911f18bddbe7e1c57b17b76cedc3a8b49f66319787abc1b37f62a3239098c6.scope - libcontainer container ea911f18bddbe7e1c57b17b76cedc3a8b49f66319787abc1b37f62a3239098c6.
May 9 00:12:23.975988 kubelet[2831]: W0509 00:12:23.975764 2831 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.17.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-17&limit=500&resourceVersion=0": dial tcp 172.31.17.17:6443: connect: connection refused
May 9 00:12:23.976358 kubelet[2831]: E0509 00:12:23.976321 2831 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.17.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-17&limit=500&resourceVersion=0": dial tcp 172.31.17.17:6443: connect: connection refused
May 9 00:12:23.978070 systemd[1]: Started cri-containerd-715332a1ca124db750d776f880e1946f5302c6c059ce5bf1de010fea5beceefe.scope - libcontainer container 715332a1ca124db750d776f880e1946f5302c6c059ce5bf1de010fea5beceefe.
May 9 00:12:23.987424 kubelet[2831]: E0509 00:12:23.986571 2831 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-17?timeout=10s\": dial tcp 172.31.17.17:6443: connect: connection refused" interval="1.6s"
May 9 00:12:24.023773 containerd[1898]: time="2025-05-09T00:12:24.023454618Z" level=info msg="StartContainer for \"42677d0794d8219aaffc686c0f5b835d1fe1259e7f49d504d8f46902532d3816\" returns successfully"
May 9 00:12:24.064911 containerd[1898]: time="2025-05-09T00:12:24.064752722Z" level=info msg="StartContainer for \"ea911f18bddbe7e1c57b17b76cedc3a8b49f66319787abc1b37f62a3239098c6\" returns successfully"
May 9 00:12:24.077684 containerd[1898]: time="2025-05-09T00:12:24.077643754Z" level=info msg="StartContainer for \"715332a1ca124db750d776f880e1946f5302c6c059ce5bf1de010fea5beceefe\" returns successfully"
May 9 00:12:24.087764 kubelet[2831]: I0509 00:12:24.087407 2831 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-17"
May 9 00:12:24.088819 kubelet[2831]: E0509 00:12:24.088784 2831 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.17.17:6443/api/v1/nodes\": dial tcp 172.31.17.17:6443: connect: connection refused" node="ip-172-31-17-17"
May 9 00:12:24.556157 kubelet[2831]: E0509 00:12:24.556113 2831 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.17.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.17.17:6443: connect: connection refused
May 9 00:12:25.063082 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
May 9 00:12:25.691986 kubelet[2831]: I0509 00:12:25.691846 2831 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-17"
May 9 00:12:26.559724 kubelet[2831]: I0509 00:12:26.559677 2831 apiserver.go:52] "Watching apiserver"
May 9 00:12:26.633513 kubelet[2831]: E0509 00:12:26.633451 2831 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-17-17\" not found" node="ip-172-31-17-17"
May 9 00:12:26.679130 kubelet[2831]: I0509 00:12:26.679009 2831 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 9 00:12:26.752840 kubelet[2831]: I0509 00:12:26.752796 2831 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-17-17"
May 9 00:12:28.586646 systemd[1]: Reloading requested from client PID 3110 ('systemctl') (unit session-7.scope)...
May 9 00:12:28.586665 systemd[1]: Reloading...
May 9 00:12:28.708211 zram_generator::config[3149]: No configuration found.
May 9 00:12:28.853551 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 00:12:28.956128 systemd[1]: Reloading finished in 368 ms.
May 9 00:12:28.993955 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 00:12:29.004627 systemd[1]: kubelet.service: Deactivated successfully.
May 9 00:12:29.004937 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 00:12:29.013554 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 00:12:29.249748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 00:12:29.260630 (kubelet)[3210]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 9 00:12:29.348469 kubelet[3210]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 9 00:12:29.348469 kubelet[3210]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 9 00:12:29.348469 kubelet[3210]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 9 00:12:29.349063 kubelet[3210]: I0509 00:12:29.348549 3210 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 9 00:12:29.355002 kubelet[3210]: I0509 00:12:29.354952 3210 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 9 00:12:29.355002 kubelet[3210]: I0509 00:12:29.354981 3210 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 9 00:12:29.355812 kubelet[3210]: I0509 00:12:29.355787 3210 server.go:927] "Client rotation is on, will bootstrap in background"
May 9 00:12:29.361476 kubelet[3210]: I0509 00:12:29.361357 3210 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 9 00:12:29.365775 kubelet[3210]: I0509 00:12:29.365744 3210 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 9 00:12:29.375526 kubelet[3210]: I0509 00:12:29.375485 3210 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 9 00:12:29.375929 kubelet[3210]: I0509 00:12:29.375884 3210 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 9 00:12:29.376138 kubelet[3210]: I0509 00:12:29.375920 3210 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-17","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 9 00:12:29.376301 kubelet[3210]: I0509 00:12:29.376152 3210 topology_manager.go:138] "Creating topology manager with none policy"
May 9 00:12:29.376301 kubelet[3210]: I0509 00:12:29.376202 3210 container_manager_linux.go:301] "Creating device plugin manager"
May 9 00:12:29.376301 kubelet[3210]: I0509 00:12:29.376260 3210 state_mem.go:36] "Initialized new in-memory state store"
May 9 00:12:29.376433 kubelet[3210]: I0509 00:12:29.376390 3210 kubelet.go:400] "Attempting to sync node with API server"
May 9 00:12:29.376433 kubelet[3210]: I0509 00:12:29.376409 3210 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 9 00:12:29.379185 kubelet[3210]: I0509 00:12:29.379145 3210 kubelet.go:312] "Adding apiserver pod source"
May 9 00:12:29.379310 kubelet[3210]: I0509 00:12:29.379195 3210 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 9 00:12:29.390205 kubelet[3210]: I0509 00:12:29.389710 3210 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
May 9 00:12:29.392897 kubelet[3210]: I0509 00:12:29.392861 3210 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 9 00:12:29.394033 kubelet[3210]: I0509 00:12:29.393496 3210 server.go:1264] "Started kubelet"
May 9 00:12:29.401531 kubelet[3210]: I0509 00:12:29.401467 3210 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 9 00:12:29.401841 kubelet[3210]: I0509 00:12:29.401818 3210 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 9 00:12:29.403034 kubelet[3210]: I0509 00:12:29.403012 3210 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 9 00:12:29.405177 kubelet[3210]: I0509 00:12:29.401870 3210 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 9 00:12:29.411691 kubelet[3210]: I0509 00:12:29.411133 3210 server.go:455] "Adding debug handlers to kubelet server"
May 9 00:12:29.411691 kubelet[3210]: I0509 00:12:29.411386 3210 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 9 00:12:29.416045 kubelet[3210]: I0509 00:12:29.415950 3210 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 9 00:12:29.423108 kubelet[3210]: I0509 00:12:29.422597 3210 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 9 00:12:29.424669 kubelet[3210]: E0509 00:12:29.424637 3210 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 9 00:12:29.425829 kubelet[3210]: I0509 00:12:29.425592 3210 reconciler.go:26] "Reconciler: start to sync state"
May 9 00:12:29.435226 kubelet[3210]: I0509 00:12:29.435195 3210 factory.go:221] Registration of the containerd container factory successfully
May 9 00:12:29.435407 kubelet[3210]: I0509 00:12:29.435393 3210 factory.go:221] Registration of the systemd container factory successfully
May 9 00:12:29.441504 kubelet[3210]: I0509 00:12:29.441435 3210 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 9 00:12:29.446273 kubelet[3210]: I0509 00:12:29.445676 3210 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 9 00:12:29.446273 kubelet[3210]: I0509 00:12:29.445721 3210 status_manager.go:217] "Starting to sync pod status with apiserver"
May 9 00:12:29.446273 kubelet[3210]: I0509 00:12:29.445744 3210 kubelet.go:2337] "Starting kubelet main sync loop"
May 9 00:12:29.446273 kubelet[3210]: E0509 00:12:29.445796 3210 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 9 00:12:29.503223 kubelet[3210]: I0509 00:12:29.502335 3210 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 9 00:12:29.503223 kubelet[3210]: I0509 00:12:29.502378 3210 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 9 00:12:29.503223 kubelet[3210]: I0509 00:12:29.502401 3210 state_mem.go:36] "Initialized new in-memory state store"
May 9 00:12:29.503223 kubelet[3210]: I0509 00:12:29.502595 3210 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 9 00:12:29.503223 kubelet[3210]: I0509 00:12:29.502608 3210 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 9 00:12:29.503223 kubelet[3210]: I0509 00:12:29.502632 3210 policy_none.go:49] "None policy: Start"
May 9 00:12:29.504941 kubelet[3210]: I0509 00:12:29.504919 3210 memory_manager.go:170] "Starting memorymanager" policy="None"
May 9 00:12:29.505042 kubelet[3210]: I0509 00:12:29.504948 3210 state_mem.go:35] "Initializing new in-memory state store"
May 9 00:12:29.505148 kubelet[3210]: I0509 00:12:29.505130 3210 state_mem.go:75] "Updated machine memory state"
May 9 00:12:29.509999 kubelet[3210]: I0509 00:12:29.509815 3210 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 9 00:12:29.510107 kubelet[3210]: I0509 00:12:29.510004 3210 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 9 00:12:29.510195 kubelet[3210]: I0509 00:12:29.510108 3210 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 9 00:12:29.516969 kubelet[3210]: I0509 00:12:29.516738 3210 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-17"
May 9 00:12:29.526980 kubelet[3210]: I0509 00:12:29.526953 3210 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-17-17"
May 9 00:12:29.527299 kubelet[3210]: I0509 00:12:29.527288 3210 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-17-17"
May 9 00:12:29.546754 kubelet[3210]: I0509 00:12:29.546331 3210 topology_manager.go:215] "Topology Admit Handler" podUID="e4e59234c3093c76c7f2cb7bf3cfed26" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-17-17"
May 9 00:12:29.546754 kubelet[3210]: I0509 00:12:29.546435 3210 topology_manager.go:215] "Topology Admit Handler" podUID="ac5486e862ce0298641057b70c6f16f4" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-17-17"
May 9 00:12:29.546754 kubelet[3210]: I0509 00:12:29.546514 3210 topology_manager.go:215] "Topology Admit Handler" podUID="aba5d8c6014c77edabda99e72e556e00" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-17-17"
May 9 00:12:29.559077 kubelet[3210]: E0509 00:12:29.558943 3210 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-17-17\" already exists" pod="kube-system/kube-apiserver-ip-172-31-17-17"
May 9 00:12:29.617582 sudo[3244]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
May 9 00:12:29.618373 sudo[3244]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
May 9 00:12:29.627300 kubelet[3210]: I0509 00:12:29.627251 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ac5486e862ce0298641057b70c6f16f4-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-17\" (UID: \"ac5486e862ce0298641057b70c6f16f4\") " pod="kube-system/kube-controller-manager-ip-172-31-17-17"
May 9 00:12:29.627300 kubelet[3210]: I0509 00:12:29.627298 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aba5d8c6014c77edabda99e72e556e00-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-17\" (UID: \"aba5d8c6014c77edabda99e72e556e00\") " pod="kube-system/kube-scheduler-ip-172-31-17-17"
May 9 00:12:29.627451 kubelet[3210]: I0509 00:12:29.627322 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ac5486e862ce0298641057b70c6f16f4-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-17\" (UID: \"ac5486e862ce0298641057b70c6f16f4\") " pod="kube-system/kube-controller-manager-ip-172-31-17-17"
May 9 00:12:29.627451 kubelet[3210]: I0509 00:12:29.627337 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e4e59234c3093c76c7f2cb7bf3cfed26-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-17\" (UID: \"e4e59234c3093c76c7f2cb7bf3cfed26\") " pod="kube-system/kube-apiserver-ip-172-31-17-17"
May 9 00:12:29.627451 kubelet[3210]: I0509 00:12:29.627356 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e4e59234c3093c76c7f2cb7bf3cfed26-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-17\" (UID: \"e4e59234c3093c76c7f2cb7bf3cfed26\") " pod="kube-system/kube-apiserver-ip-172-31-17-17"
May 9 00:12:29.627451 kubelet[3210]: I0509 00:12:29.627373 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ac5486e862ce0298641057b70c6f16f4-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-17\" (UID: \"ac5486e862ce0298641057b70c6f16f4\") " pod="kube-system/kube-controller-manager-ip-172-31-17-17"
May 9 00:12:29.627451 kubelet[3210]: I0509 00:12:29.627389 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ac5486e862ce0298641057b70c6f16f4-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-17\" (UID: \"ac5486e862ce0298641057b70c6f16f4\") " pod="kube-system/kube-controller-manager-ip-172-31-17-17"
May 9 00:12:29.627755 kubelet[3210]: I0509 00:12:29.627404 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ac5486e862ce0298641057b70c6f16f4-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-17\" (UID: \"ac5486e862ce0298641057b70c6f16f4\") " pod="kube-system/kube-controller-manager-ip-172-31-17-17"
May 9 00:12:29.627755 kubelet[3210]: I0509 00:12:29.627423 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e4e59234c3093c76c7f2cb7bf3cfed26-ca-certs\") pod \"kube-apiserver-ip-172-31-17-17\" (UID: \"e4e59234c3093c76c7f2cb7bf3cfed26\") " pod="kube-system/kube-apiserver-ip-172-31-17-17"
May 9 00:12:30.388478 kubelet[3210]: I0509 00:12:30.388006 3210 apiserver.go:52] "Watching apiserver"
May 9 00:12:30.416493 kubelet[3210]: I0509 00:12:30.416426 3210 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 9 00:12:30.452335 sudo[3244]: pam_unix(sudo:session): session closed for user root
May 9 00:12:30.501026 kubelet[3210]: E0509 00:12:30.500259 3210 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-17-17\" already exists" pod="kube-system/kube-apiserver-ip-172-31-17-17"
May 9 00:12:30.540886 kubelet[3210]: I0509 00:12:30.540819 3210
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-17-17" podStartSLOduration=1.5407978949999999 podStartE2EDuration="1.540797895s" podCreationTimestamp="2025-05-09 00:12:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:12:30.521720376 +0000 UTC m=+1.250833016" watchObservedRunningTime="2025-05-09 00:12:30.540797895 +0000 UTC m=+1.269910530" May 9 00:12:30.556420 kubelet[3210]: I0509 00:12:30.556194 3210 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-17-17" podStartSLOduration=2.5561573600000003 podStartE2EDuration="2.55615736s" podCreationTimestamp="2025-05-09 00:12:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:12:30.543397508 +0000 UTC m=+1.272510147" watchObservedRunningTime="2025-05-09 00:12:30.55615736 +0000 UTC m=+1.285269999" May 9 00:12:30.557687 kubelet[3210]: I0509 00:12:30.557614 3210 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-17-17" podStartSLOduration=1.557552439 podStartE2EDuration="1.557552439s" podCreationTimestamp="2025-05-09 00:12:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:12:30.557258165 +0000 UTC m=+1.286370795" watchObservedRunningTime="2025-05-09 00:12:30.557552439 +0000 UTC m=+1.286665078" May 9 00:12:32.515394 sudo[2223]: pam_unix(sudo:session): session closed for user root May 9 00:12:32.538722 sshd[2222]: Connection closed by 139.178.68.195 port 58208 May 9 00:12:32.540287 sshd-session[2220]: pam_unix(sshd:session): session closed for user core May 9 00:12:32.544463 systemd-logind[1882]: Session 7 logged out. 
Waiting for processes to exit. May 9 00:12:32.544635 systemd[1]: sshd@6-172.31.17.17:22-139.178.68.195:58208.service: Deactivated successfully. May 9 00:12:32.546437 systemd[1]: session-7.scope: Deactivated successfully. May 9 00:12:32.546596 systemd[1]: session-7.scope: Consumed 5.756s CPU time, 184.4M memory peak, 0B memory swap peak. May 9 00:12:32.547447 systemd-logind[1882]: Removed session 7. May 9 00:12:39.247568 update_engine[1884]: I20250509 00:12:39.247494 1884 update_attempter.cc:509] Updating boot flags... May 9 00:12:39.339037 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3295) May 9 00:12:43.294247 kubelet[3210]: I0509 00:12:43.294217 3210 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 9 00:12:43.296889 containerd[1898]: time="2025-05-09T00:12:43.296846583Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 9 00:12:43.297921 kubelet[3210]: I0509 00:12:43.297897 3210 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 9 00:12:43.380128 kubelet[3210]: I0509 00:12:43.380080 3210 topology_manager.go:215] "Topology Admit Handler" podUID="8814db05-ebc5-44b5-b235-cd6ff9228c57" podNamespace="kube-system" podName="cilium-jwvs5" May 9 00:12:43.391187 systemd[1]: Created slice kubepods-burstable-pod8814db05_ebc5_44b5_b235_cd6ff9228c57.slice - libcontainer container kubepods-burstable-pod8814db05_ebc5_44b5_b235_cd6ff9228c57.slice. May 9 00:12:43.396952 kubelet[3210]: I0509 00:12:43.396909 3210 topology_manager.go:215] "Topology Admit Handler" podUID="bf7d9115-6e7d-4977-b891-d71d1df91d2e" podNamespace="kube-system" podName="kube-proxy-prggj" May 9 00:12:43.409931 systemd[1]: Created slice kubepods-besteffort-podbf7d9115_6e7d_4977_b891_d71d1df91d2e.slice - libcontainer container kubepods-besteffort-podbf7d9115_6e7d_4977_b891_d71d1df91d2e.slice. 
May 9 00:12:43.436736 kubelet[3210]: I0509 00:12:43.436698 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8814db05-ebc5-44b5-b235-cd6ff9228c57-cni-path\") pod \"cilium-jwvs5\" (UID: \"8814db05-ebc5-44b5-b235-cd6ff9228c57\") " pod="kube-system/cilium-jwvs5" May 9 00:12:43.437564 kubelet[3210]: I0509 00:12:43.437365 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8814db05-ebc5-44b5-b235-cd6ff9228c57-etc-cni-netd\") pod \"cilium-jwvs5\" (UID: \"8814db05-ebc5-44b5-b235-cd6ff9228c57\") " pod="kube-system/cilium-jwvs5" May 9 00:12:43.437564 kubelet[3210]: I0509 00:12:43.437445 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8814db05-ebc5-44b5-b235-cd6ff9228c57-host-proc-sys-net\") pod \"cilium-jwvs5\" (UID: \"8814db05-ebc5-44b5-b235-cd6ff9228c57\") " pod="kube-system/cilium-jwvs5" May 9 00:12:43.437564 kubelet[3210]: I0509 00:12:43.437478 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcsbd\" (UniqueName: \"kubernetes.io/projected/8814db05-ebc5-44b5-b235-cd6ff9228c57-kube-api-access-pcsbd\") pod \"cilium-jwvs5\" (UID: \"8814db05-ebc5-44b5-b235-cd6ff9228c57\") " pod="kube-system/cilium-jwvs5" May 9 00:12:43.437564 kubelet[3210]: I0509 00:12:43.437532 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf7d9115-6e7d-4977-b891-d71d1df91d2e-xtables-lock\") pod \"kube-proxy-prggj\" (UID: \"bf7d9115-6e7d-4977-b891-d71d1df91d2e\") " pod="kube-system/kube-proxy-prggj" May 9 00:12:43.439284 kubelet[3210]: I0509 00:12:43.437855 3210 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8814db05-ebc5-44b5-b235-cd6ff9228c57-xtables-lock\") pod \"cilium-jwvs5\" (UID: \"8814db05-ebc5-44b5-b235-cd6ff9228c57\") " pod="kube-system/cilium-jwvs5" May 9 00:12:43.439284 kubelet[3210]: I0509 00:12:43.437935 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8814db05-ebc5-44b5-b235-cd6ff9228c57-lib-modules\") pod \"cilium-jwvs5\" (UID: \"8814db05-ebc5-44b5-b235-cd6ff9228c57\") " pod="kube-system/cilium-jwvs5" May 9 00:12:43.439284 kubelet[3210]: I0509 00:12:43.437961 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8814db05-ebc5-44b5-b235-cd6ff9228c57-clustermesh-secrets\") pod \"cilium-jwvs5\" (UID: \"8814db05-ebc5-44b5-b235-cd6ff9228c57\") " pod="kube-system/cilium-jwvs5" May 9 00:12:43.439284 kubelet[3210]: I0509 00:12:43.438065 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bf7d9115-6e7d-4977-b891-d71d1df91d2e-kube-proxy\") pod \"kube-proxy-prggj\" (UID: \"bf7d9115-6e7d-4977-b891-d71d1df91d2e\") " pod="kube-system/kube-proxy-prggj" May 9 00:12:43.439284 kubelet[3210]: I0509 00:12:43.438090 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf7d9115-6e7d-4977-b891-d71d1df91d2e-lib-modules\") pod \"kube-proxy-prggj\" (UID: \"bf7d9115-6e7d-4977-b891-d71d1df91d2e\") " pod="kube-system/kube-proxy-prggj" May 9 00:12:43.439284 kubelet[3210]: I0509 00:12:43.438116 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/8814db05-ebc5-44b5-b235-cd6ff9228c57-hubble-tls\") pod \"cilium-jwvs5\" (UID: \"8814db05-ebc5-44b5-b235-cd6ff9228c57\") " pod="kube-system/cilium-jwvs5" May 9 00:12:43.439632 kubelet[3210]: I0509 00:12:43.438137 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8814db05-ebc5-44b5-b235-cd6ff9228c57-bpf-maps\") pod \"cilium-jwvs5\" (UID: \"8814db05-ebc5-44b5-b235-cd6ff9228c57\") " pod="kube-system/cilium-jwvs5" May 9 00:12:43.439632 kubelet[3210]: I0509 00:12:43.438177 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8814db05-ebc5-44b5-b235-cd6ff9228c57-cilium-config-path\") pod \"cilium-jwvs5\" (UID: \"8814db05-ebc5-44b5-b235-cd6ff9228c57\") " pod="kube-system/cilium-jwvs5" May 9 00:12:43.439632 kubelet[3210]: I0509 00:12:43.438200 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8814db05-ebc5-44b5-b235-cd6ff9228c57-host-proc-sys-kernel\") pod \"cilium-jwvs5\" (UID: \"8814db05-ebc5-44b5-b235-cd6ff9228c57\") " pod="kube-system/cilium-jwvs5" May 9 00:12:43.439632 kubelet[3210]: I0509 00:12:43.438238 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8814db05-ebc5-44b5-b235-cd6ff9228c57-cilium-run\") pod \"cilium-jwvs5\" (UID: \"8814db05-ebc5-44b5-b235-cd6ff9228c57\") " pod="kube-system/cilium-jwvs5" May 9 00:12:43.439632 kubelet[3210]: I0509 00:12:43.438262 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25dkc\" (UniqueName: \"kubernetes.io/projected/bf7d9115-6e7d-4977-b891-d71d1df91d2e-kube-api-access-25dkc\") pod \"kube-proxy-prggj\" (UID: 
\"bf7d9115-6e7d-4977-b891-d71d1df91d2e\") " pod="kube-system/kube-proxy-prggj" May 9 00:12:43.439632 kubelet[3210]: I0509 00:12:43.438295 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8814db05-ebc5-44b5-b235-cd6ff9228c57-hostproc\") pod \"cilium-jwvs5\" (UID: \"8814db05-ebc5-44b5-b235-cd6ff9228c57\") " pod="kube-system/cilium-jwvs5" May 9 00:12:43.439886 kubelet[3210]: I0509 00:12:43.438317 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8814db05-ebc5-44b5-b235-cd6ff9228c57-cilium-cgroup\") pod \"cilium-jwvs5\" (UID: \"8814db05-ebc5-44b5-b235-cd6ff9228c57\") " pod="kube-system/cilium-jwvs5" May 9 00:12:43.529796 kubelet[3210]: I0509 00:12:43.529202 3210 topology_manager.go:215] "Topology Admit Handler" podUID="268f32c3-73ca-4c84-b7d9-51f4983af55d" podNamespace="kube-system" podName="cilium-operator-599987898-tc2wj" May 9 00:12:43.538073 systemd[1]: Created slice kubepods-besteffort-pod268f32c3_73ca_4c84_b7d9_51f4983af55d.slice - libcontainer container kubepods-besteffort-pod268f32c3_73ca_4c84_b7d9_51f4983af55d.slice. 
May 9 00:12:43.640002 kubelet[3210]: I0509 00:12:43.639543 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6nmh\" (UniqueName: \"kubernetes.io/projected/268f32c3-73ca-4c84-b7d9-51f4983af55d-kube-api-access-s6nmh\") pod \"cilium-operator-599987898-tc2wj\" (UID: \"268f32c3-73ca-4c84-b7d9-51f4983af55d\") " pod="kube-system/cilium-operator-599987898-tc2wj" May 9 00:12:43.640002 kubelet[3210]: I0509 00:12:43.639786 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/268f32c3-73ca-4c84-b7d9-51f4983af55d-cilium-config-path\") pod \"cilium-operator-599987898-tc2wj\" (UID: \"268f32c3-73ca-4c84-b7d9-51f4983af55d\") " pod="kube-system/cilium-operator-599987898-tc2wj" May 9 00:12:43.702067 containerd[1898]: time="2025-05-09T00:12:43.702019237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jwvs5,Uid:8814db05-ebc5-44b5-b235-cd6ff9228c57,Namespace:kube-system,Attempt:0,}" May 9 00:12:43.721095 containerd[1898]: time="2025-05-09T00:12:43.720942799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-prggj,Uid:bf7d9115-6e7d-4977-b891-d71d1df91d2e,Namespace:kube-system,Attempt:0,}" May 9 00:12:43.734886 containerd[1898]: time="2025-05-09T00:12:43.734285493Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:12:43.734886 containerd[1898]: time="2025-05-09T00:12:43.734344935Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:12:43.734886 containerd[1898]: time="2025-05-09T00:12:43.734360232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:12:43.734886 containerd[1898]: time="2025-05-09T00:12:43.734442677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:12:43.766063 containerd[1898]: time="2025-05-09T00:12:43.765381288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:12:43.766394 systemd[1]: Started cri-containerd-ce60ae90a2efa3cc12a3db0147cc3a54ad38ea16e815027868572321dce84bce.scope - libcontainer container ce60ae90a2efa3cc12a3db0147cc3a54ad38ea16e815027868572321dce84bce. May 9 00:12:43.767411 containerd[1898]: time="2025-05-09T00:12:43.767123104Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:12:43.767411 containerd[1898]: time="2025-05-09T00:12:43.767310724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:12:43.767959 containerd[1898]: time="2025-05-09T00:12:43.767600510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:12:43.791411 systemd[1]: Started cri-containerd-934f21943b3e11cfc1b067f7acf08843cbe58732dfed2ea5eb4228643e339004.scope - libcontainer container 934f21943b3e11cfc1b067f7acf08843cbe58732dfed2ea5eb4228643e339004. 
May 9 00:12:43.815284 containerd[1898]: time="2025-05-09T00:12:43.815125730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jwvs5,Uid:8814db05-ebc5-44b5-b235-cd6ff9228c57,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce60ae90a2efa3cc12a3db0147cc3a54ad38ea16e815027868572321dce84bce\"" May 9 00:12:43.817998 containerd[1898]: time="2025-05-09T00:12:43.817608586Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 9 00:12:43.852790 containerd[1898]: time="2025-05-09T00:12:43.851931557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-tc2wj,Uid:268f32c3-73ca-4c84-b7d9-51f4983af55d,Namespace:kube-system,Attempt:0,}" May 9 00:12:43.855329 containerd[1898]: time="2025-05-09T00:12:43.855189534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-prggj,Uid:bf7d9115-6e7d-4977-b891-d71d1df91d2e,Namespace:kube-system,Attempt:0,} returns sandbox id \"934f21943b3e11cfc1b067f7acf08843cbe58732dfed2ea5eb4228643e339004\"" May 9 00:12:43.864759 containerd[1898]: time="2025-05-09T00:12:43.864709940Z" level=info msg="CreateContainer within sandbox \"934f21943b3e11cfc1b067f7acf08843cbe58732dfed2ea5eb4228643e339004\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 9 00:12:43.889936 containerd[1898]: time="2025-05-09T00:12:43.889713670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:12:43.889936 containerd[1898]: time="2025-05-09T00:12:43.889768879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:12:43.889936 containerd[1898]: time="2025-05-09T00:12:43.889784260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:12:43.889936 containerd[1898]: time="2025-05-09T00:12:43.889882540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:12:43.911005 containerd[1898]: time="2025-05-09T00:12:43.910885044Z" level=info msg="CreateContainer within sandbox \"934f21943b3e11cfc1b067f7acf08843cbe58732dfed2ea5eb4228643e339004\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1c3ca91c8f105750f026f934a5498542223e5e12062c0ebd9d204cc1a1b8eb6a\"" May 9 00:12:43.913179 containerd[1898]: time="2025-05-09T00:12:43.912039768Z" level=info msg="StartContainer for \"1c3ca91c8f105750f026f934a5498542223e5e12062c0ebd9d204cc1a1b8eb6a\"" May 9 00:12:43.913550 systemd[1]: Started cri-containerd-b275d77424b117d30426eb52189e44f654c594dc81e8fe21aa7772445b8152df.scope - libcontainer container b275d77424b117d30426eb52189e44f654c594dc81e8fe21aa7772445b8152df. May 9 00:12:43.954405 systemd[1]: Started cri-containerd-1c3ca91c8f105750f026f934a5498542223e5e12062c0ebd9d204cc1a1b8eb6a.scope - libcontainer container 1c3ca91c8f105750f026f934a5498542223e5e12062c0ebd9d204cc1a1b8eb6a. May 9 00:12:43.992563 containerd[1898]: time="2025-05-09T00:12:43.992423711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-tc2wj,Uid:268f32c3-73ca-4c84-b7d9-51f4983af55d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b275d77424b117d30426eb52189e44f654c594dc81e8fe21aa7772445b8152df\"" May 9 00:12:44.013521 containerd[1898]: time="2025-05-09T00:12:44.013474012Z" level=info msg="StartContainer for \"1c3ca91c8f105750f026f934a5498542223e5e12062c0ebd9d204cc1a1b8eb6a\" returns successfully" May 9 00:12:55.115812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4030733574.mount: Deactivated successfully. 
May 9 00:12:57.715484 containerd[1898]: time="2025-05-09T00:12:57.715419782Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:12:57.719201 containerd[1898]: time="2025-05-09T00:12:57.717647492Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 9 00:12:57.719201 containerd[1898]: time="2025-05-09T00:12:57.717749327Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:12:57.720255 containerd[1898]: time="2025-05-09T00:12:57.720220098Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 13.902565673s" May 9 00:12:57.720255 containerd[1898]: time="2025-05-09T00:12:57.720255472Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 9 00:12:57.721481 containerd[1898]: time="2025-05-09T00:12:57.721300840Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 9 00:12:57.726978 containerd[1898]: time="2025-05-09T00:12:57.726888120Z" level=info msg="CreateContainer within sandbox \"ce60ae90a2efa3cc12a3db0147cc3a54ad38ea16e815027868572321dce84bce\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 9 00:12:57.856018 containerd[1898]: time="2025-05-09T00:12:57.855846148Z" level=info msg="CreateContainer within sandbox \"ce60ae90a2efa3cc12a3db0147cc3a54ad38ea16e815027868572321dce84bce\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"da95c5bb53fdf9d74299750a789a0681b1c92fb6bed5b222ad0abb148e029efe\"" May 9 00:12:57.912502 containerd[1898]: time="2025-05-09T00:12:57.912454734Z" level=info msg="StartContainer for \"da95c5bb53fdf9d74299750a789a0681b1c92fb6bed5b222ad0abb148e029efe\"" May 9 00:12:58.152464 systemd[1]: Started cri-containerd-da95c5bb53fdf9d74299750a789a0681b1c92fb6bed5b222ad0abb148e029efe.scope - libcontainer container da95c5bb53fdf9d74299750a789a0681b1c92fb6bed5b222ad0abb148e029efe. May 9 00:12:58.188070 containerd[1898]: time="2025-05-09T00:12:58.188028102Z" level=info msg="StartContainer for \"da95c5bb53fdf9d74299750a789a0681b1c92fb6bed5b222ad0abb148e029efe\" returns successfully" May 9 00:12:58.206949 systemd[1]: cri-containerd-da95c5bb53fdf9d74299750a789a0681b1c92fb6bed5b222ad0abb148e029efe.scope: Deactivated successfully. 
May 9 00:12:58.472855 containerd[1898]: time="2025-05-09T00:12:58.464032148Z" level=info msg="shim disconnected" id=da95c5bb53fdf9d74299750a789a0681b1c92fb6bed5b222ad0abb148e029efe namespace=k8s.io May 9 00:12:58.472855 containerd[1898]: time="2025-05-09T00:12:58.472765753Z" level=warning msg="cleaning up after shim disconnected" id=da95c5bb53fdf9d74299750a789a0681b1c92fb6bed5b222ad0abb148e029efe namespace=k8s.io May 9 00:12:58.472855 containerd[1898]: time="2025-05-09T00:12:58.472780584Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:12:58.564958 containerd[1898]: time="2025-05-09T00:12:58.564659502Z" level=info msg="CreateContainer within sandbox \"ce60ae90a2efa3cc12a3db0147cc3a54ad38ea16e815027868572321dce84bce\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 9 00:12:58.581911 containerd[1898]: time="2025-05-09T00:12:58.581083342Z" level=info msg="CreateContainer within sandbox \"ce60ae90a2efa3cc12a3db0147cc3a54ad38ea16e815027868572321dce84bce\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1f8cb6b94e8c18ec5724db87dd781a9f680170b3a3e3a837e31b218b0ae78a57\"" May 9 00:12:58.583428 containerd[1898]: time="2025-05-09T00:12:58.582857885Z" level=info msg="StartContainer for \"1f8cb6b94e8c18ec5724db87dd781a9f680170b3a3e3a837e31b218b0ae78a57\"" May 9 00:12:58.600648 kubelet[3210]: I0509 00:12:58.598251 3210 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-prggj" podStartSLOduration=15.598206903 podStartE2EDuration="15.598206903s" podCreationTimestamp="2025-05-09 00:12:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:12:44.539449916 +0000 UTC m=+15.268562556" watchObservedRunningTime="2025-05-09 00:12:58.598206903 +0000 UTC m=+29.327319543" May 9 00:12:58.627540 systemd[1]: Started 
cri-containerd-1f8cb6b94e8c18ec5724db87dd781a9f680170b3a3e3a837e31b218b0ae78a57.scope - libcontainer container 1f8cb6b94e8c18ec5724db87dd781a9f680170b3a3e3a837e31b218b0ae78a57. May 9 00:12:58.658138 containerd[1898]: time="2025-05-09T00:12:58.658087448Z" level=info msg="StartContainer for \"1f8cb6b94e8c18ec5724db87dd781a9f680170b3a3e3a837e31b218b0ae78a57\" returns successfully" May 9 00:12:58.678634 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 9 00:12:58.679005 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 9 00:12:58.679106 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 9 00:12:58.691395 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 9 00:12:58.691709 systemd[1]: cri-containerd-1f8cb6b94e8c18ec5724db87dd781a9f680170b3a3e3a837e31b218b0ae78a57.scope: Deactivated successfully. May 9 00:12:58.736824 containerd[1898]: time="2025-05-09T00:12:58.736196857Z" level=info msg="shim disconnected" id=1f8cb6b94e8c18ec5724db87dd781a9f680170b3a3e3a837e31b218b0ae78a57 namespace=k8s.io May 9 00:12:58.736824 containerd[1898]: time="2025-05-09T00:12:58.736727716Z" level=warning msg="cleaning up after shim disconnected" id=1f8cb6b94e8c18ec5724db87dd781a9f680170b3a3e3a837e31b218b0ae78a57 namespace=k8s.io May 9 00:12:58.736824 containerd[1898]: time="2025-05-09T00:12:58.736764344Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:12:58.764409 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 9 00:12:58.852111 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da95c5bb53fdf9d74299750a789a0681b1c92fb6bed5b222ad0abb148e029efe-rootfs.mount: Deactivated successfully. 
May 9 00:12:59.568989 containerd[1898]: time="2025-05-09T00:12:59.568941226Z" level=info msg="CreateContainer within sandbox \"ce60ae90a2efa3cc12a3db0147cc3a54ad38ea16e815027868572321dce84bce\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 9 00:12:59.597614 containerd[1898]: time="2025-05-09T00:12:59.597547654Z" level=info msg="CreateContainer within sandbox \"ce60ae90a2efa3cc12a3db0147cc3a54ad38ea16e815027868572321dce84bce\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"03850702479a4fd5015d46f71c6494bfc03703078df95faf55d8098cb44ea9b0\"" May 9 00:12:59.599246 containerd[1898]: time="2025-05-09T00:12:59.598232834Z" level=info msg="StartContainer for \"03850702479a4fd5015d46f71c6494bfc03703078df95faf55d8098cb44ea9b0\"" May 9 00:12:59.633659 systemd[1]: run-containerd-runc-k8s.io-03850702479a4fd5015d46f71c6494bfc03703078df95faf55d8098cb44ea9b0-runc.DyLPU2.mount: Deactivated successfully. May 9 00:12:59.647396 systemd[1]: Started cri-containerd-03850702479a4fd5015d46f71c6494bfc03703078df95faf55d8098cb44ea9b0.scope - libcontainer container 03850702479a4fd5015d46f71c6494bfc03703078df95faf55d8098cb44ea9b0. May 9 00:12:59.682429 containerd[1898]: time="2025-05-09T00:12:59.682365502Z" level=info msg="StartContainer for \"03850702479a4fd5015d46f71c6494bfc03703078df95faf55d8098cb44ea9b0\" returns successfully" May 9 00:12:59.684591 systemd[1]: cri-containerd-03850702479a4fd5015d46f71c6494bfc03703078df95faf55d8098cb44ea9b0.scope: Deactivated successfully. 
May 9 00:12:59.713635 containerd[1898]: time="2025-05-09T00:12:59.713576491Z" level=info msg="shim disconnected" id=03850702479a4fd5015d46f71c6494bfc03703078df95faf55d8098cb44ea9b0 namespace=k8s.io May 9 00:12:59.713635 containerd[1898]: time="2025-05-09T00:12:59.713627301Z" level=warning msg="cleaning up after shim disconnected" id=03850702479a4fd5015d46f71c6494bfc03703078df95faf55d8098cb44ea9b0 namespace=k8s.io May 9 00:12:59.713635 containerd[1898]: time="2025-05-09T00:12:59.713635538Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:12:59.852626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2451857670.mount: Deactivated successfully. May 9 00:12:59.853245 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03850702479a4fd5015d46f71c6494bfc03703078df95faf55d8098cb44ea9b0-rootfs.mount: Deactivated successfully. May 9 00:13:00.575998 containerd[1898]: time="2025-05-09T00:13:00.575938656Z" level=info msg="CreateContainer within sandbox \"ce60ae90a2efa3cc12a3db0147cc3a54ad38ea16e815027868572321dce84bce\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 9 00:13:00.597558 containerd[1898]: time="2025-05-09T00:13:00.597508404Z" level=info msg="CreateContainer within sandbox \"ce60ae90a2efa3cc12a3db0147cc3a54ad38ea16e815027868572321dce84bce\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"310ba07ee1f9df268bb3e5ff41b18df89396f816d970e0779ab2a1a13d9d28af\"" May 9 00:13:00.599298 containerd[1898]: time="2025-05-09T00:13:00.598477491Z" level=info msg="StartContainer for \"310ba07ee1f9df268bb3e5ff41b18df89396f816d970e0779ab2a1a13d9d28af\"" May 9 00:13:00.634673 systemd[1]: run-containerd-runc-k8s.io-310ba07ee1f9df268bb3e5ff41b18df89396f816d970e0779ab2a1a13d9d28af-runc.nIOckP.mount: Deactivated successfully. 
May 9 00:13:00.641367 systemd[1]: Started cri-containerd-310ba07ee1f9df268bb3e5ff41b18df89396f816d970e0779ab2a1a13d9d28af.scope - libcontainer container 310ba07ee1f9df268bb3e5ff41b18df89396f816d970e0779ab2a1a13d9d28af.
May 9 00:13:00.676957 systemd[1]: cri-containerd-310ba07ee1f9df268bb3e5ff41b18df89396f816d970e0779ab2a1a13d9d28af.scope: Deactivated successfully.
May 9 00:13:00.681453 containerd[1898]: time="2025-05-09T00:13:00.681420848Z" level=info msg="StartContainer for \"310ba07ee1f9df268bb3e5ff41b18df89396f816d970e0779ab2a1a13d9d28af\" returns successfully"
May 9 00:13:00.742891 containerd[1898]: time="2025-05-09T00:13:00.742809323Z" level=info msg="shim disconnected" id=310ba07ee1f9df268bb3e5ff41b18df89396f816d970e0779ab2a1a13d9d28af namespace=k8s.io
May 9 00:13:00.742891 containerd[1898]: time="2025-05-09T00:13:00.742869205Z" level=warning msg="cleaning up after shim disconnected" id=310ba07ee1f9df268bb3e5ff41b18df89396f816d970e0779ab2a1a13d9d28af namespace=k8s.io
May 9 00:13:00.742891 containerd[1898]: time="2025-05-09T00:13:00.742877879Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:13:00.760969 containerd[1898]: time="2025-05-09T00:13:00.760918505Z" level=warning msg="cleanup warnings time=\"2025-05-09T00:13:00Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 9 00:13:00.853007 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-310ba07ee1f9df268bb3e5ff41b18df89396f816d970e0779ab2a1a13d9d28af-rootfs.mount: Deactivated successfully.
May 9 00:13:01.258583 systemd[1]: Started sshd@7-172.31.17.17:22-139.178.68.195:60162.service - OpenSSH per-connection server daemon (139.178.68.195:60162).
May 9 00:13:01.500818 sshd[3943]: Accepted publickey for core from 139.178.68.195 port 60162 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM
May 9 00:13:01.503091 sshd-session[3943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:13:01.511773 systemd-logind[1882]: New session 8 of user core.
May 9 00:13:01.516808 systemd[1]: Started session-8.scope - Session 8 of User core.
May 9 00:13:01.596210 containerd[1898]: time="2025-05-09T00:13:01.596138165Z" level=info msg="CreateContainer within sandbox \"ce60ae90a2efa3cc12a3db0147cc3a54ad38ea16e815027868572321dce84bce\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 9 00:13:01.635577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount983842761.mount: Deactivated successfully.
May 9 00:13:01.642273 containerd[1898]: time="2025-05-09T00:13:01.642218708Z" level=info msg="CreateContainer within sandbox \"ce60ae90a2efa3cc12a3db0147cc3a54ad38ea16e815027868572321dce84bce\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"68f320422bba5039f0a7bf1b05b37d85412c26e0709490c2b5b610124c1bdf21\""
May 9 00:13:01.646649 containerd[1898]: time="2025-05-09T00:13:01.646288926Z" level=info msg="StartContainer for \"68f320422bba5039f0a7bf1b05b37d85412c26e0709490c2b5b610124c1bdf21\""
May 9 00:13:01.734335 systemd[1]: Started cri-containerd-68f320422bba5039f0a7bf1b05b37d85412c26e0709490c2b5b610124c1bdf21.scope - libcontainer container 68f320422bba5039f0a7bf1b05b37d85412c26e0709490c2b5b610124c1bdf21.
May 9 00:13:01.892368 containerd[1898]: time="2025-05-09T00:13:01.891543385Z" level=info msg="StartContainer for \"68f320422bba5039f0a7bf1b05b37d85412c26e0709490c2b5b610124c1bdf21\" returns successfully"
May 9 00:13:02.448316 kubelet[3210]: I0509 00:13:02.448277 3210 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
May 9 00:13:02.519026 kubelet[3210]: I0509 00:13:02.514324 3210 topology_manager.go:215] "Topology Admit Handler" podUID="9d0a548d-aeac-4f79-acad-92b0c2703b1f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-29d6s"
May 9 00:13:02.522827 kubelet[3210]: I0509 00:13:02.522767 3210 topology_manager.go:215] "Topology Admit Handler" podUID="db570e12-b031-4970-bd94-c9e80bd27d17" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4n6b5"
May 9 00:13:02.558841 systemd[1]: Created slice kubepods-burstable-pod9d0a548d_aeac_4f79_acad_92b0c2703b1f.slice - libcontainer container kubepods-burstable-pod9d0a548d_aeac_4f79_acad_92b0c2703b1f.slice.
May 9 00:13:02.589450 systemd[1]: Created slice kubepods-burstable-poddb570e12_b031_4970_bd94_c9e80bd27d17.slice - libcontainer container kubepods-burstable-poddb570e12_b031_4970_bd94_c9e80bd27d17.slice.
May 9 00:13:02.636080 kubelet[3210]: I0509 00:13:02.636040 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db570e12-b031-4970-bd94-c9e80bd27d17-config-volume\") pod \"coredns-7db6d8ff4d-4n6b5\" (UID: \"db570e12-b031-4970-bd94-c9e80bd27d17\") " pod="kube-system/coredns-7db6d8ff4d-4n6b5"
May 9 00:13:02.637186 kubelet[3210]: I0509 00:13:02.636717 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sq9k\" (UniqueName: \"kubernetes.io/projected/9d0a548d-aeac-4f79-acad-92b0c2703b1f-kube-api-access-2sq9k\") pod \"coredns-7db6d8ff4d-29d6s\" (UID: \"9d0a548d-aeac-4f79-acad-92b0c2703b1f\") " pod="kube-system/coredns-7db6d8ff4d-29d6s"
May 9 00:13:02.637186 kubelet[3210]: I0509 00:13:02.636814 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p56wf\" (UniqueName: \"kubernetes.io/projected/db570e12-b031-4970-bd94-c9e80bd27d17-kube-api-access-p56wf\") pod \"coredns-7db6d8ff4d-4n6b5\" (UID: \"db570e12-b031-4970-bd94-c9e80bd27d17\") " pod="kube-system/coredns-7db6d8ff4d-4n6b5"
May 9 00:13:02.638593 kubelet[3210]: I0509 00:13:02.638064 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d0a548d-aeac-4f79-acad-92b0c2703b1f-config-volume\") pod \"coredns-7db6d8ff4d-29d6s\" (UID: \"9d0a548d-aeac-4f79-acad-92b0c2703b1f\") " pod="kube-system/coredns-7db6d8ff4d-29d6s"
May 9 00:13:02.652055 kubelet[3210]: I0509 00:13:02.651979 3210 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jwvs5" podStartSLOduration=5.747878631 podStartE2EDuration="19.651952908s" podCreationTimestamp="2025-05-09 00:12:43 +0000 UTC" firstStartedPulling="2025-05-09 00:12:43.817042448 +0000 UTC m=+14.546155079" lastFinishedPulling="2025-05-09 00:12:57.721116724 +0000 UTC m=+28.450229356" observedRunningTime="2025-05-09 00:13:02.6505502 +0000 UTC m=+33.379662839" watchObservedRunningTime="2025-05-09 00:13:02.651952908 +0000 UTC m=+33.381065548"
May 9 00:13:02.881729 containerd[1898]: time="2025-05-09T00:13:02.881686157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-29d6s,Uid:9d0a548d-aeac-4f79-acad-92b0c2703b1f,Namespace:kube-system,Attempt:0,}"
May 9 00:13:02.901124 containerd[1898]: time="2025-05-09T00:13:02.900153605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4n6b5,Uid:db570e12-b031-4970-bd94-c9e80bd27d17,Namespace:kube-system,Attempt:0,}"
May 9 00:13:02.970415 sshd[3949]: Connection closed by 139.178.68.195 port 60162
May 9 00:13:02.971742 sshd-session[3943]: pam_unix(sshd:session): session closed for user core
May 9 00:13:02.980993 systemd[1]: sshd@7-172.31.17.17:22-139.178.68.195:60162.service: Deactivated successfully.
May 9 00:13:02.982152 systemd-logind[1882]: Session 8 logged out. Waiting for processes to exit.
May 9 00:13:02.991946 systemd[1]: session-8.scope: Deactivated successfully.
May 9 00:13:02.995707 systemd-logind[1882]: Removed session 8.
May 9 00:13:03.347767 containerd[1898]: time="2025-05-09T00:13:03.347649255Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:13:03.349936 containerd[1898]: time="2025-05-09T00:13:03.349795305Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
May 9 00:13:03.353064 containerd[1898]: time="2025-05-09T00:13:03.352937572Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:13:03.356089 containerd[1898]: time="2025-05-09T00:13:03.355999110Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.634665118s"
May 9 00:13:03.356252 containerd[1898]: time="2025-05-09T00:13:03.356096049Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 9 00:13:03.360046 containerd[1898]: time="2025-05-09T00:13:03.359695270Z" level=info msg="CreateContainer within sandbox \"b275d77424b117d30426eb52189e44f654c594dc81e8fe21aa7772445b8152df\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 9 00:13:03.390946 containerd[1898]: time="2025-05-09T00:13:03.390910549Z" level=info msg="CreateContainer within sandbox \"b275d77424b117d30426eb52189e44f654c594dc81e8fe21aa7772445b8152df\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"50e51c8f853d6d907372de456fdbb78d871b7f6580f0038af8bb67e42ea619bd\""
May 9 00:13:03.391950 containerd[1898]: time="2025-05-09T00:13:03.391884995Z" level=info msg="StartContainer for \"50e51c8f853d6d907372de456fdbb78d871b7f6580f0038af8bb67e42ea619bd\""
May 9 00:13:03.449930 systemd[1]: Started cri-containerd-50e51c8f853d6d907372de456fdbb78d871b7f6580f0038af8bb67e42ea619bd.scope - libcontainer container 50e51c8f853d6d907372de456fdbb78d871b7f6580f0038af8bb67e42ea619bd.
May 9 00:13:03.513957 containerd[1898]: time="2025-05-09T00:13:03.513908323Z" level=info msg="StartContainer for \"50e51c8f853d6d907372de456fdbb78d871b7f6580f0038af8bb67e42ea619bd\" returns successfully"
May 9 00:13:03.911094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount483955120.mount: Deactivated successfully.
May 9 00:13:07.280196 systemd-networkd[1815]: cilium_host: Link UP
May 9 00:13:07.283232 systemd-networkd[1815]: cilium_net: Link UP
May 9 00:13:07.284608 systemd-networkd[1815]: cilium_net: Gained carrier
May 9 00:13:07.284934 systemd-networkd[1815]: cilium_host: Gained carrier
May 9 00:13:07.285044 systemd-networkd[1815]: cilium_host: Gained IPv6LL
May 9 00:13:07.288242 (udev-worker)[4150]: Network interface NamePolicy= disabled on kernel command line.
May 9 00:13:07.288977 (udev-worker)[4151]: Network interface NamePolicy= disabled on kernel command line.
May 9 00:13:07.440262 (udev-worker)[4161]: Network interface NamePolicy= disabled on kernel command line.
May 9 00:13:07.452872 systemd-networkd[1815]: cilium_vxlan: Link UP
May 9 00:13:07.453057 systemd-networkd[1815]: cilium_vxlan: Gained carrier
May 9 00:13:07.844404 systemd-networkd[1815]: cilium_net: Gained IPv6LL
May 9 00:13:08.008757 systemd[1]: Started sshd@8-172.31.17.17:22-139.178.68.195:48000.service - OpenSSH per-connection server daemon (139.178.68.195:48000).
May 9 00:13:08.090206 kernel: NET: Registered PF_ALG protocol family
May 9 00:13:08.274194 sshd[4246]: Accepted publickey for core from 139.178.68.195 port 48000 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM
May 9 00:13:08.276514 sshd-session[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:13:08.297321 systemd-logind[1882]: New session 9 of user core.
May 9 00:13:08.302723 systemd[1]: Started session-9.scope - Session 9 of User core.
May 9 00:13:08.782630 sshd[4261]: Connection closed by 139.178.68.195 port 48000
May 9 00:13:08.783638 sshd-session[4246]: pam_unix(sshd:session): session closed for user core
May 9 00:13:08.790618 systemd[1]: sshd@8-172.31.17.17:22-139.178.68.195:48000.service: Deactivated successfully.
May 9 00:13:08.790865 systemd-logind[1882]: Session 9 logged out. Waiting for processes to exit.
May 9 00:13:08.793692 systemd[1]: session-9.scope: Deactivated successfully.
May 9 00:13:08.797705 systemd-logind[1882]: Removed session 9.
May 9 00:13:08.804484 systemd-networkd[1815]: cilium_vxlan: Gained IPv6LL
May 9 00:13:08.958820 systemd-networkd[1815]: lxc_health: Link UP
May 9 00:13:08.970947 systemd-networkd[1815]: lxc_health: Gained carrier
May 9 00:13:09.579307 systemd-networkd[1815]: lxce3a80f278590: Link UP
May 9 00:13:09.585502 kernel: eth0: renamed from tmp9219f
May 9 00:13:09.593514 systemd-networkd[1815]: lxce3a80f278590: Gained carrier
May 9 00:13:09.593821 systemd-networkd[1815]: lxc8c11f29148a2: Link UP
May 9 00:13:09.596449 (udev-worker)[4163]: Network interface NamePolicy= disabled on kernel command line.
May 9 00:13:09.607222 kernel: eth0: renamed from tmpda07d
May 9 00:13:09.618850 systemd-networkd[1815]: lxc8c11f29148a2: Gained carrier
May 9 00:13:09.737581 kubelet[3210]: I0509 00:13:09.737196 3210 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-tc2wj" podStartSLOduration=7.377235306 podStartE2EDuration="26.73715344s" podCreationTimestamp="2025-05-09 00:12:43 +0000 UTC" firstStartedPulling="2025-05-09 00:12:43.997489171 +0000 UTC m=+14.726601798" lastFinishedPulling="2025-05-09 00:13:03.357407307 +0000 UTC m=+34.086519932" observedRunningTime="2025-05-09 00:13:03.624184455 +0000 UTC m=+34.353297092" watchObservedRunningTime="2025-05-09 00:13:09.73715344 +0000 UTC m=+40.466266080"
May 9 00:13:10.532338 systemd-networkd[1815]: lxc_health: Gained IPv6LL
May 9 00:13:11.044333 systemd-networkd[1815]: lxce3a80f278590: Gained IPv6LL
May 9 00:13:11.300312 systemd-networkd[1815]: lxc8c11f29148a2: Gained IPv6LL
May 9 00:13:13.824144 systemd[1]: Started sshd@9-172.31.17.17:22-139.178.68.195:48016.service - OpenSSH per-connection server daemon (139.178.68.195:48016).
May 9 00:13:14.038715 sshd[4537]: Accepted publickey for core from 139.178.68.195 port 48016 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM
May 9 00:13:14.040989 sshd-session[4537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:13:14.049273 systemd-logind[1882]: New session 10 of user core.
May 9 00:13:14.061425 systemd[1]: Started session-10.scope - Session 10 of User core.
May 9 00:13:14.324437 containerd[1898]: time="2025-05-09T00:13:14.324236781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 9 00:13:14.324437 containerd[1898]: time="2025-05-09T00:13:14.324326896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 9 00:13:14.324437 containerd[1898]: time="2025-05-09T00:13:14.324345402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:13:14.326312 containerd[1898]: time="2025-05-09T00:13:14.324493543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:13:14.357606 containerd[1898]: time="2025-05-09T00:13:14.357354013Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 9 00:13:14.359232 containerd[1898]: time="2025-05-09T00:13:14.359138381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 9 00:13:14.359605 containerd[1898]: time="2025-05-09T00:13:14.359420048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:13:14.360449 containerd[1898]: time="2025-05-09T00:13:14.360362542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:13:14.419306 systemd[1]: Started cri-containerd-9219f01dd1a0987683835f0a44016df131e1c315b7416e4f3b075010f166f653.scope - libcontainer container 9219f01dd1a0987683835f0a44016df131e1c315b7416e4f3b075010f166f653.
May 9 00:13:14.453943 systemd[1]: Started cri-containerd-da07dcd68ea3191d67d0789c12071c2accc45f08e12445107a4f206d834fa23b.scope - libcontainer container da07dcd68ea3191d67d0789c12071c2accc45f08e12445107a4f206d834fa23b.
May 9 00:13:14.492815 sshd[4541]: Connection closed by 139.178.68.195 port 48016
May 9 00:13:14.494622 sshd-session[4537]: pam_unix(sshd:session): session closed for user core
May 9 00:13:14.503708 systemd[1]: sshd@9-172.31.17.17:22-139.178.68.195:48016.service: Deactivated successfully.
May 9 00:13:14.510912 systemd[1]: session-10.scope: Deactivated successfully.
May 9 00:13:14.513607 systemd-logind[1882]: Session 10 logged out. Waiting for processes to exit.
May 9 00:13:14.517421 systemd-logind[1882]: Removed session 10.
May 9 00:13:14.560648 containerd[1898]: time="2025-05-09T00:13:14.560604703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4n6b5,Uid:db570e12-b031-4970-bd94-c9e80bd27d17,Namespace:kube-system,Attempt:0,} returns sandbox id \"9219f01dd1a0987683835f0a44016df131e1c315b7416e4f3b075010f166f653\""
May 9 00:13:14.567368 containerd[1898]: time="2025-05-09T00:13:14.567327521Z" level=info msg="CreateContainer within sandbox \"9219f01dd1a0987683835f0a44016df131e1c315b7416e4f3b075010f166f653\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 9 00:13:14.616041 containerd[1898]: time="2025-05-09T00:13:14.615921851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-29d6s,Uid:9d0a548d-aeac-4f79-acad-92b0c2703b1f,Namespace:kube-system,Attempt:0,} returns sandbox id \"da07dcd68ea3191d67d0789c12071c2accc45f08e12445107a4f206d834fa23b\""
May 9 00:13:14.621572 containerd[1898]: time="2025-05-09T00:13:14.621509008Z" level=info msg="CreateContainer within sandbox \"9219f01dd1a0987683835f0a44016df131e1c315b7416e4f3b075010f166f653\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5bb41929bc49188fd1fe94001046b575c7dbe314b3cf3a07e40afe0500cfd952\""
May 9 00:13:14.622193 containerd[1898]: time="2025-05-09T00:13:14.622029856Z" level=info msg="CreateContainer within sandbox \"da07dcd68ea3191d67d0789c12071c2accc45f08e12445107a4f206d834fa23b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 9 00:13:14.622193 containerd[1898]: time="2025-05-09T00:13:14.622080839Z" level=info msg="StartContainer for \"5bb41929bc49188fd1fe94001046b575c7dbe314b3cf3a07e40afe0500cfd952\""
May 9 00:13:14.663203 containerd[1898]: time="2025-05-09T00:13:14.660556060Z" level=info msg="CreateContainer within sandbox \"da07dcd68ea3191d67d0789c12071c2accc45f08e12445107a4f206d834fa23b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f53ca154dc3501f96dec9d0eff03cc596137456375acda4d773806446b7c3be5\""
May 9 00:13:14.664374 containerd[1898]: time="2025-05-09T00:13:14.663852802Z" level=info msg="StartContainer for \"f53ca154dc3501f96dec9d0eff03cc596137456375acda4d773806446b7c3be5\""
May 9 00:13:14.695629 systemd[1]: Started cri-containerd-5bb41929bc49188fd1fe94001046b575c7dbe314b3cf3a07e40afe0500cfd952.scope - libcontainer container 5bb41929bc49188fd1fe94001046b575c7dbe314b3cf3a07e40afe0500cfd952.
May 9 00:13:14.731634 systemd[1]: Started cri-containerd-f53ca154dc3501f96dec9d0eff03cc596137456375acda4d773806446b7c3be5.scope - libcontainer container f53ca154dc3501f96dec9d0eff03cc596137456375acda4d773806446b7c3be5.
May 9 00:13:14.784202 containerd[1898]: time="2025-05-09T00:13:14.784036909Z" level=info msg="StartContainer for \"5bb41929bc49188fd1fe94001046b575c7dbe314b3cf3a07e40afe0500cfd952\" returns successfully"
May 9 00:13:14.784202 containerd[1898]: time="2025-05-09T00:13:14.784094077Z" level=info msg="StartContainer for \"f53ca154dc3501f96dec9d0eff03cc596137456375acda4d773806446b7c3be5\" returns successfully"
May 9 00:13:15.334255 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3596981749.mount: Deactivated successfully.
May 9 00:13:15.712872 kubelet[3210]: I0509 00:13:15.712323 3210 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-29d6s" podStartSLOduration=32.712302734 podStartE2EDuration="32.712302734s" podCreationTimestamp="2025-05-09 00:12:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:13:15.697102931 +0000 UTC m=+46.426215571" watchObservedRunningTime="2025-05-09 00:13:15.712302734 +0000 UTC m=+46.441415365"
May 9 00:13:15.732261 kubelet[3210]: I0509 00:13:15.732190 3210 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-4n6b5" podStartSLOduration=32.732153744 podStartE2EDuration="32.732153744s" podCreationTimestamp="2025-05-09 00:12:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:13:15.713440597 +0000 UTC m=+46.442553237" watchObservedRunningTime="2025-05-09 00:13:15.732153744 +0000 UTC m=+46.461266388"
May 9 00:13:17.297748 ntpd[1874]: Listen normally on 8 cilium_host 192.168.0.236:123
May 9 00:13:17.297828 ntpd[1874]: Listen normally on 9 cilium_net [fe80::a448:28ff:feb9:4c%4]:123
May 9 00:13:17.298298 ntpd[1874]: 9 May 00:13:17 ntpd[1874]: Listen normally on 8 cilium_host 192.168.0.236:123
May 9 00:13:17.298298 ntpd[1874]: 9 May 00:13:17 ntpd[1874]: Listen normally on 9 cilium_net [fe80::a448:28ff:feb9:4c%4]:123
May 9 00:13:17.298298 ntpd[1874]: 9 May 00:13:17 ntpd[1874]: Listen normally on 10 cilium_host [fe80::68ea:b8ff:fe70:f71d%5]:123
May 9 00:13:17.298298 ntpd[1874]: 9 May 00:13:17 ntpd[1874]: Listen normally on 11 cilium_vxlan [fe80::58b3:25ff:fe56:50f9%6]:123
May 9 00:13:17.298298 ntpd[1874]: 9 May 00:13:17 ntpd[1874]: Listen normally on 12 lxc_health [fe80::f8bf:7fff:fe8c:74a7%8]:123
May 9 00:13:17.298298 ntpd[1874]: 9 May 00:13:17 ntpd[1874]: Listen normally on 13 lxce3a80f278590 [fe80::860:9cff:fef5:7aa3%10]:123
May 9 00:13:17.298298 ntpd[1874]: 9 May 00:13:17 ntpd[1874]: Listen normally on 14 lxc8c11f29148a2 [fe80::c9b:a0ff:fe57:ddfc%12]:123
May 9 00:13:17.297877 ntpd[1874]: Listen normally on 10 cilium_host [fe80::68ea:b8ff:fe70:f71d%5]:123
May 9 00:13:17.297912 ntpd[1874]: Listen normally on 11 cilium_vxlan [fe80::58b3:25ff:fe56:50f9%6]:123
May 9 00:13:17.297995 ntpd[1874]: Listen normally on 12 lxc_health [fe80::f8bf:7fff:fe8c:74a7%8]:123
May 9 00:13:17.298026 ntpd[1874]: Listen normally on 13 lxce3a80f278590 [fe80::860:9cff:fef5:7aa3%10]:123
May 9 00:13:17.298057 ntpd[1874]: Listen normally on 14 lxc8c11f29148a2 [fe80::c9b:a0ff:fe57:ddfc%12]:123
May 9 00:13:19.532577 systemd[1]: Started sshd@10-172.31.17.17:22-139.178.68.195:57148.service - OpenSSH per-connection server daemon (139.178.68.195:57148).
May 9 00:13:19.740871 sshd[4724]: Accepted publickey for core from 139.178.68.195 port 57148 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM
May 9 00:13:19.746645 sshd-session[4724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:13:19.765143 systemd-logind[1882]: New session 11 of user core.
May 9 00:13:19.767819 systemd[1]: Started session-11.scope - Session 11 of User core.
May 9 00:13:20.279540 sshd[4728]: Connection closed by 139.178.68.195 port 57148
May 9 00:13:20.280420 sshd-session[4724]: pam_unix(sshd:session): session closed for user core
May 9 00:13:20.284622 systemd[1]: sshd@10-172.31.17.17:22-139.178.68.195:57148.service: Deactivated successfully.
May 9 00:13:20.287020 systemd[1]: session-11.scope: Deactivated successfully.
May 9 00:13:20.288237 systemd-logind[1882]: Session 11 logged out. Waiting for processes to exit.
May 9 00:13:20.290003 systemd-logind[1882]: Removed session 11.
May 9 00:13:20.318550 systemd[1]: Started sshd@11-172.31.17.17:22-139.178.68.195:57152.service - OpenSSH per-connection server daemon (139.178.68.195:57152).
May 9 00:13:20.476308 sshd[4740]: Accepted publickey for core from 139.178.68.195 port 57152 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM
May 9 00:13:20.478917 sshd-session[4740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:13:20.484713 systemd-logind[1882]: New session 12 of user core.
May 9 00:13:20.493398 systemd[1]: Started session-12.scope - Session 12 of User core.
May 9 00:13:20.755759 sshd[4742]: Connection closed by 139.178.68.195 port 57152
May 9 00:13:20.756797 sshd-session[4740]: pam_unix(sshd:session): session closed for user core
May 9 00:13:20.761897 systemd[1]: sshd@11-172.31.17.17:22-139.178.68.195:57152.service: Deactivated successfully.
May 9 00:13:20.767517 systemd[1]: session-12.scope: Deactivated successfully.
May 9 00:13:20.770949 systemd-logind[1882]: Session 12 logged out. Waiting for processes to exit.
May 9 00:13:20.772764 systemd-logind[1882]: Removed session 12.
May 9 00:13:20.788512 systemd[1]: Started sshd@12-172.31.17.17:22-139.178.68.195:57154.service - OpenSSH per-connection server daemon (139.178.68.195:57154).
May 9 00:13:20.984649 sshd[4751]: Accepted publickey for core from 139.178.68.195 port 57154 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM
May 9 00:13:20.989798 sshd-session[4751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:13:20.995296 systemd-logind[1882]: New session 13 of user core.
May 9 00:13:21.001729 systemd[1]: Started session-13.scope - Session 13 of User core.
May 9 00:13:21.262937 sshd[4753]: Connection closed by 139.178.68.195 port 57154
May 9 00:13:21.264273 sshd-session[4751]: pam_unix(sshd:session): session closed for user core
May 9 00:13:21.270020 systemd-logind[1882]: Session 13 logged out. Waiting for processes to exit.
May 9 00:13:21.270952 systemd[1]: sshd@12-172.31.17.17:22-139.178.68.195:57154.service: Deactivated successfully.
May 9 00:13:21.273139 systemd[1]: session-13.scope: Deactivated successfully.
May 9 00:13:21.274739 systemd-logind[1882]: Removed session 13.
May 9 00:13:26.299120 systemd[1]: Started sshd@13-172.31.17.17:22-139.178.68.195:42308.service - OpenSSH per-connection server daemon (139.178.68.195:42308).
May 9 00:13:26.483911 sshd[4765]: Accepted publickey for core from 139.178.68.195 port 42308 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM
May 9 00:13:26.485507 sshd-session[4765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:13:26.491146 systemd-logind[1882]: New session 14 of user core.
May 9 00:13:26.498412 systemd[1]: Started session-14.scope - Session 14 of User core.
May 9 00:13:26.695977 sshd[4767]: Connection closed by 139.178.68.195 port 42308
May 9 00:13:26.696793 sshd-session[4765]: pam_unix(sshd:session): session closed for user core
May 9 00:13:26.701508 systemd-logind[1882]: Session 14 logged out. Waiting for processes to exit.
May 9 00:13:26.702443 systemd[1]: sshd@13-172.31.17.17:22-139.178.68.195:42308.service: Deactivated successfully.
May 9 00:13:26.705457 systemd[1]: session-14.scope: Deactivated successfully.
May 9 00:13:26.706758 systemd-logind[1882]: Removed session 14.
May 9 00:13:31.732569 systemd[1]: Started sshd@14-172.31.17.17:22-139.178.68.195:42314.service - OpenSSH per-connection server daemon (139.178.68.195:42314).
May 9 00:13:31.896815 sshd[4780]: Accepted publickey for core from 139.178.68.195 port 42314 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM
May 9 00:13:31.898270 sshd-session[4780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:13:31.903860 systemd-logind[1882]: New session 15 of user core.
May 9 00:13:31.912412 systemd[1]: Started session-15.scope - Session 15 of User core.
May 9 00:13:32.096452 sshd[4782]: Connection closed by 139.178.68.195 port 42314
May 9 00:13:32.098009 sshd-session[4780]: pam_unix(sshd:session): session closed for user core
May 9 00:13:32.100913 systemd[1]: sshd@14-172.31.17.17:22-139.178.68.195:42314.service: Deactivated successfully.
May 9 00:13:32.103252 systemd[1]: session-15.scope: Deactivated successfully.
May 9 00:13:32.105245 systemd-logind[1882]: Session 15 logged out. Waiting for processes to exit.
May 9 00:13:32.106938 systemd-logind[1882]: Removed session 15.
May 9 00:13:32.132596 systemd[1]: Started sshd@15-172.31.17.17:22-139.178.68.195:42324.service - OpenSSH per-connection server daemon (139.178.68.195:42324).
May 9 00:13:32.290478 sshd[4792]: Accepted publickey for core from 139.178.68.195 port 42324 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM
May 9 00:13:32.292119 sshd-session[4792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:13:32.297004 systemd-logind[1882]: New session 16 of user core.
May 9 00:13:32.302389 systemd[1]: Started session-16.scope - Session 16 of User core.
May 9 00:13:36.487226 sshd[4794]: Connection closed by 139.178.68.195 port 42324
May 9 00:13:36.488641 sshd-session[4792]: pam_unix(sshd:session): session closed for user core
May 9 00:13:36.495977 systemd-logind[1882]: Session 16 logged out. Waiting for processes to exit.
May 9 00:13:36.496105 systemd[1]: sshd@15-172.31.17.17:22-139.178.68.195:42324.service: Deactivated successfully.
May 9 00:13:36.498715 systemd[1]: session-16.scope: Deactivated successfully.
May 9 00:13:36.499847 systemd-logind[1882]: Removed session 16.
May 9 00:13:36.523670 systemd[1]: Started sshd@16-172.31.17.17:22-139.178.68.195:33660.service - OpenSSH per-connection server daemon (139.178.68.195:33660).
May 9 00:13:36.710204 sshd[4803]: Accepted publickey for core from 139.178.68.195 port 33660 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM
May 9 00:13:36.711770 sshd-session[4803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:13:36.717398 systemd-logind[1882]: New session 17 of user core.
May 9 00:13:36.720359 systemd[1]: Started session-17.scope - Session 17 of User core.
May 9 00:13:38.510733 sshd[4805]: Connection closed by 139.178.68.195 port 33660
May 9 00:13:38.510069 sshd-session[4803]: pam_unix(sshd:session): session closed for user core
May 9 00:13:38.515336 systemd[1]: sshd@16-172.31.17.17:22-139.178.68.195:33660.service: Deactivated successfully.
May 9 00:13:38.517107 systemd[1]: session-17.scope: Deactivated successfully.
May 9 00:13:38.521616 systemd-logind[1882]: Session 17 logged out. Waiting for processes to exit.
May 9 00:13:38.523549 systemd-logind[1882]: Removed session 17.
May 9 00:13:38.545502 systemd[1]: Started sshd@17-172.31.17.17:22-139.178.68.195:33670.service - OpenSSH per-connection server daemon (139.178.68.195:33670).
May 9 00:13:38.712078 sshd[4822]: Accepted publickey for core from 139.178.68.195 port 33670 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM
May 9 00:13:38.713695 sshd-session[4822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:13:38.721996 systemd-logind[1882]: New session 18 of user core.
May 9 00:13:38.730417 systemd[1]: Started session-18.scope - Session 18 of User core.
May 9 00:13:39.533152 sshd[4824]: Connection closed by 139.178.68.195 port 33670
May 9 00:13:39.533016 sshd-session[4822]: pam_unix(sshd:session): session closed for user core
May 9 00:13:39.538064 systemd[1]: sshd@17-172.31.17.17:22-139.178.68.195:33670.service: Deactivated successfully.
May 9 00:13:39.540595 systemd[1]: session-18.scope: Deactivated successfully.
May 9 00:13:39.541537 systemd-logind[1882]: Session 18 logged out. Waiting for processes to exit.
May 9 00:13:39.542802 systemd-logind[1882]: Removed session 18.
May 9 00:13:39.568483 systemd[1]: Started sshd@18-172.31.17.17:22-139.178.68.195:33678.service - OpenSSH per-connection server daemon (139.178.68.195:33678).
May 9 00:13:39.732814 sshd[4833]: Accepted publickey for core from 139.178.68.195 port 33678 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM
May 9 00:13:39.734288 sshd-session[4833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:13:39.739838 systemd-logind[1882]: New session 19 of user core.
May 9 00:13:39.743393 systemd[1]: Started session-19.scope - Session 19 of User core.
May 9 00:13:39.967340 sshd[4835]: Connection closed by 139.178.68.195 port 33678
May 9 00:13:39.968984 sshd-session[4833]: pam_unix(sshd:session): session closed for user core
May 9 00:13:39.972412 systemd[1]: sshd@18-172.31.17.17:22-139.178.68.195:33678.service: Deactivated successfully.
May 9 00:13:39.974753 systemd[1]: session-19.scope: Deactivated successfully.
May 9 00:13:39.976444 systemd-logind[1882]: Session 19 logged out. Waiting for processes to exit.
May 9 00:13:39.978054 systemd-logind[1882]: Removed session 19.
May 9 00:13:45.000650 systemd[1]: Started sshd@19-172.31.17.17:22-139.178.68.195:33682.service - OpenSSH per-connection server daemon (139.178.68.195:33682).
May 9 00:13:45.185546 sshd[4853]: Accepted publickey for core from 139.178.68.195 port 33682 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM
May 9 00:13:45.186325 sshd-session[4853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:13:45.191228 systemd-logind[1882]: New session 20 of user core.
May 9 00:13:45.196398 systemd[1]: Started session-20.scope - Session 20 of User core.
May 9 00:13:45.389542 sshd[4855]: Connection closed by 139.178.68.195 port 33682 May 9 00:13:45.391311 sshd-session[4853]: pam_unix(sshd:session): session closed for user core May 9 00:13:45.395522 systemd-logind[1882]: Session 20 logged out. Waiting for processes to exit. May 9 00:13:45.396495 systemd[1]: sshd@19-172.31.17.17:22-139.178.68.195:33682.service: Deactivated successfully. May 9 00:13:45.399033 systemd[1]: session-20.scope: Deactivated successfully. May 9 00:13:45.400274 systemd-logind[1882]: Removed session 20. May 9 00:13:50.430568 systemd[1]: Started sshd@20-172.31.17.17:22-139.178.68.195:34386.service - OpenSSH per-connection server daemon (139.178.68.195:34386). May 9 00:13:50.605362 sshd[4866]: Accepted publickey for core from 139.178.68.195 port 34386 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM May 9 00:13:50.606964 sshd-session[4866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:13:50.613061 systemd-logind[1882]: New session 21 of user core. May 9 00:13:50.618638 systemd[1]: Started session-21.scope - Session 21 of User core. May 9 00:13:50.804533 sshd[4868]: Connection closed by 139.178.68.195 port 34386 May 9 00:13:50.806027 sshd-session[4866]: pam_unix(sshd:session): session closed for user core May 9 00:13:50.809374 systemd[1]: sshd@20-172.31.17.17:22-139.178.68.195:34386.service: Deactivated successfully. May 9 00:13:50.811868 systemd[1]: session-21.scope: Deactivated successfully. May 9 00:13:50.812953 systemd-logind[1882]: Session 21 logged out. Waiting for processes to exit. May 9 00:13:50.814137 systemd-logind[1882]: Removed session 21. May 9 00:13:55.840829 systemd[1]: Started sshd@21-172.31.17.17:22-139.178.68.195:44680.service - OpenSSH per-connection server daemon (139.178.68.195:44680). 
May 9 00:13:56.016811 sshd[4880]: Accepted publickey for core from 139.178.68.195 port 44680 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM May 9 00:13:56.026817 sshd-session[4880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:13:56.032933 systemd-logind[1882]: New session 22 of user core. May 9 00:13:56.038414 systemd[1]: Started session-22.scope - Session 22 of User core. May 9 00:13:56.228936 sshd[4882]: Connection closed by 139.178.68.195 port 44680 May 9 00:13:56.229571 sshd-session[4880]: pam_unix(sshd:session): session closed for user core May 9 00:13:56.233495 systemd-logind[1882]: Session 22 logged out. Waiting for processes to exit. May 9 00:13:56.234725 systemd[1]: sshd@21-172.31.17.17:22-139.178.68.195:44680.service: Deactivated successfully. May 9 00:13:56.241371 systemd[1]: session-22.scope: Deactivated successfully. May 9 00:13:56.242514 systemd-logind[1882]: Removed session 22. May 9 00:13:56.282548 systemd[1]: Started sshd@22-172.31.17.17:22-139.178.68.195:44682.service - OpenSSH per-connection server daemon (139.178.68.195:44682). May 9 00:13:56.443498 sshd[4893]: Accepted publickey for core from 139.178.68.195 port 44682 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM May 9 00:13:56.445550 sshd-session[4893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:13:56.450950 systemd-logind[1882]: New session 23 of user core. May 9 00:13:56.455391 systemd[1]: Started session-23.scope - Session 23 of User core. 
May 9 00:13:57.990136 containerd[1898]: time="2025-05-09T00:13:57.990003768Z" level=info msg="StopContainer for \"50e51c8f853d6d907372de456fdbb78d871b7f6580f0038af8bb67e42ea619bd\" with timeout 30 (s)" May 9 00:13:58.010938 containerd[1898]: time="2025-05-09T00:13:58.010076103Z" level=info msg="Stop container \"50e51c8f853d6d907372de456fdbb78d871b7f6580f0038af8bb67e42ea619bd\" with signal terminated" May 9 00:13:58.068007 systemd[1]: run-containerd-runc-k8s.io-68f320422bba5039f0a7bf1b05b37d85412c26e0709490c2b5b610124c1bdf21-runc.W4aVSa.mount: Deactivated successfully. May 9 00:13:58.075315 systemd[1]: cri-containerd-50e51c8f853d6d907372de456fdbb78d871b7f6580f0038af8bb67e42ea619bd.scope: Deactivated successfully. May 9 00:13:58.102083 containerd[1898]: time="2025-05-09T00:13:58.101712825Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 9 00:13:58.112563 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50e51c8f853d6d907372de456fdbb78d871b7f6580f0038af8bb67e42ea619bd-rootfs.mount: Deactivated successfully. 
May 9 00:13:58.118019 containerd[1898]: time="2025-05-09T00:13:58.117979099Z" level=info msg="StopContainer for \"68f320422bba5039f0a7bf1b05b37d85412c26e0709490c2b5b610124c1bdf21\" with timeout 2 (s)" May 9 00:13:58.118512 containerd[1898]: time="2025-05-09T00:13:58.118484060Z" level=info msg="Stop container \"68f320422bba5039f0a7bf1b05b37d85412c26e0709490c2b5b610124c1bdf21\" with signal terminated" May 9 00:13:58.128438 systemd-networkd[1815]: lxc_health: Link DOWN May 9 00:13:58.128448 systemd-networkd[1815]: lxc_health: Lost carrier May 9 00:13:58.133947 containerd[1898]: time="2025-05-09T00:13:58.133860514Z" level=info msg="shim disconnected" id=50e51c8f853d6d907372de456fdbb78d871b7f6580f0038af8bb67e42ea619bd namespace=k8s.io May 9 00:13:58.133947 containerd[1898]: time="2025-05-09T00:13:58.133935255Z" level=warning msg="cleaning up after shim disconnected" id=50e51c8f853d6d907372de456fdbb78d871b7f6580f0038af8bb67e42ea619bd namespace=k8s.io May 9 00:13:58.133947 containerd[1898]: time="2025-05-09T00:13:58.133946669Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:13:58.165851 containerd[1898]: time="2025-05-09T00:13:58.165728051Z" level=info msg="StopContainer for \"50e51c8f853d6d907372de456fdbb78d871b7f6580f0038af8bb67e42ea619bd\" returns successfully" May 9 00:13:58.171911 systemd[1]: cri-containerd-68f320422bba5039f0a7bf1b05b37d85412c26e0709490c2b5b610124c1bdf21.scope: Deactivated successfully. May 9 00:13:58.172224 systemd[1]: cri-containerd-68f320422bba5039f0a7bf1b05b37d85412c26e0709490c2b5b610124c1bdf21.scope: Consumed 8.333s CPU time. 
May 9 00:13:58.177093 containerd[1898]: time="2025-05-09T00:13:58.177059358Z" level=info msg="StopPodSandbox for \"b275d77424b117d30426eb52189e44f654c594dc81e8fe21aa7772445b8152df\"" May 9 00:13:58.183131 containerd[1898]: time="2025-05-09T00:13:58.182954156Z" level=info msg="Container to stop \"50e51c8f853d6d907372de456fdbb78d871b7f6580f0038af8bb67e42ea619bd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 00:13:58.187463 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b275d77424b117d30426eb52189e44f654c594dc81e8fe21aa7772445b8152df-shm.mount: Deactivated successfully. May 9 00:13:58.210281 systemd[1]: cri-containerd-b275d77424b117d30426eb52189e44f654c594dc81e8fe21aa7772445b8152df.scope: Deactivated successfully. May 9 00:13:58.248227 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68f320422bba5039f0a7bf1b05b37d85412c26e0709490c2b5b610124c1bdf21-rootfs.mount: Deactivated successfully. May 9 00:13:58.268419 containerd[1898]: time="2025-05-09T00:13:58.268333768Z" level=info msg="shim disconnected" id=b275d77424b117d30426eb52189e44f654c594dc81e8fe21aa7772445b8152df namespace=k8s.io May 9 00:13:58.268419 containerd[1898]: time="2025-05-09T00:13:58.268396175Z" level=warning msg="cleaning up after shim disconnected" id=b275d77424b117d30426eb52189e44f654c594dc81e8fe21aa7772445b8152df namespace=k8s.io May 9 00:13:58.268419 containerd[1898]: time="2025-05-09T00:13:58.268404990Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:13:58.270682 containerd[1898]: time="2025-05-09T00:13:58.270407457Z" level=info msg="shim disconnected" id=68f320422bba5039f0a7bf1b05b37d85412c26e0709490c2b5b610124c1bdf21 namespace=k8s.io May 9 00:13:58.270682 containerd[1898]: time="2025-05-09T00:13:58.270455682Z" level=warning msg="cleaning up after shim disconnected" id=68f320422bba5039f0a7bf1b05b37d85412c26e0709490c2b5b610124c1bdf21 namespace=k8s.io May 9 00:13:58.270682 containerd[1898]: 
time="2025-05-09T00:13:58.270463974Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:13:58.288899 containerd[1898]: time="2025-05-09T00:13:58.288850062Z" level=info msg="TearDown network for sandbox \"b275d77424b117d30426eb52189e44f654c594dc81e8fe21aa7772445b8152df\" successfully" May 9 00:13:58.288899 containerd[1898]: time="2025-05-09T00:13:58.288889940Z" level=info msg="StopPodSandbox for \"b275d77424b117d30426eb52189e44f654c594dc81e8fe21aa7772445b8152df\" returns successfully" May 9 00:13:58.289682 containerd[1898]: time="2025-05-09T00:13:58.289560144Z" level=info msg="StopContainer for \"68f320422bba5039f0a7bf1b05b37d85412c26e0709490c2b5b610124c1bdf21\" returns successfully" May 9 00:13:58.290634 containerd[1898]: time="2025-05-09T00:13:58.290591702Z" level=info msg="StopPodSandbox for \"ce60ae90a2efa3cc12a3db0147cc3a54ad38ea16e815027868572321dce84bce\"" May 9 00:13:58.290764 containerd[1898]: time="2025-05-09T00:13:58.290647322Z" level=info msg="Container to stop \"310ba07ee1f9df268bb3e5ff41b18df89396f816d970e0779ab2a1a13d9d28af\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 00:13:58.290764 containerd[1898]: time="2025-05-09T00:13:58.290692004Z" level=info msg="Container to stop \"1f8cb6b94e8c18ec5724db87dd781a9f680170b3a3e3a837e31b218b0ae78a57\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 00:13:58.290764 containerd[1898]: time="2025-05-09T00:13:58.290705938Z" level=info msg="Container to stop \"da95c5bb53fdf9d74299750a789a0681b1c92fb6bed5b222ad0abb148e029efe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 00:13:58.290764 containerd[1898]: time="2025-05-09T00:13:58.290718901Z" level=info msg="Container to stop \"03850702479a4fd5015d46f71c6494bfc03703078df95faf55d8098cb44ea9b0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 00:13:58.290764 containerd[1898]: time="2025-05-09T00:13:58.290731026Z" level=info 
msg="Container to stop \"68f320422bba5039f0a7bf1b05b37d85412c26e0709490c2b5b610124c1bdf21\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 00:13:58.302406 systemd[1]: cri-containerd-ce60ae90a2efa3cc12a3db0147cc3a54ad38ea16e815027868572321dce84bce.scope: Deactivated successfully. May 9 00:13:58.344840 containerd[1898]: time="2025-05-09T00:13:58.344773079Z" level=info msg="shim disconnected" id=ce60ae90a2efa3cc12a3db0147cc3a54ad38ea16e815027868572321dce84bce namespace=k8s.io May 9 00:13:58.344840 containerd[1898]: time="2025-05-09T00:13:58.344834511Z" level=warning msg="cleaning up after shim disconnected" id=ce60ae90a2efa3cc12a3db0147cc3a54ad38ea16e815027868572321dce84bce namespace=k8s.io May 9 00:13:58.345273 containerd[1898]: time="2025-05-09T00:13:58.344856057Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:13:58.361908 containerd[1898]: time="2025-05-09T00:13:58.361767128Z" level=info msg="TearDown network for sandbox \"ce60ae90a2efa3cc12a3db0147cc3a54ad38ea16e815027868572321dce84bce\" successfully" May 9 00:13:58.361908 containerd[1898]: time="2025-05-09T00:13:58.361807038Z" level=info msg="StopPodSandbox for \"ce60ae90a2efa3cc12a3db0147cc3a54ad38ea16e815027868572321dce84bce\" returns successfully" May 9 00:13:58.445440 kubelet[3210]: I0509 00:13:58.445385 3210 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8814db05-ebc5-44b5-b235-cd6ff9228c57-bpf-maps\") pod \"8814db05-ebc5-44b5-b235-cd6ff9228c57\" (UID: \"8814db05-ebc5-44b5-b235-cd6ff9228c57\") " May 9 00:13:58.445440 kubelet[3210]: I0509 00:13:58.445442 3210 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8814db05-ebc5-44b5-b235-cd6ff9228c57-cilium-config-path\") pod \"8814db05-ebc5-44b5-b235-cd6ff9228c57\" (UID: \"8814db05-ebc5-44b5-b235-cd6ff9228c57\") " May 9 00:13:58.445870 
kubelet[3210]: I0509 00:13:58.445464 3210 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8814db05-ebc5-44b5-b235-cd6ff9228c57-cilium-run\") pod \"8814db05-ebc5-44b5-b235-cd6ff9228c57\" (UID: \"8814db05-ebc5-44b5-b235-cd6ff9228c57\") " May 9 00:13:58.445870 kubelet[3210]: I0509 00:13:58.445481 3210 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8814db05-ebc5-44b5-b235-cd6ff9228c57-hostproc\") pod \"8814db05-ebc5-44b5-b235-cd6ff9228c57\" (UID: \"8814db05-ebc5-44b5-b235-cd6ff9228c57\") " May 9 00:13:58.445870 kubelet[3210]: I0509 00:13:58.445497 3210 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8814db05-ebc5-44b5-b235-cd6ff9228c57-etc-cni-netd\") pod \"8814db05-ebc5-44b5-b235-cd6ff9228c57\" (UID: \"8814db05-ebc5-44b5-b235-cd6ff9228c57\") " May 9 00:13:58.445870 kubelet[3210]: I0509 00:13:58.445525 3210 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8814db05-ebc5-44b5-b235-cd6ff9228c57-clustermesh-secrets\") pod \"8814db05-ebc5-44b5-b235-cd6ff9228c57\" (UID: \"8814db05-ebc5-44b5-b235-cd6ff9228c57\") " May 9 00:13:58.445870 kubelet[3210]: I0509 00:13:58.445539 3210 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8814db05-ebc5-44b5-b235-cd6ff9228c57-host-proc-sys-kernel\") pod \"8814db05-ebc5-44b5-b235-cd6ff9228c57\" (UID: \"8814db05-ebc5-44b5-b235-cd6ff9228c57\") " May 9 00:13:58.445870 kubelet[3210]: I0509 00:13:58.445565 3210 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/268f32c3-73ca-4c84-b7d9-51f4983af55d-cilium-config-path\") pod 
\"268f32c3-73ca-4c84-b7d9-51f4983af55d\" (UID: \"268f32c3-73ca-4c84-b7d9-51f4983af55d\") " May 9 00:13:58.446106 kubelet[3210]: I0509 00:13:58.445580 3210 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8814db05-ebc5-44b5-b235-cd6ff9228c57-cilium-cgroup\") pod \"8814db05-ebc5-44b5-b235-cd6ff9228c57\" (UID: \"8814db05-ebc5-44b5-b235-cd6ff9228c57\") " May 9 00:13:58.446106 kubelet[3210]: I0509 00:13:58.445711 3210 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8814db05-ebc5-44b5-b235-cd6ff9228c57-hubble-tls\") pod \"8814db05-ebc5-44b5-b235-cd6ff9228c57\" (UID: \"8814db05-ebc5-44b5-b235-cd6ff9228c57\") " May 9 00:13:58.446106 kubelet[3210]: I0509 00:13:58.445726 3210 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8814db05-ebc5-44b5-b235-cd6ff9228c57-cni-path\") pod \"8814db05-ebc5-44b5-b235-cd6ff9228c57\" (UID: \"8814db05-ebc5-44b5-b235-cd6ff9228c57\") " May 9 00:13:58.446106 kubelet[3210]: I0509 00:13:58.445742 3210 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8814db05-ebc5-44b5-b235-cd6ff9228c57-xtables-lock\") pod \"8814db05-ebc5-44b5-b235-cd6ff9228c57\" (UID: \"8814db05-ebc5-44b5-b235-cd6ff9228c57\") " May 9 00:13:58.446106 kubelet[3210]: I0509 00:13:58.445758 3210 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6nmh\" (UniqueName: \"kubernetes.io/projected/268f32c3-73ca-4c84-b7d9-51f4983af55d-kube-api-access-s6nmh\") pod \"268f32c3-73ca-4c84-b7d9-51f4983af55d\" (UID: \"268f32c3-73ca-4c84-b7d9-51f4983af55d\") " May 9 00:13:58.446106 kubelet[3210]: I0509 00:13:58.445777 3210 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcsbd\" 
(UniqueName: \"kubernetes.io/projected/8814db05-ebc5-44b5-b235-cd6ff9228c57-kube-api-access-pcsbd\") pod \"8814db05-ebc5-44b5-b235-cd6ff9228c57\" (UID: \"8814db05-ebc5-44b5-b235-cd6ff9228c57\") " May 9 00:13:58.446304 kubelet[3210]: I0509 00:13:58.445792 3210 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8814db05-ebc5-44b5-b235-cd6ff9228c57-lib-modules\") pod \"8814db05-ebc5-44b5-b235-cd6ff9228c57\" (UID: \"8814db05-ebc5-44b5-b235-cd6ff9228c57\") " May 9 00:13:58.446304 kubelet[3210]: I0509 00:13:58.445806 3210 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8814db05-ebc5-44b5-b235-cd6ff9228c57-host-proc-sys-net\") pod \"8814db05-ebc5-44b5-b235-cd6ff9228c57\" (UID: \"8814db05-ebc5-44b5-b235-cd6ff9228c57\") " May 9 00:13:58.447782 kubelet[3210]: I0509 00:13:58.445993 3210 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8814db05-ebc5-44b5-b235-cd6ff9228c57-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8814db05-ebc5-44b5-b235-cd6ff9228c57" (UID: "8814db05-ebc5-44b5-b235-cd6ff9228c57"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 9 00:13:58.447782 kubelet[3210]: I0509 00:13:58.445917 3210 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8814db05-ebc5-44b5-b235-cd6ff9228c57-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8814db05-ebc5-44b5-b235-cd6ff9228c57" (UID: "8814db05-ebc5-44b5-b235-cd6ff9228c57"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 9 00:13:58.458192 kubelet[3210]: I0509 00:13:58.458126 3210 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8814db05-ebc5-44b5-b235-cd6ff9228c57-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8814db05-ebc5-44b5-b235-cd6ff9228c57" (UID: "8814db05-ebc5-44b5-b235-cd6ff9228c57"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 9 00:13:58.458386 kubelet[3210]: I0509 00:13:58.458371 3210 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8814db05-ebc5-44b5-b235-cd6ff9228c57-hostproc" (OuterVolumeSpecName: "hostproc") pod "8814db05-ebc5-44b5-b235-cd6ff9228c57" (UID: "8814db05-ebc5-44b5-b235-cd6ff9228c57"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 9 00:13:58.458791 kubelet[3210]: I0509 00:13:58.458489 3210 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8814db05-ebc5-44b5-b235-cd6ff9228c57-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8814db05-ebc5-44b5-b235-cd6ff9228c57" (UID: "8814db05-ebc5-44b5-b235-cd6ff9228c57"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 9 00:13:58.461073 kubelet[3210]: I0509 00:13:58.461025 3210 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8814db05-ebc5-44b5-b235-cd6ff9228c57-cni-path" (OuterVolumeSpecName: "cni-path") pod "8814db05-ebc5-44b5-b235-cd6ff9228c57" (UID: "8814db05-ebc5-44b5-b235-cd6ff9228c57"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 9 00:13:58.461296 kubelet[3210]: I0509 00:13:58.461107 3210 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8814db05-ebc5-44b5-b235-cd6ff9228c57-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8814db05-ebc5-44b5-b235-cd6ff9228c57" (UID: "8814db05-ebc5-44b5-b235-cd6ff9228c57"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 9 00:13:58.462057 kubelet[3210]: I0509 00:13:58.462019 3210 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8814db05-ebc5-44b5-b235-cd6ff9228c57-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8814db05-ebc5-44b5-b235-cd6ff9228c57" (UID: "8814db05-ebc5-44b5-b235-cd6ff9228c57"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 9 00:13:58.468255 kubelet[3210]: I0509 00:13:58.467501 3210 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8814db05-ebc5-44b5-b235-cd6ff9228c57-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8814db05-ebc5-44b5-b235-cd6ff9228c57" (UID: "8814db05-ebc5-44b5-b235-cd6ff9228c57"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 9 00:13:58.468255 kubelet[3210]: I0509 00:13:58.467568 3210 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8814db05-ebc5-44b5-b235-cd6ff9228c57-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8814db05-ebc5-44b5-b235-cd6ff9228c57" (UID: "8814db05-ebc5-44b5-b235-cd6ff9228c57"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 9 00:13:58.469638 kubelet[3210]: I0509 00:13:58.469594 3210 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8814db05-ebc5-44b5-b235-cd6ff9228c57-kube-api-access-pcsbd" (OuterVolumeSpecName: "kube-api-access-pcsbd") pod "8814db05-ebc5-44b5-b235-cd6ff9228c57" (UID: "8814db05-ebc5-44b5-b235-cd6ff9228c57"). InnerVolumeSpecName "kube-api-access-pcsbd". PluginName "kubernetes.io/projected", VolumeGidValue "" May 9 00:13:58.470631 kubelet[3210]: I0509 00:13:58.469612 3210 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8814db05-ebc5-44b5-b235-cd6ff9228c57-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8814db05-ebc5-44b5-b235-cd6ff9228c57" (UID: "8814db05-ebc5-44b5-b235-cd6ff9228c57"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 9 00:13:58.470775 kubelet[3210]: I0509 00:13:58.469665 3210 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/268f32c3-73ca-4c84-b7d9-51f4983af55d-kube-api-access-s6nmh" (OuterVolumeSpecName: "kube-api-access-s6nmh") pod "268f32c3-73ca-4c84-b7d9-51f4983af55d" (UID: "268f32c3-73ca-4c84-b7d9-51f4983af55d"). InnerVolumeSpecName "kube-api-access-s6nmh". PluginName "kubernetes.io/projected", VolumeGidValue "" May 9 00:13:58.471085 kubelet[3210]: I0509 00:13:58.471058 3210 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8814db05-ebc5-44b5-b235-cd6ff9228c57-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8814db05-ebc5-44b5-b235-cd6ff9228c57" (UID: "8814db05-ebc5-44b5-b235-cd6ff9228c57"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 9 00:13:58.472063 kubelet[3210]: I0509 00:13:58.472033 3210 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/268f32c3-73ca-4c84-b7d9-51f4983af55d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "268f32c3-73ca-4c84-b7d9-51f4983af55d" (UID: "268f32c3-73ca-4c84-b7d9-51f4983af55d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 9 00:13:58.472597 kubelet[3210]: I0509 00:13:58.472564 3210 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8814db05-ebc5-44b5-b235-cd6ff9228c57-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8814db05-ebc5-44b5-b235-cd6ff9228c57" (UID: "8814db05-ebc5-44b5-b235-cd6ff9228c57"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 9 00:13:58.551515 kubelet[3210]: I0509 00:13:58.551451 3210 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-pcsbd\" (UniqueName: \"kubernetes.io/projected/8814db05-ebc5-44b5-b235-cd6ff9228c57-kube-api-access-pcsbd\") on node \"ip-172-31-17-17\" DevicePath \"\"" May 9 00:13:58.551515 kubelet[3210]: I0509 00:13:58.551511 3210 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8814db05-ebc5-44b5-b235-cd6ff9228c57-lib-modules\") on node \"ip-172-31-17-17\" DevicePath \"\"" May 9 00:13:58.551515 kubelet[3210]: I0509 00:13:58.551522 3210 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8814db05-ebc5-44b5-b235-cd6ff9228c57-host-proc-sys-net\") on node \"ip-172-31-17-17\" DevicePath \"\"" May 9 00:13:58.551961 kubelet[3210]: I0509 00:13:58.551534 3210 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8814db05-ebc5-44b5-b235-cd6ff9228c57-bpf-maps\") on 
node \"ip-172-31-17-17\" DevicePath \"\"" May 9 00:13:58.551961 kubelet[3210]: I0509 00:13:58.551543 3210 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8814db05-ebc5-44b5-b235-cd6ff9228c57-cilium-config-path\") on node \"ip-172-31-17-17\" DevicePath \"\"" May 9 00:13:58.551961 kubelet[3210]: I0509 00:13:58.551736 3210 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8814db05-ebc5-44b5-b235-cd6ff9228c57-cilium-run\") on node \"ip-172-31-17-17\" DevicePath \"\"" May 9 00:13:58.551961 kubelet[3210]: I0509 00:13:58.551745 3210 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8814db05-ebc5-44b5-b235-cd6ff9228c57-hostproc\") on node \"ip-172-31-17-17\" DevicePath \"\"" May 9 00:13:58.551961 kubelet[3210]: I0509 00:13:58.551753 3210 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8814db05-ebc5-44b5-b235-cd6ff9228c57-etc-cni-netd\") on node \"ip-172-31-17-17\" DevicePath \"\"" May 9 00:13:58.551961 kubelet[3210]: I0509 00:13:58.551762 3210 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8814db05-ebc5-44b5-b235-cd6ff9228c57-clustermesh-secrets\") on node \"ip-172-31-17-17\" DevicePath \"\"" May 9 00:13:58.551961 kubelet[3210]: I0509 00:13:58.551770 3210 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8814db05-ebc5-44b5-b235-cd6ff9228c57-host-proc-sys-kernel\") on node \"ip-172-31-17-17\" DevicePath \"\"" May 9 00:13:58.551961 kubelet[3210]: I0509 00:13:58.551778 3210 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/268f32c3-73ca-4c84-b7d9-51f4983af55d-cilium-config-path\") on node \"ip-172-31-17-17\" DevicePath \"\"" May 9 
00:13:58.552205 kubelet[3210]: I0509 00:13:58.551786 3210 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8814db05-ebc5-44b5-b235-cd6ff9228c57-cilium-cgroup\") on node \"ip-172-31-17-17\" DevicePath \"\"" May 9 00:13:58.552205 kubelet[3210]: I0509 00:13:58.551793 3210 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8814db05-ebc5-44b5-b235-cd6ff9228c57-hubble-tls\") on node \"ip-172-31-17-17\" DevicePath \"\"" May 9 00:13:58.552205 kubelet[3210]: I0509 00:13:58.551801 3210 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8814db05-ebc5-44b5-b235-cd6ff9228c57-cni-path\") on node \"ip-172-31-17-17\" DevicePath \"\"" May 9 00:13:58.552205 kubelet[3210]: I0509 00:13:58.551810 3210 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8814db05-ebc5-44b5-b235-cd6ff9228c57-xtables-lock\") on node \"ip-172-31-17-17\" DevicePath \"\"" May 9 00:13:58.552205 kubelet[3210]: I0509 00:13:58.551821 3210 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-s6nmh\" (UniqueName: \"kubernetes.io/projected/268f32c3-73ca-4c84-b7d9-51f4983af55d-kube-api-access-s6nmh\") on node \"ip-172-31-17-17\" DevicePath \"\"" May 9 00:13:58.777205 kubelet[3210]: I0509 00:13:58.776258 3210 scope.go:117] "RemoveContainer" containerID="50e51c8f853d6d907372de456fdbb78d871b7f6580f0038af8bb67e42ea619bd" May 9 00:13:58.784987 systemd[1]: Removed slice kubepods-besteffort-pod268f32c3_73ca_4c84_b7d9_51f4983af55d.slice - libcontainer container kubepods-besteffort-pod268f32c3_73ca_4c84_b7d9_51f4983af55d.slice. 
May 9 00:13:58.798203 containerd[1898]: time="2025-05-09T00:13:58.798120717Z" level=info msg="RemoveContainer for \"50e51c8f853d6d907372de456fdbb78d871b7f6580f0038af8bb67e42ea619bd\""
May 9 00:13:58.799576 systemd[1]: Removed slice kubepods-burstable-pod8814db05_ebc5_44b5_b235_cd6ff9228c57.slice - libcontainer container kubepods-burstable-pod8814db05_ebc5_44b5_b235_cd6ff9228c57.slice.
May 9 00:13:58.799861 systemd[1]: kubepods-burstable-pod8814db05_ebc5_44b5_b235_cd6ff9228c57.slice: Consumed 8.425s CPU time.
May 9 00:13:58.811306 containerd[1898]: time="2025-05-09T00:13:58.810949594Z" level=info msg="RemoveContainer for \"50e51c8f853d6d907372de456fdbb78d871b7f6580f0038af8bb67e42ea619bd\" returns successfully"
May 9 00:13:58.816863 kubelet[3210]: I0509 00:13:58.816735 3210 scope.go:117] "RemoveContainer" containerID="50e51c8f853d6d907372de456fdbb78d871b7f6580f0038af8bb67e42ea619bd"
May 9 00:13:58.817747 containerd[1898]: time="2025-05-09T00:13:58.817645958Z" level=error msg="ContainerStatus for \"50e51c8f853d6d907372de456fdbb78d871b7f6580f0038af8bb67e42ea619bd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"50e51c8f853d6d907372de456fdbb78d871b7f6580f0038af8bb67e42ea619bd\": not found"
May 9 00:13:58.837688 kubelet[3210]: E0509 00:13:58.837630 3210 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"50e51c8f853d6d907372de456fdbb78d871b7f6580f0038af8bb67e42ea619bd\": not found" containerID="50e51c8f853d6d907372de456fdbb78d871b7f6580f0038af8bb67e42ea619bd"
May 9 00:13:58.837840 kubelet[3210]: I0509 00:13:58.837702 3210 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"50e51c8f853d6d907372de456fdbb78d871b7f6580f0038af8bb67e42ea619bd"} err="failed to get container status \"50e51c8f853d6d907372de456fdbb78d871b7f6580f0038af8bb67e42ea619bd\": rpc error: code = NotFound desc = an error occurred when try to find container \"50e51c8f853d6d907372de456fdbb78d871b7f6580f0038af8bb67e42ea619bd\": not found"
May 9 00:13:58.837840 kubelet[3210]: I0509 00:13:58.837797 3210 scope.go:117] "RemoveContainer" containerID="68f320422bba5039f0a7bf1b05b37d85412c26e0709490c2b5b610124c1bdf21"
May 9 00:13:58.839195 containerd[1898]: time="2025-05-09T00:13:58.839106113Z" level=info msg="RemoveContainer for \"68f320422bba5039f0a7bf1b05b37d85412c26e0709490c2b5b610124c1bdf21\""
May 9 00:13:58.845519 containerd[1898]: time="2025-05-09T00:13:58.845480207Z" level=info msg="RemoveContainer for \"68f320422bba5039f0a7bf1b05b37d85412c26e0709490c2b5b610124c1bdf21\" returns successfully"
May 9 00:13:58.846000 kubelet[3210]: I0509 00:13:58.845969 3210 scope.go:117] "RemoveContainer" containerID="310ba07ee1f9df268bb3e5ff41b18df89396f816d970e0779ab2a1a13d9d28af"
May 9 00:13:58.847498 containerd[1898]: time="2025-05-09T00:13:58.847464581Z" level=info msg="RemoveContainer for \"310ba07ee1f9df268bb3e5ff41b18df89396f816d970e0779ab2a1a13d9d28af\""
May 9 00:13:58.852802 containerd[1898]: time="2025-05-09T00:13:58.852741620Z" level=info msg="RemoveContainer for \"310ba07ee1f9df268bb3e5ff41b18df89396f816d970e0779ab2a1a13d9d28af\" returns successfully"
May 9 00:13:58.852985 kubelet[3210]: I0509 00:13:58.852958 3210 scope.go:117] "RemoveContainer" containerID="03850702479a4fd5015d46f71c6494bfc03703078df95faf55d8098cb44ea9b0"
May 9 00:13:58.854403 containerd[1898]: time="2025-05-09T00:13:58.854364627Z" level=info msg="RemoveContainer for \"03850702479a4fd5015d46f71c6494bfc03703078df95faf55d8098cb44ea9b0\""
May 9 00:13:58.859775 containerd[1898]: time="2025-05-09T00:13:58.859726410Z" level=info msg="RemoveContainer for \"03850702479a4fd5015d46f71c6494bfc03703078df95faf55d8098cb44ea9b0\" returns successfully"
May 9 00:13:58.860062 kubelet[3210]: I0509 00:13:58.860036 3210 scope.go:117] "RemoveContainer" containerID="1f8cb6b94e8c18ec5724db87dd781a9f680170b3a3e3a837e31b218b0ae78a57"
May 9 00:13:58.861196 containerd[1898]: time="2025-05-09T00:13:58.861135005Z" level=info msg="RemoveContainer for \"1f8cb6b94e8c18ec5724db87dd781a9f680170b3a3e3a837e31b218b0ae78a57\""
May 9 00:13:58.867025 containerd[1898]: time="2025-05-09T00:13:58.866978527Z" level=info msg="RemoveContainer for \"1f8cb6b94e8c18ec5724db87dd781a9f680170b3a3e3a837e31b218b0ae78a57\" returns successfully"
May 9 00:13:58.867294 kubelet[3210]: I0509 00:13:58.867233 3210 scope.go:117] "RemoveContainer" containerID="da95c5bb53fdf9d74299750a789a0681b1c92fb6bed5b222ad0abb148e029efe"
May 9 00:13:58.868455 containerd[1898]: time="2025-05-09T00:13:58.868402968Z" level=info msg="RemoveContainer for \"da95c5bb53fdf9d74299750a789a0681b1c92fb6bed5b222ad0abb148e029efe\""
May 9 00:13:58.873349 containerd[1898]: time="2025-05-09T00:13:58.873310613Z" level=info msg="RemoveContainer for \"da95c5bb53fdf9d74299750a789a0681b1c92fb6bed5b222ad0abb148e029efe\" returns successfully"
May 9 00:13:58.873674 kubelet[3210]: I0509 00:13:58.873550 3210 scope.go:117] "RemoveContainer" containerID="68f320422bba5039f0a7bf1b05b37d85412c26e0709490c2b5b610124c1bdf21"
May 9 00:13:58.873830 containerd[1898]: time="2025-05-09T00:13:58.873794863Z" level=error msg="ContainerStatus for \"68f320422bba5039f0a7bf1b05b37d85412c26e0709490c2b5b610124c1bdf21\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"68f320422bba5039f0a7bf1b05b37d85412c26e0709490c2b5b610124c1bdf21\": not found"
May 9 00:13:58.873979 kubelet[3210]: E0509 00:13:58.873953 3210 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"68f320422bba5039f0a7bf1b05b37d85412c26e0709490c2b5b610124c1bdf21\": not found" containerID="68f320422bba5039f0a7bf1b05b37d85412c26e0709490c2b5b610124c1bdf21"
May 9 00:13:58.874031 kubelet[3210]: I0509 00:13:58.873979 3210 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"68f320422bba5039f0a7bf1b05b37d85412c26e0709490c2b5b610124c1bdf21"} err="failed to get container status \"68f320422bba5039f0a7bf1b05b37d85412c26e0709490c2b5b610124c1bdf21\": rpc error: code = NotFound desc = an error occurred when try to find container \"68f320422bba5039f0a7bf1b05b37d85412c26e0709490c2b5b610124c1bdf21\": not found"
May 9 00:13:58.874031 kubelet[3210]: I0509 00:13:58.873999 3210 scope.go:117] "RemoveContainer" containerID="310ba07ee1f9df268bb3e5ff41b18df89396f816d970e0779ab2a1a13d9d28af"
May 9 00:13:58.874358 containerd[1898]: time="2025-05-09T00:13:58.874210808Z" level=error msg="ContainerStatus for \"310ba07ee1f9df268bb3e5ff41b18df89396f816d970e0779ab2a1a13d9d28af\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"310ba07ee1f9df268bb3e5ff41b18df89396f816d970e0779ab2a1a13d9d28af\": not found"
May 9 00:13:58.874747 kubelet[3210]: E0509 00:13:58.874448 3210 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"310ba07ee1f9df268bb3e5ff41b18df89396f816d970e0779ab2a1a13d9d28af\": not found" containerID="310ba07ee1f9df268bb3e5ff41b18df89396f816d970e0779ab2a1a13d9d28af"
May 9 00:13:58.874747 kubelet[3210]: I0509 00:13:58.874470 3210 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"310ba07ee1f9df268bb3e5ff41b18df89396f816d970e0779ab2a1a13d9d28af"} err="failed to get container status \"310ba07ee1f9df268bb3e5ff41b18df89396f816d970e0779ab2a1a13d9d28af\": rpc error: code = NotFound desc = an error occurred when try to find container \"310ba07ee1f9df268bb3e5ff41b18df89396f816d970e0779ab2a1a13d9d28af\": not found"
May 9 00:13:58.874747 kubelet[3210]: I0509 00:13:58.874486 3210 scope.go:117] "RemoveContainer" containerID="03850702479a4fd5015d46f71c6494bfc03703078df95faf55d8098cb44ea9b0"
May 9 00:13:58.874890 containerd[1898]: time="2025-05-09T00:13:58.874650133Z" level=error msg="ContainerStatus for \"03850702479a4fd5015d46f71c6494bfc03703078df95faf55d8098cb44ea9b0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"03850702479a4fd5015d46f71c6494bfc03703078df95faf55d8098cb44ea9b0\": not found"
May 9 00:13:58.874951 kubelet[3210]: E0509 00:13:58.874896 3210 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"03850702479a4fd5015d46f71c6494bfc03703078df95faf55d8098cb44ea9b0\": not found" containerID="03850702479a4fd5015d46f71c6494bfc03703078df95faf55d8098cb44ea9b0"
May 9 00:13:58.874984 kubelet[3210]: I0509 00:13:58.874949 3210 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"03850702479a4fd5015d46f71c6494bfc03703078df95faf55d8098cb44ea9b0"} err="failed to get container status \"03850702479a4fd5015d46f71c6494bfc03703078df95faf55d8098cb44ea9b0\": rpc error: code = NotFound desc = an error occurred when try to find container \"03850702479a4fd5015d46f71c6494bfc03703078df95faf55d8098cb44ea9b0\": not found"
May 9 00:13:58.874984 kubelet[3210]: I0509 00:13:58.874965 3210 scope.go:117] "RemoveContainer" containerID="1f8cb6b94e8c18ec5724db87dd781a9f680170b3a3e3a837e31b218b0ae78a57"
May 9 00:13:58.875343 containerd[1898]: time="2025-05-09T00:13:58.875290240Z" level=error msg="ContainerStatus for \"1f8cb6b94e8c18ec5724db87dd781a9f680170b3a3e3a837e31b218b0ae78a57\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1f8cb6b94e8c18ec5724db87dd781a9f680170b3a3e3a837e31b218b0ae78a57\": not found"
May 9 00:13:58.875811 containerd[1898]: time="2025-05-09T00:13:58.875647775Z" level=error msg="ContainerStatus for \"da95c5bb53fdf9d74299750a789a0681b1c92fb6bed5b222ad0abb148e029efe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"da95c5bb53fdf9d74299750a789a0681b1c92fb6bed5b222ad0abb148e029efe\": not found"
May 9 00:13:58.875852 kubelet[3210]: E0509 00:13:58.875406 3210 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1f8cb6b94e8c18ec5724db87dd781a9f680170b3a3e3a837e31b218b0ae78a57\": not found" containerID="1f8cb6b94e8c18ec5724db87dd781a9f680170b3a3e3a837e31b218b0ae78a57"
May 9 00:13:58.875852 kubelet[3210]: I0509 00:13:58.875424 3210 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1f8cb6b94e8c18ec5724db87dd781a9f680170b3a3e3a837e31b218b0ae78a57"} err="failed to get container status \"1f8cb6b94e8c18ec5724db87dd781a9f680170b3a3e3a837e31b218b0ae78a57\": rpc error: code = NotFound desc = an error occurred when try to find container \"1f8cb6b94e8c18ec5724db87dd781a9f680170b3a3e3a837e31b218b0ae78a57\": not found"
May 9 00:13:58.875852 kubelet[3210]: I0509 00:13:58.875438 3210 scope.go:117] "RemoveContainer" containerID="da95c5bb53fdf9d74299750a789a0681b1c92fb6bed5b222ad0abb148e029efe"
May 9 00:13:58.875852 kubelet[3210]: E0509 00:13:58.875771 3210 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"da95c5bb53fdf9d74299750a789a0681b1c92fb6bed5b222ad0abb148e029efe\": not found" containerID="da95c5bb53fdf9d74299750a789a0681b1c92fb6bed5b222ad0abb148e029efe"
May 9 00:13:58.875852 kubelet[3210]: I0509 00:13:58.875804 3210 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"da95c5bb53fdf9d74299750a789a0681b1c92fb6bed5b222ad0abb148e029efe"} err="failed to get container status \"da95c5bb53fdf9d74299750a789a0681b1c92fb6bed5b222ad0abb148e029efe\": rpc error: code = NotFound desc = an error occurred when try to find container \"da95c5bb53fdf9d74299750a789a0681b1c92fb6bed5b222ad0abb148e029efe\": not found"
May 9 00:13:59.057854 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b275d77424b117d30426eb52189e44f654c594dc81e8fe21aa7772445b8152df-rootfs.mount: Deactivated successfully.
May 9 00:13:59.058217 systemd[1]: var-lib-kubelet-pods-268f32c3\x2d73ca\x2d4c84\x2db7d9\x2d51f4983af55d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds6nmh.mount: Deactivated successfully.
May 9 00:13:59.058359 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce60ae90a2efa3cc12a3db0147cc3a54ad38ea16e815027868572321dce84bce-rootfs.mount: Deactivated successfully.
May 9 00:13:59.058457 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ce60ae90a2efa3cc12a3db0147cc3a54ad38ea16e815027868572321dce84bce-shm.mount: Deactivated successfully.
May 9 00:13:59.058548 systemd[1]: var-lib-kubelet-pods-8814db05\x2debc5\x2d44b5\x2db235\x2dcd6ff9228c57-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpcsbd.mount: Deactivated successfully.
May 9 00:13:59.058635 systemd[1]: var-lib-kubelet-pods-8814db05\x2debc5\x2d44b5\x2db235\x2dcd6ff9228c57-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 9 00:13:59.058728 systemd[1]: var-lib-kubelet-pods-8814db05\x2debc5\x2d44b5\x2db235\x2dcd6ff9228c57-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 9 00:13:59.449407 kubelet[3210]: I0509 00:13:59.449362 3210 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="268f32c3-73ca-4c84-b7d9-51f4983af55d" path="/var/lib/kubelet/pods/268f32c3-73ca-4c84-b7d9-51f4983af55d/volumes"
May 9 00:13:59.449798 kubelet[3210]: I0509 00:13:59.449779 3210 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8814db05-ebc5-44b5-b235-cd6ff9228c57" path="/var/lib/kubelet/pods/8814db05-ebc5-44b5-b235-cd6ff9228c57/volumes"
May 9 00:13:59.539491 kubelet[3210]: E0509 00:13:59.539366 3210 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 9 00:13:59.935183 sshd[4895]: Connection closed by 139.178.68.195 port 44682
May 9 00:13:59.936354 sshd-session[4893]: pam_unix(sshd:session): session closed for user core
May 9 00:13:59.942236 systemd[1]: sshd@22-172.31.17.17:22-139.178.68.195:44682.service: Deactivated successfully.
May 9 00:13:59.944626 systemd[1]: session-23.scope: Deactivated successfully.
May 9 00:13:59.945477 systemd-logind[1882]: Session 23 logged out. Waiting for processes to exit.
May 9 00:13:59.946846 systemd-logind[1882]: Removed session 23.
May 9 00:13:59.974523 systemd[1]: Started sshd@23-172.31.17.17:22-139.178.68.195:44696.service - OpenSSH per-connection server daemon (139.178.68.195:44696).
May 9 00:14:00.240941 sshd[5055]: Accepted publickey for core from 139.178.68.195 port 44696 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM
May 9 00:14:00.244650 sshd-session[5055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:14:00.253296 systemd-logind[1882]: New session 24 of user core.
May 9 00:14:00.261505 systemd[1]: Started session-24.scope - Session 24 of User core.
May 9 00:14:00.297998 ntpd[1874]: Deleting interface #12 lxc_health, fe80::f8bf:7fff:fe8c:74a7%8#123, interface stats: received=0, sent=0, dropped=0, active_time=43 secs
May 9 00:14:00.298453 ntpd[1874]: 9 May 00:14:00 ntpd[1874]: Deleting interface #12 lxc_health, fe80::f8bf:7fff:fe8c:74a7%8#123, interface stats: received=0, sent=0, dropped=0, active_time=43 secs
May 9 00:14:01.037636 sshd[5057]: Connection closed by 139.178.68.195 port 44696
May 9 00:14:01.039178 sshd-session[5055]: pam_unix(sshd:session): session closed for user core
May 9 00:14:01.046640 systemd[1]: sshd@23-172.31.17.17:22-139.178.68.195:44696.service: Deactivated successfully.
May 9 00:14:01.052540 systemd[1]: session-24.scope: Deactivated successfully.
May 9 00:14:01.055841 systemd-logind[1882]: Session 24 logged out. Waiting for processes to exit.
May 9 00:14:01.058241 kubelet[3210]: I0509 00:14:01.058086 3210 topology_manager.go:215] "Topology Admit Handler" podUID="12e5ec88-3478-4780-b3a4-a354f0529d7f" podNamespace="kube-system" podName="cilium-8qfv4"
May 9 00:14:01.058620 kubelet[3210]: E0509 00:14:01.058280 3210 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8814db05-ebc5-44b5-b235-cd6ff9228c57" containerName="mount-bpf-fs"
May 9 00:14:01.058620 kubelet[3210]: E0509 00:14:01.058301 3210 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8814db05-ebc5-44b5-b235-cd6ff9228c57" containerName="clean-cilium-state"
May 9 00:14:01.058620 kubelet[3210]: E0509 00:14:01.058313 3210 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="268f32c3-73ca-4c84-b7d9-51f4983af55d" containerName="cilium-operator"
May 9 00:14:01.058620 kubelet[3210]: E0509 00:14:01.058331 3210 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8814db05-ebc5-44b5-b235-cd6ff9228c57" containerName="mount-cgroup"
May 9 00:14:01.058620 kubelet[3210]: E0509 00:14:01.058340 3210 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8814db05-ebc5-44b5-b235-cd6ff9228c57" containerName="apply-sysctl-overwrites"
May 9 00:14:01.058620 kubelet[3210]: E0509 00:14:01.058350 3210 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8814db05-ebc5-44b5-b235-cd6ff9228c57" containerName="cilium-agent"
May 9 00:14:01.058620 kubelet[3210]: I0509 00:14:01.058393 3210 memory_manager.go:354] "RemoveStaleState removing state" podUID="8814db05-ebc5-44b5-b235-cd6ff9228c57" containerName="cilium-agent"
May 9 00:14:01.058620 kubelet[3210]: I0509 00:14:01.058402 3210 memory_manager.go:354] "RemoveStaleState removing state" podUID="268f32c3-73ca-4c84-b7d9-51f4983af55d" containerName="cilium-operator"
May 9 00:14:01.086306 systemd[1]: Started sshd@24-172.31.17.17:22-139.178.68.195:44712.service - OpenSSH per-connection server daemon (139.178.68.195:44712).
May 9 00:14:01.089621 systemd-logind[1882]: Removed session 24.
May 9 00:14:01.112670 systemd[1]: Created slice kubepods-burstable-pod12e5ec88_3478_4780_b3a4_a354f0529d7f.slice - libcontainer container kubepods-burstable-pod12e5ec88_3478_4780_b3a4_a354f0529d7f.slice.
May 9 00:14:01.198610 kubelet[3210]: I0509 00:14:01.198554 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/12e5ec88-3478-4780-b3a4-a354f0529d7f-cni-path\") pod \"cilium-8qfv4\" (UID: \"12e5ec88-3478-4780-b3a4-a354f0529d7f\") " pod="kube-system/cilium-8qfv4"
May 9 00:14:01.199022 kubelet[3210]: I0509 00:14:01.198994 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/12e5ec88-3478-4780-b3a4-a354f0529d7f-cilium-config-path\") pod \"cilium-8qfv4\" (UID: \"12e5ec88-3478-4780-b3a4-a354f0529d7f\") " pod="kube-system/cilium-8qfv4"
May 9 00:14:01.199150 kubelet[3210]: I0509 00:14:01.199132 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/12e5ec88-3478-4780-b3a4-a354f0529d7f-etc-cni-netd\") pod \"cilium-8qfv4\" (UID: \"12e5ec88-3478-4780-b3a4-a354f0529d7f\") " pod="kube-system/cilium-8qfv4"
May 9 00:14:01.199949 kubelet[3210]: I0509 00:14:01.199923 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/12e5ec88-3478-4780-b3a4-a354f0529d7f-xtables-lock\") pod \"cilium-8qfv4\" (UID: \"12e5ec88-3478-4780-b3a4-a354f0529d7f\") " pod="kube-system/cilium-8qfv4"
May 9 00:14:01.200086 kubelet[3210]: I0509 00:14:01.200068 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/12e5ec88-3478-4780-b3a4-a354f0529d7f-clustermesh-secrets\") pod \"cilium-8qfv4\" (UID: \"12e5ec88-3478-4780-b3a4-a354f0529d7f\") " pod="kube-system/cilium-8qfv4"
May 9 00:14:01.200211 kubelet[3210]: I0509 00:14:01.200194 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/12e5ec88-3478-4780-b3a4-a354f0529d7f-cilium-run\") pod \"cilium-8qfv4\" (UID: \"12e5ec88-3478-4780-b3a4-a354f0529d7f\") " pod="kube-system/cilium-8qfv4"
May 9 00:14:01.200302 kubelet[3210]: I0509 00:14:01.200287 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/12e5ec88-3478-4780-b3a4-a354f0529d7f-cilium-cgroup\") pod \"cilium-8qfv4\" (UID: \"12e5ec88-3478-4780-b3a4-a354f0529d7f\") " pod="kube-system/cilium-8qfv4"
May 9 00:14:01.200744 kubelet[3210]: I0509 00:14:01.200457 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12e5ec88-3478-4780-b3a4-a354f0529d7f-lib-modules\") pod \"cilium-8qfv4\" (UID: \"12e5ec88-3478-4780-b3a4-a354f0529d7f\") " pod="kube-system/cilium-8qfv4"
May 9 00:14:01.200744 kubelet[3210]: I0509 00:14:01.200483 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44khw\" (UniqueName: \"kubernetes.io/projected/12e5ec88-3478-4780-b3a4-a354f0529d7f-kube-api-access-44khw\") pod \"cilium-8qfv4\" (UID: \"12e5ec88-3478-4780-b3a4-a354f0529d7f\") " pod="kube-system/cilium-8qfv4"
May 9 00:14:01.200744 kubelet[3210]: I0509 00:14:01.200502 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/12e5ec88-3478-4780-b3a4-a354f0529d7f-hubble-tls\") pod \"cilium-8qfv4\" (UID: \"12e5ec88-3478-4780-b3a4-a354f0529d7f\") " pod="kube-system/cilium-8qfv4"
May 9 00:14:01.200744 kubelet[3210]: I0509 00:14:01.200524 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/12e5ec88-3478-4780-b3a4-a354f0529d7f-hostproc\") pod \"cilium-8qfv4\" (UID: \"12e5ec88-3478-4780-b3a4-a354f0529d7f\") " pod="kube-system/cilium-8qfv4"
May 9 00:14:01.200744 kubelet[3210]: I0509 00:14:01.200547 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/12e5ec88-3478-4780-b3a4-a354f0529d7f-cilium-ipsec-secrets\") pod \"cilium-8qfv4\" (UID: \"12e5ec88-3478-4780-b3a4-a354f0529d7f\") " pod="kube-system/cilium-8qfv4"
May 9 00:14:01.200744 kubelet[3210]: I0509 00:14:01.200564 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/12e5ec88-3478-4780-b3a4-a354f0529d7f-host-proc-sys-kernel\") pod \"cilium-8qfv4\" (UID: \"12e5ec88-3478-4780-b3a4-a354f0529d7f\") " pod="kube-system/cilium-8qfv4"
May 9 00:14:01.200951 kubelet[3210]: I0509 00:14:01.200585 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/12e5ec88-3478-4780-b3a4-a354f0529d7f-bpf-maps\") pod \"cilium-8qfv4\" (UID: \"12e5ec88-3478-4780-b3a4-a354f0529d7f\") " pod="kube-system/cilium-8qfv4"
May 9 00:14:01.200951 kubelet[3210]: I0509 00:14:01.200607 3210 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/12e5ec88-3478-4780-b3a4-a354f0529d7f-host-proc-sys-net\") pod \"cilium-8qfv4\" (UID: \"12e5ec88-3478-4780-b3a4-a354f0529d7f\") " pod="kube-system/cilium-8qfv4"
May 9 00:14:01.267779 sshd[5067]: Accepted publickey for core from 139.178.68.195 port 44712 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM
May 9 00:14:01.273148 sshd-session[5067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:14:01.292789 systemd-logind[1882]: New session 25 of user core.
May 9 00:14:01.309371 systemd[1]: Started session-25.scope - Session 25 of User core.
May 9 00:14:01.420431 containerd[1898]: time="2025-05-09T00:14:01.420234349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8qfv4,Uid:12e5ec88-3478-4780-b3a4-a354f0529d7f,Namespace:kube-system,Attempt:0,}"
May 9 00:14:01.466196 sshd[5071]: Connection closed by 139.178.68.195 port 44712
May 9 00:14:01.466673 sshd-session[5067]: pam_unix(sshd:session): session closed for user core
May 9 00:14:01.525676 systemd[1]: sshd@24-172.31.17.17:22-139.178.68.195:44712.service: Deactivated successfully.
May 9 00:14:01.537050 systemd[1]: session-25.scope: Deactivated successfully.
May 9 00:14:01.578975 systemd-logind[1882]: Session 25 logged out. Waiting for processes to exit.
May 9 00:14:01.596292 systemd-logind[1882]: Removed session 25.
May 9 00:14:01.614322 systemd[1]: Started sshd@25-172.31.17.17:22-139.178.68.195:44716.service - OpenSSH per-connection server daemon (139.178.68.195:44716).
May 9 00:14:01.631186 containerd[1898]: time="2025-05-09T00:14:01.629504487Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 9 00:14:01.631186 containerd[1898]: time="2025-05-09T00:14:01.629585402Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 9 00:14:01.631186 containerd[1898]: time="2025-05-09T00:14:01.629847885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:14:01.631186 containerd[1898]: time="2025-05-09T00:14:01.629976335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:14:01.681457 systemd[1]: Started cri-containerd-8448c2997995314c4075584eef39ce03f6114dc0a9a377550c8c5241984ac11c.scope - libcontainer container 8448c2997995314c4075584eef39ce03f6114dc0a9a377550c8c5241984ac11c.
May 9 00:14:01.727836 containerd[1898]: time="2025-05-09T00:14:01.726983874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8qfv4,Uid:12e5ec88-3478-4780-b3a4-a354f0529d7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"8448c2997995314c4075584eef39ce03f6114dc0a9a377550c8c5241984ac11c\""
May 9 00:14:01.739570 containerd[1898]: time="2025-05-09T00:14:01.739136229Z" level=info msg="CreateContainer within sandbox \"8448c2997995314c4075584eef39ce03f6114dc0a9a377550c8c5241984ac11c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 9 00:14:01.772865 containerd[1898]: time="2025-05-09T00:14:01.772705939Z" level=info msg="CreateContainer within sandbox \"8448c2997995314c4075584eef39ce03f6114dc0a9a377550c8c5241984ac11c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0069f2105b498a9fa0f3327c0f3609cd3530487bedb8368e7548a8fcb874e388\""
May 9 00:14:01.774962 containerd[1898]: time="2025-05-09T00:14:01.773814263Z" level=info msg="StartContainer for \"0069f2105b498a9fa0f3327c0f3609cd3530487bedb8368e7548a8fcb874e388\""
May 9 00:14:01.785144 kubelet[3210]: I0509 00:14:01.785066 3210 setters.go:580] "Node became not ready" node="ip-172-31-17-17" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-09T00:14:01Z","lastTransitionTime":"2025-05-09T00:14:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 9 00:14:01.893938 sshd[5089]: Accepted publickey for core from 139.178.68.195 port 44716 ssh2: RSA SHA256:y3UpFgxG5inTLymYisnz3SN0SMgtyCTuF41rOie/RIM
May 9 00:14:01.918605 sshd-session[5089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:14:02.028198 systemd-logind[1882]: New session 26 of user core.
May 9 00:14:02.045543 systemd[1]: Started session-26.scope - Session 26 of User core.
May 9 00:14:02.298326 systemd[1]: Started cri-containerd-0069f2105b498a9fa0f3327c0f3609cd3530487bedb8368e7548a8fcb874e388.scope - libcontainer container 0069f2105b498a9fa0f3327c0f3609cd3530487bedb8368e7548a8fcb874e388.
May 9 00:14:02.432851 containerd[1898]: time="2025-05-09T00:14:02.432796614Z" level=info msg="StartContainer for \"0069f2105b498a9fa0f3327c0f3609cd3530487bedb8368e7548a8fcb874e388\" returns successfully"
May 9 00:14:02.473095 systemd[1]: cri-containerd-0069f2105b498a9fa0f3327c0f3609cd3530487bedb8368e7548a8fcb874e388.scope: Deactivated successfully.
May 9 00:14:02.536962 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0069f2105b498a9fa0f3327c0f3609cd3530487bedb8368e7548a8fcb874e388-rootfs.mount: Deactivated successfully.
May 9 00:14:02.570788 containerd[1898]: time="2025-05-09T00:14:02.569937940Z" level=info msg="shim disconnected" id=0069f2105b498a9fa0f3327c0f3609cd3530487bedb8368e7548a8fcb874e388 namespace=k8s.io
May 9 00:14:02.570788 containerd[1898]: time="2025-05-09T00:14:02.570000215Z" level=warning msg="cleaning up after shim disconnected" id=0069f2105b498a9fa0f3327c0f3609cd3530487bedb8368e7548a8fcb874e388 namespace=k8s.io
May 9 00:14:02.570788 containerd[1898]: time="2025-05-09T00:14:02.570011787Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:14:02.855340 containerd[1898]: time="2025-05-09T00:14:02.853285023Z" level=info msg="CreateContainer within sandbox \"8448c2997995314c4075584eef39ce03f6114dc0a9a377550c8c5241984ac11c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 9 00:14:02.899524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3078038884.mount: Deactivated successfully.
May 9 00:14:02.905441 containerd[1898]: time="2025-05-09T00:14:02.904434390Z" level=info msg="CreateContainer within sandbox \"8448c2997995314c4075584eef39ce03f6114dc0a9a377550c8c5241984ac11c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"53f40c5003ec32356e417d59d63c14c78c388f2d55a0fd3f8489fe01a345d596\""
May 9 00:14:02.907008 containerd[1898]: time="2025-05-09T00:14:02.906950788Z" level=info msg="StartContainer for \"53f40c5003ec32356e417d59d63c14c78c388f2d55a0fd3f8489fe01a345d596\""
May 9 00:14:02.954651 systemd[1]: Started cri-containerd-53f40c5003ec32356e417d59d63c14c78c388f2d55a0fd3f8489fe01a345d596.scope - libcontainer container 53f40c5003ec32356e417d59d63c14c78c388f2d55a0fd3f8489fe01a345d596.
May 9 00:14:03.018223 containerd[1898]: time="2025-05-09T00:14:03.018058978Z" level=info msg="StartContainer for \"53f40c5003ec32356e417d59d63c14c78c388f2d55a0fd3f8489fe01a345d596\" returns successfully"
May 9 00:14:03.273898 systemd[1]: cri-containerd-53f40c5003ec32356e417d59d63c14c78c388f2d55a0fd3f8489fe01a345d596.scope: Deactivated successfully.
May 9 00:14:03.313090 containerd[1898]: time="2025-05-09T00:14:03.313007763Z" level=info msg="shim disconnected" id=53f40c5003ec32356e417d59d63c14c78c388f2d55a0fd3f8489fe01a345d596 namespace=k8s.io
May 9 00:14:03.313090 containerd[1898]: time="2025-05-09T00:14:03.313076509Z" level=warning msg="cleaning up after shim disconnected" id=53f40c5003ec32356e417d59d63c14c78c388f2d55a0fd3f8489fe01a345d596 namespace=k8s.io
May 9 00:14:03.313090 containerd[1898]: time="2025-05-09T00:14:03.313090205Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:14:03.315632 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53f40c5003ec32356e417d59d63c14c78c388f2d55a0fd3f8489fe01a345d596-rootfs.mount: Deactivated successfully.
May 9 00:14:03.847481 containerd[1898]: time="2025-05-09T00:14:03.847425497Z" level=info msg="CreateContainer within sandbox \"8448c2997995314c4075584eef39ce03f6114dc0a9a377550c8c5241984ac11c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 9 00:14:03.878409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3118352609.mount: Deactivated successfully.
May 9 00:14:03.888831 containerd[1898]: time="2025-05-09T00:14:03.888763329Z" level=info msg="CreateContainer within sandbox \"8448c2997995314c4075584eef39ce03f6114dc0a9a377550c8c5241984ac11c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"57c9a40fff8a239e8bd31d3e2e9fcbd063f43d87e796077edaa25aad46a1b696\""
May 9 00:14:03.890531 containerd[1898]: time="2025-05-09T00:14:03.889305925Z" level=info msg="StartContainer for \"57c9a40fff8a239e8bd31d3e2e9fcbd063f43d87e796077edaa25aad46a1b696\""
May 9 00:14:03.929492 systemd[1]: Started cri-containerd-57c9a40fff8a239e8bd31d3e2e9fcbd063f43d87e796077edaa25aad46a1b696.scope - libcontainer container 57c9a40fff8a239e8bd31d3e2e9fcbd063f43d87e796077edaa25aad46a1b696.
May 9 00:14:03.968539 containerd[1898]: time="2025-05-09T00:14:03.968343120Z" level=info msg="StartContainer for \"57c9a40fff8a239e8bd31d3e2e9fcbd063f43d87e796077edaa25aad46a1b696\" returns successfully"
May 9 00:14:03.978555 systemd[1]: cri-containerd-57c9a40fff8a239e8bd31d3e2e9fcbd063f43d87e796077edaa25aad46a1b696.scope: Deactivated successfully.
May 9 00:14:04.026051 containerd[1898]: time="2025-05-09T00:14:04.025985975Z" level=info msg="shim disconnected" id=57c9a40fff8a239e8bd31d3e2e9fcbd063f43d87e796077edaa25aad46a1b696 namespace=k8s.io
May 9 00:14:04.026385 containerd[1898]: time="2025-05-09T00:14:04.026352738Z" level=warning msg="cleaning up after shim disconnected" id=57c9a40fff8a239e8bd31d3e2e9fcbd063f43d87e796077edaa25aad46a1b696 namespace=k8s.io
May 9 00:14:04.026385 containerd[1898]: time="2025-05-09T00:14:04.026377863Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:14:04.313450 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57c9a40fff8a239e8bd31d3e2e9fcbd063f43d87e796077edaa25aad46a1b696-rootfs.mount: Deactivated successfully.
May 9 00:14:04.540918 kubelet[3210]: E0509 00:14:04.540804 3210 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 9 00:14:04.852985 containerd[1898]: time="2025-05-09T00:14:04.852946747Z" level=info msg="CreateContainer within sandbox \"8448c2997995314c4075584eef39ce03f6114dc0a9a377550c8c5241984ac11c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 9 00:14:04.881720 containerd[1898]: time="2025-05-09T00:14:04.881672330Z" level=info msg="CreateContainer within sandbox \"8448c2997995314c4075584eef39ce03f6114dc0a9a377550c8c5241984ac11c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"868b5865e1b3d20a79c702371c6b542e2252f9e0efa44e343fdf839d6d04a529\""
May 9 00:14:04.882339 containerd[1898]: time="2025-05-09T00:14:04.882112334Z" level=info msg="StartContainer for \"868b5865e1b3d20a79c702371c6b542e2252f9e0efa44e343fdf839d6d04a529\""
May 9 00:14:04.919378 systemd[1]: Started cri-containerd-868b5865e1b3d20a79c702371c6b542e2252f9e0efa44e343fdf839d6d04a529.scope - libcontainer container 868b5865e1b3d20a79c702371c6b542e2252f9e0efa44e343fdf839d6d04a529.
May 9 00:14:04.950612 systemd[1]: cri-containerd-868b5865e1b3d20a79c702371c6b542e2252f9e0efa44e343fdf839d6d04a529.scope: Deactivated successfully.
May 9 00:14:04.957973 containerd[1898]: time="2025-05-09T00:14:04.957888064Z" level=info msg="StartContainer for \"868b5865e1b3d20a79c702371c6b542e2252f9e0efa44e343fdf839d6d04a529\" returns successfully"
May 9 00:14:04.968518 containerd[1898]: time="2025-05-09T00:14:04.956342164Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12e5ec88_3478_4780_b3a4_a354f0529d7f.slice/cri-containerd-868b5865e1b3d20a79c702371c6b542e2252f9e0efa44e343fdf839d6d04a529.scope/memory.events\": no such file or directory"
May 9 00:14:04.987112 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-868b5865e1b3d20a79c702371c6b542e2252f9e0efa44e343fdf839d6d04a529-rootfs.mount: Deactivated successfully.
May 9 00:14:05.002151 containerd[1898]: time="2025-05-09T00:14:05.002072833Z" level=info msg="shim disconnected" id=868b5865e1b3d20a79c702371c6b542e2252f9e0efa44e343fdf839d6d04a529 namespace=k8s.io
May 9 00:14:05.002151 containerd[1898]: time="2025-05-09T00:14:05.002130591Z" level=warning msg="cleaning up after shim disconnected" id=868b5865e1b3d20a79c702371c6b542e2252f9e0efa44e343fdf839d6d04a529 namespace=k8s.io
May 9 00:14:05.002151 containerd[1898]: time="2025-05-09T00:14:05.002142664Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:14:05.857696 containerd[1898]: time="2025-05-09T00:14:05.857648458Z" level=info msg="CreateContainer within sandbox \"8448c2997995314c4075584eef39ce03f6114dc0a9a377550c8c5241984ac11c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 9 00:14:05.887884 containerd[1898]: time="2025-05-09T00:14:05.887840419Z" level=info msg="CreateContainer within sandbox \"8448c2997995314c4075584eef39ce03f6114dc0a9a377550c8c5241984ac11c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c269ffc54e697f5afae9b00e80a8b4fc209c9def7d71cd483c11801c9345547d\""
May 9 00:14:05.888775 containerd[1898]: time="2025-05-09T00:14:05.888735827Z" level=info msg="StartContainer for \"c269ffc54e697f5afae9b00e80a8b4fc209c9def7d71cd483c11801c9345547d\""
May 9 00:14:05.931518 systemd[1]: Started cri-containerd-c269ffc54e697f5afae9b00e80a8b4fc209c9def7d71cd483c11801c9345547d.scope - libcontainer container c269ffc54e697f5afae9b00e80a8b4fc209c9def7d71cd483c11801c9345547d.
May 9 00:14:05.972371 containerd[1898]: time="2025-05-09T00:14:05.972319206Z" level=info msg="StartContainer for \"c269ffc54e697f5afae9b00e80a8b4fc209c9def7d71cd483c11801c9345547d\" returns successfully"
May 9 00:14:06.669371 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 9 00:14:09.887306 systemd-networkd[1815]: lxc_health: Link UP
May 9 00:14:09.890657 (udev-worker)[5929]: Network interface NamePolicy= disabled on kernel command line.
May 9 00:14:09.894967 systemd-networkd[1815]: lxc_health: Gained carrier May 9 00:14:11.272883 systemd-networkd[1815]: lxc_health: Gained IPv6LL May 9 00:14:11.454483 kubelet[3210]: I0509 00:14:11.454416 3210 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8qfv4" podStartSLOduration=10.454392305 podStartE2EDuration="10.454392305s" podCreationTimestamp="2025-05-09 00:14:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:14:06.88913113 +0000 UTC m=+97.618243772" watchObservedRunningTime="2025-05-09 00:14:11.454392305 +0000 UTC m=+102.183504948" May 9 00:14:13.297739 ntpd[1874]: Listen normally on 15 lxc_health [fe80::7487:c9ff:fe43:f0a6%14]:123 May 9 00:14:13.298221 ntpd[1874]: 9 May 00:14:13 ntpd[1874]: Listen normally on 15 lxc_health [fe80::7487:c9ff:fe43:f0a6%14]:123 May 9 00:14:17.922911 sshd[5132]: Connection closed by 139.178.68.195 port 44716 May 9 00:14:17.924107 sshd-session[5089]: pam_unix(sshd:session): session closed for user core May 9 00:14:17.928071 systemd-logind[1882]: Session 26 logged out. Waiting for processes to exit. May 9 00:14:17.929193 systemd[1]: sshd@25-172.31.17.17:22-139.178.68.195:44716.service: Deactivated successfully. May 9 00:14:17.931308 systemd[1]: session-26.scope: Deactivated successfully. May 9 00:14:17.932055 systemd-logind[1882]: Removed session 26. 
May 9 00:14:29.449862 containerd[1898]: time="2025-05-09T00:14:29.449815402Z" level=info msg="StopPodSandbox for \"ce60ae90a2efa3cc12a3db0147cc3a54ad38ea16e815027868572321dce84bce\"" May 9 00:14:29.450483 containerd[1898]: time="2025-05-09T00:14:29.449926440Z" level=info msg="TearDown network for sandbox \"ce60ae90a2efa3cc12a3db0147cc3a54ad38ea16e815027868572321dce84bce\" successfully" May 9 00:14:29.450483 containerd[1898]: time="2025-05-09T00:14:29.449942284Z" level=info msg="StopPodSandbox for \"ce60ae90a2efa3cc12a3db0147cc3a54ad38ea16e815027868572321dce84bce\" returns successfully" May 9 00:14:29.450483 containerd[1898]: time="2025-05-09T00:14:29.450379163Z" level=info msg="RemovePodSandbox for \"ce60ae90a2efa3cc12a3db0147cc3a54ad38ea16e815027868572321dce84bce\"" May 9 00:14:29.450483 containerd[1898]: time="2025-05-09T00:14:29.450420870Z" level=info msg="Forcibly stopping sandbox \"ce60ae90a2efa3cc12a3db0147cc3a54ad38ea16e815027868572321dce84bce\"" May 9 00:14:29.450769 containerd[1898]: time="2025-05-09T00:14:29.450483296Z" level=info msg="TearDown network for sandbox \"ce60ae90a2efa3cc12a3db0147cc3a54ad38ea16e815027868572321dce84bce\" successfully" May 9 00:14:29.457388 containerd[1898]: time="2025-05-09T00:14:29.457332775Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ce60ae90a2efa3cc12a3db0147cc3a54ad38ea16e815027868572321dce84bce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 9 00:14:29.457542 containerd[1898]: time="2025-05-09T00:14:29.457417955Z" level=info msg="RemovePodSandbox \"ce60ae90a2efa3cc12a3db0147cc3a54ad38ea16e815027868572321dce84bce\" returns successfully" May 9 00:14:29.458051 containerd[1898]: time="2025-05-09T00:14:29.458018875Z" level=info msg="StopPodSandbox for \"b275d77424b117d30426eb52189e44f654c594dc81e8fe21aa7772445b8152df\"" May 9 00:14:29.458204 containerd[1898]: time="2025-05-09T00:14:29.458117834Z" level=info msg="TearDown network for sandbox \"b275d77424b117d30426eb52189e44f654c594dc81e8fe21aa7772445b8152df\" successfully" May 9 00:14:29.458204 containerd[1898]: time="2025-05-09T00:14:29.458132693Z" level=info msg="StopPodSandbox for \"b275d77424b117d30426eb52189e44f654c594dc81e8fe21aa7772445b8152df\" returns successfully" May 9 00:14:29.458587 containerd[1898]: time="2025-05-09T00:14:29.458558797Z" level=info msg="RemovePodSandbox for \"b275d77424b117d30426eb52189e44f654c594dc81e8fe21aa7772445b8152df\"" May 9 00:14:29.458661 containerd[1898]: time="2025-05-09T00:14:29.458587119Z" level=info msg="Forcibly stopping sandbox \"b275d77424b117d30426eb52189e44f654c594dc81e8fe21aa7772445b8152df\"" May 9 00:14:29.458708 containerd[1898]: time="2025-05-09T00:14:29.458655162Z" level=info msg="TearDown network for sandbox \"b275d77424b117d30426eb52189e44f654c594dc81e8fe21aa7772445b8152df\" successfully" May 9 00:14:29.464299 containerd[1898]: time="2025-05-09T00:14:29.464256905Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b275d77424b117d30426eb52189e44f654c594dc81e8fe21aa7772445b8152df\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 9 00:14:29.464707 containerd[1898]: time="2025-05-09T00:14:29.464667408Z" level=info msg="RemovePodSandbox \"b275d77424b117d30426eb52189e44f654c594dc81e8fe21aa7772445b8152df\" returns successfully" May 9 00:14:55.482341 systemd[1]: cri-containerd-ea911f18bddbe7e1c57b17b76cedc3a8b49f66319787abc1b37f62a3239098c6.scope: Deactivated successfully. May 9 00:14:55.482582 systemd[1]: cri-containerd-ea911f18bddbe7e1c57b17b76cedc3a8b49f66319787abc1b37f62a3239098c6.scope: Consumed 3.157s CPU time, 24.9M memory peak, 0B memory swap peak. May 9 00:14:55.511876 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea911f18bddbe7e1c57b17b76cedc3a8b49f66319787abc1b37f62a3239098c6-rootfs.mount: Deactivated successfully. May 9 00:14:55.524157 containerd[1898]: time="2025-05-09T00:14:55.523940827Z" level=info msg="shim disconnected" id=ea911f18bddbe7e1c57b17b76cedc3a8b49f66319787abc1b37f62a3239098c6 namespace=k8s.io May 9 00:14:55.524157 containerd[1898]: time="2025-05-09T00:14:55.523992617Z" level=warning msg="cleaning up after shim disconnected" id=ea911f18bddbe7e1c57b17b76cedc3a8b49f66319787abc1b37f62a3239098c6 namespace=k8s.io May 9 00:14:55.524157 containerd[1898]: time="2025-05-09T00:14:55.524001166Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:14:55.982586 kubelet[3210]: I0509 00:14:55.982531 3210 scope.go:117] "RemoveContainer" containerID="ea911f18bddbe7e1c57b17b76cedc3a8b49f66319787abc1b37f62a3239098c6" May 9 00:14:55.989670 containerd[1898]: time="2025-05-09T00:14:55.989628719Z" level=info msg="CreateContainer within sandbox \"f10468699a2c5cc67809f08138561b1efc5cb37172896323ce8a5df32d114d1f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" May 9 00:14:56.015204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2894267960.mount: Deactivated successfully. 
May 9 00:14:56.020237 containerd[1898]: time="2025-05-09T00:14:56.020052121Z" level=info msg="CreateContainer within sandbox \"f10468699a2c5cc67809f08138561b1efc5cb37172896323ce8a5df32d114d1f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"469af10551126f4f6908dea76c0d0a7901f5315913df9d33004ee64b06f4de4d\"" May 9 00:14:56.021005 containerd[1898]: time="2025-05-09T00:14:56.020889739Z" level=info msg="StartContainer for \"469af10551126f4f6908dea76c0d0a7901f5315913df9d33004ee64b06f4de4d\"" May 9 00:14:56.064605 systemd[1]: Started cri-containerd-469af10551126f4f6908dea76c0d0a7901f5315913df9d33004ee64b06f4de4d.scope - libcontainer container 469af10551126f4f6908dea76c0d0a7901f5315913df9d33004ee64b06f4de4d. May 9 00:14:56.130856 containerd[1898]: time="2025-05-09T00:14:56.130800985Z" level=info msg="StartContainer for \"469af10551126f4f6908dea76c0d0a7901f5315913df9d33004ee64b06f4de4d\" returns successfully" May 9 00:15:00.324471 systemd[1]: cri-containerd-715332a1ca124db750d776f880e1946f5302c6c059ce5bf1de010fea5beceefe.scope: Deactivated successfully. May 9 00:15:00.326944 systemd[1]: cri-containerd-715332a1ca124db750d776f880e1946f5302c6c059ce5bf1de010fea5beceefe.scope: Consumed 1.732s CPU time, 18.3M memory peak, 0B memory swap peak. May 9 00:15:00.436762 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-715332a1ca124db750d776f880e1946f5302c6c059ce5bf1de010fea5beceefe-rootfs.mount: Deactivated successfully. 
May 9 00:15:00.457158 containerd[1898]: time="2025-05-09T00:15:00.457072386Z" level=info msg="shim disconnected" id=715332a1ca124db750d776f880e1946f5302c6c059ce5bf1de010fea5beceefe namespace=k8s.io May 9 00:15:00.457158 containerd[1898]: time="2025-05-09T00:15:00.457133771Z" level=warning msg="cleaning up after shim disconnected" id=715332a1ca124db750d776f880e1946f5302c6c059ce5bf1de010fea5beceefe namespace=k8s.io May 9 00:15:00.457158 containerd[1898]: time="2025-05-09T00:15:00.457148059Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:15:00.995133 kubelet[3210]: I0509 00:15:00.995096 3210 scope.go:117] "RemoveContainer" containerID="715332a1ca124db750d776f880e1946f5302c6c059ce5bf1de010fea5beceefe" May 9 00:15:01.000398 containerd[1898]: time="2025-05-09T00:15:01.000350268Z" level=info msg="CreateContainer within sandbox \"30d3393cd719107b466ae9fdba0c473fd4e3394470d48df86a2138f0791c8b5c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" May 9 00:15:01.030865 containerd[1898]: time="2025-05-09T00:15:01.030810319Z" level=info msg="CreateContainer within sandbox \"30d3393cd719107b466ae9fdba0c473fd4e3394470d48df86a2138f0791c8b5c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"136056e03a907d35da553fe800f145ff469bc9162518568e60184d28d32758ba\"" May 9 00:15:01.031458 containerd[1898]: time="2025-05-09T00:15:01.031422303Z" level=info msg="StartContainer for \"136056e03a907d35da553fe800f145ff469bc9162518568e60184d28d32758ba\"" May 9 00:15:01.074684 systemd[1]: Started cri-containerd-136056e03a907d35da553fe800f145ff469bc9162518568e60184d28d32758ba.scope - libcontainer container 136056e03a907d35da553fe800f145ff469bc9162518568e60184d28d32758ba. 
May 9 00:15:01.175337 containerd[1898]: time="2025-05-09T00:15:01.175098695Z" level=info msg="StartContainer for \"136056e03a907d35da553fe800f145ff469bc9162518568e60184d28d32758ba\" returns successfully" May 9 00:15:02.071541 kubelet[3210]: E0509 00:15:02.067360 3210 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-17?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" May 9 00:15:12.069110 kubelet[3210]: E0509 00:15:12.068486 3210 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-17?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"